and $\alpha$, the \textbf{Fusion Module} computes the fused grid $V_t$ at each voxel location.}
\label{fig:architecture}
\end{figure*}
\boldparagraph{Overview.} Given multiple noisy depth streams $D^i_t : \mathbb{R}^2 \rightarrow \mathbb{R}$ from different sensors with known camera calibration, \emph{i.e.}~extrinsics $P^i_t \in \mathbb{SE}(3)$ and intrinsics $K^i \in \mathbb{R}^{3 \times 3}$, our method integrates each depth frame at time $t \in \mathbb{N}$ from sensor $i \in \{1, \ 2\}$ into globally consistent shape $S^i_t$ and feature $F^i_t$ grids.
Through a series of operations, we then decode
$S^i_t$ and $F^i_t$ into a fused TSDF grid $V_t \in \mathbb{R}^{X \times Y \times Z}$, which can be converted into a mesh
with marching cubes~\cite{lorensen1987marching}.
Our overall framework can be split into four components (see Fig.~\ref{fig:architecture}).
First, the \textbf{Shape Integration Module} integrates depth frames $D^i_t$ successively into the zero-initialized shape representation
$S^i_t = \{V^i_t, W^i_t\}$.
$S^i_t$ consists of a TSDF grid $V^i_t \in \mathbb{R}^{X \times Y \times Z}$ and a corresponding weight
grid $W^i_t \in \mathbb{N}^{X \times Y \times Z}$, which keeps track of the number of updates to each voxel.
In parallel, the \textbf{Feature Integration Layer} extracts features from the depth maps using a 2D feature network $\mathcal{F}^i : D^i_t \in \mathbb{R}^{W \times H \times 1} \rightarrow f^i_t \in \mathbb{R}^{W \times H \times n}$, with $n$ being the feature dimension.
We use a separate feature network per sensor to learn sensor-specific, depth-dependent statistics such as shape and edge information.
The extracted features $f^i_t$ are then integrated into the zero-initialized feature grid $F^i_t \in \mathbb{R}^{X \times Y \times Z \times n}$.
Next, $S^i_t$ and $F^i_t$
are combined and decoded through a 3D \textbf{Weighting Network} $\mathcal{G}$ into a location-dependent
sensor weighting $\alpha \in [0, 1]$.
Together with $S^i_t$ and $\alpha$, the \textbf{Fusion Module} fuses the information into
$V_t$ at each voxel location.
Key to our approach is the separation of per-sensor information into different representations, along with the successive aggregation of shapes and features in the 3D domain.
This strategy enables $\mathcal{G}$ to learn a fusion strategy for the incoming multi-sensor depth stream.
Our method fuses the sensors in a spatially dependent manner, ranging from a smooth combination to a hard selection, as illustrated in Fig.~\ref{fig:overview}.
Our scheme hence avoids the need for post-hoc outlier filtering by thresholding with the weight $W_t^i$, which is difficult to tune and prone to reduce scene completion~\cite{Weder2020RoutedFusionLR}.
Another popular outlier filtering technique is free-space carving\footnote{Enforcing free space for voxels along the ray from the camera to the surface~\cite{newcombe2011kinectfusion}. Note that outliers behind surfaces are not removed with this technique.}, but this can be computationally expensive and is not required by our method. Instead, we use the learned $\alpha$ as part of an outlier filter at test time, requiring no manual tuning.
Next, we describe each component in detail.
\ifeccv
\begin{wrapfigure}[20]{R}{0.55\linewidth}
\vspace{-\intextsep}
\centering
\scriptsize
\setlength{\tabcolsep}{0pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccc}
& \multicolumn{3}{l}{\rule[3pt]{0.5cm}{0.4pt} 25 \rule[3pt]{1.5cm}{0.4pt} 75 \rule[3pt]{1.525cm}{0.4pt} 120 $\xrightarrow{\hspace*{0.1cm}}$ Time} \\
\rotatebox[origin=c]{90}{\makecell{Online \\ Sensor Fusion}} & \includegraphics[align=c, width=.24\linewidth]{figures/overview/25frames_fused_small.jpg} & \includegraphics[align=c, width=.32\linewidth]{figures/overview/75frames_fused_small.jpg} & \includegraphics[align=c, width=.345\linewidth]{figures/overview/120frames_fused_small.jpg} \\
\rotatebox[origin=c]{90}{\makecell{Sensor \\ Weighting}} & \includegraphics[align=c, width=.24\linewidth]{figures/overview/25frames_weighting_small.jpg} & \includegraphics[align=c, width=.32\linewidth]{figures/overview/75frames_weighting_small.jpg} & \includegraphics[align=c, width=.345\linewidth]{figures/overview/120frames_weighting_small.jpg} \\
& \multicolumn{3}{c}{\includegraphics[align=c, width=0.91\linewidth]{figures/colorbars/overview_colorbar.PNG}}
\end{tabular}
\begin{tabular}{cccccccc}
& ToF & Stereo & Stereo & ToF & Stereo & ToF & ToF \\
\rotatebox[origin=c]{90}{\makecell{Depth \\ Stream}} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_0.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/stereo_26.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/stereo_49.jpg}& \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_65.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/stereo_85.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_103.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_120.jpg}\\
$t=$ & $0$ & $26$ & $49$ & $65$ & $85$ & $103$ & $120$ \\
\end{tabular}
\caption{\textbf{Overview.} Left to right: Sequential fusion of a multi-sensor noisy depth stream. Our method integrates each depth frame at time $t$ and produces a sensor weighting which fuses the sensors in a spatially dependent manner. For example, areas in yellow denote high trust in the ToF sensor.}
\label{fig:overview}
\end{wrapfigure}
\fi
\boldparagraph{(a) Shape Integration Module.}
For each depth map $D^i_t$ and pixel, a full perspective unprojection of the depth into the world coordinate frame yields a 3D point $\textbf{x}_w \in \mathbb{R}^3$.
Along each ray from the camera center, centered at $\textbf{x}_w$, we sample $T$ points uniformly over a predetermined distance $l$.
The coordinates are then converted
to the voxel space and a local shape grid $S^{i, *}_{t-1}$ is extracted from $S^i_{t-1}$ through nearest neighbor extraction.
To incrementally update the local shape grid, we follow the moving average update scheme of TSDF Fusion~\cite{curless1996volumetric}. For numerical stability, the weights are clipped at a maximum weight $\omega_\textrm{max}$.
For more details, see the supplementary material.
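For illustration, the moving average update can be written as the following minimal PyTorch-style sketch (not our exact implementation; variable names are illustrative and \texttt{tsdf\_obs} denotes the new truncated signed distance observations at the sampled voxels):
\begin{verbatim}
# Minimal sketch of the moving-average TSDF update of TSDF Fusion,
# applied to the local grids extracted from S^i_{t-1}.
import torch

def tsdf_update(V_local, W_local, tsdf_obs, w_max=500):
    V_new = (W_local * V_local + tsdf_obs) / (W_local + 1)
    W_new = torch.clamp(W_local + 1, max=w_max)  # clip for stability
    return V_new, W_new
\end{verbatim}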
\boldparagraph{(b) Feature Integration Layer.}
Each depth map $D^i_t$ is passed through a 2D network $\mathcal{F}^i$ to extract context information $f^i_t$, which can be useful during the sensor fusion process. When fusing sensors based on the stereo matching principle, we provide the RGB frame as additional input channels to $\mathcal{F}^i$.
The network is fully convolutional and comprises 7 network blocks, each consisting of the following operations,
1) a $3 \times 3$ convolution with zero padding $1$ and input channel dimension $4$ and output dimension $4$ (except the first block which takes $1$ channel as input when only depth is provided),
2) a $\mathop{\mathrm{tanh}}$ activation,
3) another $3 \times 3$ convolution with zero padding $1$ outputting $4$ channels and
4) a $\mathop{\mathrm{tanh}}$ activation.
The output of each of the first six blocks is added to the output of the subsequent block via a residual connection.
Finally, we normalize the feature vectors at each pixel location and concatenate the input depth.
Next, we repeat the features $T$ times along the direction of the viewing ray from the camera, $f^i_t \xrightarrow[\text{T times}]{\text{Repeat}}f^{i, T}_t \in \mathbb{R}^{W \times H \times T \times n}$.
The local feature grid $F^{i,*}_{t-1}$ is then updated using the precomputed update indices from the Shape Integration Module with a moving average update: $F^{i,*}_t = \frac{W^{i,*}_{t-1}F^{i,*}_{t-1} + f^{i,T}_t}{W^{i,*}_{t-1} + 1} .$
For all update locations the grid $F^i_t$ is replaced with $F^{i, *}_t$.
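A minimal PyTorch sketch of $\mathcal{F}^i$ is given below (illustrative only; the exact residual wiring, channel handling and the assumption that depth is the first input channel follow our reading of the description above):
\begin{verbatim}
# Minimal sketch of the per-sensor 2D feature network F^i.
# 4 learned feature channels + concatenated depth -> n = 5.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    def __init__(self, in_ch=1, mid_ch=4, n_blocks=7):
        super().__init__()
        self.blocks = nn.ModuleList()
        for k in range(n_blocks):
            c_in = in_ch if k == 0 else mid_ch
            self.blocks.append(nn.Sequential(
                nn.Conv2d(c_in, mid_ch, 3, padding=1), nn.Tanh(),
                nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.Tanh()))

    def forward(self, x):            # x: (B, 1, H, W), or (B, 4, H, W) with RGB
        depth = x[:, :1]             # assumption: depth is the first channel
        prev = None
        for block in self.blocks:
            y = block(x)
            if prev is not None:     # residual from the previous block
                y = y + prev
            prev, x = y, y
        x = F.normalize(x, dim=1)    # per-pixel unit-norm features
        return torch.cat([x, depth], dim=1)   # append input depth -> n channels
\end{verbatim}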
\boldparagraph{(c) Weighting Network.}
The task of the weighting network $\mathcal{G}$ is to predict the optimal fusion strategy of the surface hypotheses $V^i_t$.
The input to the network is prepared by first concatenating the features $F^i_t$ and the $\mathop{\mathrm{tanh}}$-transformed weight counters $W^i_t$ and second by concatenating the resulting vectors across the sensors.
Due to memory constraints, the entire scene does not fit onto the GPU, and hence we use a sliding-window approach at test time to feed $\mathcal{G}$ chunks of data.
First, the minimum bounding grid of the measured scene (\emph{i.e.}~where $W^i_t > 0$) is extracted from the global grids.
Then, the extracted grid is processed in chunks of size $d \times d \times d$ by $\mathcal{G} : \mathbb{R}^{d \times d \times d \times 2(n+1)} \rightarrow \mathbb{R}^{d \times d \times d \times 1}$, yielding a sensor weighting $\alpha \in [0, 1]$ per voxel.
To avoid edge effects, we use a stride of $d/2$ and update the central chunk of side length $d/2$.
The architecture of $\mathcal{G}$ combines 2 layers of 3D-convolutions with kernel size 3 and replication padding 1 interleaved with $\mathop{\mathrm{ReLU}}$ activations.
The first layer outputs $32$ and the second layer $16$ channels.
Finally, the $16$-dimensional features are decoded into the sensor weighting $\alpha$ by a $1\!\times\!1\!\times\!1$ convolution followed by a sigmoid activation.
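A minimal PyTorch sketch of $\mathcal{G}$ is given below (illustrative only; the replication padding is expressed via \texttt{padding\_mode}):
\begin{verbatim}
# Minimal sketch of the 3D weighting network G.
# Input: per-voxel features + tanh(W) of both sensors, i.e. 2*(n+1) channels.
import torch.nn as nn

class WeightingNet(nn.Module):
    def __init__(self, n=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2 * (n + 1), 32, 3, padding=1, padding_mode='replicate'),
            nn.ReLU(),
            nn.Conv3d(32, 16, 3, padding=1, padding_mode='replicate'),
            nn.ReLU(),
            nn.Conv3d(16, 1, 1),   # 1x1x1 decoding convolution
            nn.Sigmoid())          # alpha in [0, 1]

    def forward(self, x):          # x: (B, 2*(n+1), d, d, d)
        return self.net(x)
\end{verbatim}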
\boldparagraph{(d) Fusion Module.} The task of the fusion module is to combine $\alpha$ with the shapes $S^i_t$.
In the following, we define the set of voxels where only sensor 1 integrates as $C^1 = \{W^1_t > 0, \ W^2_{t-1} = 0\}$, the set where only sensor 2 integrates as $C^2 = \{W^1_t = 0, \ W^2_{t-1} > 0\}$ and the set where both sensors integrate as $C^{12} = \{W^1_t > 0, \ W^2_{t-1} > 0\}$.
Let us also introduce $\alpha_1 = \alpha$ and $\alpha_2 = 1 - \alpha$.
The fusion module computes the fused grid $V_t$ as follows:
\begin{equation}
V_t =
\begin{cases}
\alpha_1 V_t^1 + \alpha_2V_{t-1}^2 & \mbox{if } C^{12} \\
V_t^1 & \mbox{if } C^1\\
V_{t-1}^2 & \mbox{if } C^2
\end{cases}
\label{eq:fusion}
\end{equation}
Depending on the voxel set, $V_t$ is computed either as a weighted average of the two surface hypotheses or by selecting one of them.
With only one sensor observation, a weighted average would corrupt the result.
Hence, the single observed surface is selected.
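A minimal sketch of Eq.~\eqref{eq:fusion} with boolean voxel masks is given below (illustrative; \texttt{V2} and \texttt{W2} correspond to $V^2_{t-1}$ and $W^2_{t-1}$):
\begin{verbatim}
# Minimal sketch of the per-voxel fusion rule (Eq. fusion).
import torch

def fuse(V1, W1, V2, W2, alpha):
    C1  = (W1 > 0) & (W2 == 0)    # only sensor 1 integrated
    C2  = (W1 == 0) & (W2 > 0)    # only sensor 2 integrated
    C12 = (W1 > 0) & (W2 > 0)     # both sensors integrated
    V = torch.zeros_like(V1)
    V[C12] = alpha[C12] * V1[C12] + (1.0 - alpha[C12]) * V2[C12]
    V[C1] = V1[C1]
    V[C2] = V2[C2]
    return V
\end{verbatim}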
At test time, we additionally apply a learned \textbf{Outlier Filter} which utilizes
$\alpha_i$ and
$W^i_t$.
The filter is formulated for sensors 1 and 2 as
\begin{equation}
\hat{W}_t^1 = \1{C^1, \ \alpha_1 > 0.5}W_t^1 ,\quad \hat{W}_{t-1}^2 = \1{C^2, \ \alpha_2 > 0.5}W_{t-1}^2 ,
\label{eq:outlier_filter}
\end{equation}
where $\1{.}$ denotes the indicator function\footnote{See supplementary material for a definition.}.
When only one sensor has been integrated at a voxel, we remove the observation if $\alpha_i$, which can be interpreted as a confidence, is below $0.5$.
This is done by setting the weight counter to $0$.
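A minimal sketch of the filter in Eq.~\eqref{eq:outlier_filter} is given below (illustrative; it only modifies voxels observed by a single sensor):
\begin{verbatim}
# Minimal sketch of the test-time outlier filter: single-sensor
# observations with confidence <= 0.5 get their weight counter reset to 0.
import torch

def filter_outliers(W1, W2, alpha):
    C1 = (W1 > 0) & (W2 == 0)
    C2 = (W1 == 0) & (W2 > 0)
    W1_hat = torch.where(C1 & (alpha <= 0.5), torch.zeros_like(W1), W1)
    W2_hat = torch.where(C2 & ((1.0 - alpha) <= 0.5), torch.zeros_like(W2), W2)
    return W1_hat, W2_hat
\end{verbatim}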
\ifeccv
\else
\begin{figure}[t]
\centering
\footnotesize
\setlength{\tabcolsep}{0pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccc}
& \multicolumn{3}{l}{\rule[3pt]{0.8cm}{0.4pt} 25 \rule[3pt]{1.85cm}{0.4pt} 75 \rule[3pt]{2.2cm}{0.4pt} 120 $\xrightarrow{\hspace*{0.125cm}}$ Time} \\
\rotatebox[origin=c]{90}{\makecell{Online \\ Fusion}} & \includegraphics[align=c, width=.24\linewidth]{figures/overview/25frames_fused_small.jpg} & \includegraphics[align=c, width=.32\linewidth]{figures/overview/75frames_fused_small.jpg} & \includegraphics[align=c, width=.345\linewidth]{figures/overview/120frames_fused_small.jpg} \\
\rotatebox[origin=c]{90}{\makecell{Sensor \\ Weighting}} & \includegraphics[align=c, width=.24\linewidth]{figures/overview/25frames_weighting_small.jpg} & \includegraphics[align=c, width=.32\linewidth]{figures/overview/75frames_weighting_small.jpg} & \includegraphics[align=c, width=.345\linewidth]{figures/overview/120frames_weighting_small.jpg} \\
& \multicolumn{3}{c}{\includegraphics[align=c, width=0.91\linewidth]{figures/colorbars/overview_colorbar.PNG}}
\end{tabular}
\begin{tabular}{cccccccc}
& ToF & Stereo & Stereo & ToF & Stereo & ToF & ToF \\
\rotatebox[origin=c]{90}{\makecell{Depth \\ Stream}} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_0.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/stereo_26.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/stereo_49.jpg}& \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_65.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/stereo_85.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_103.jpg} & \includegraphics[align=c, width=.13\linewidth]{figures/overview/tof_120.jpg}\\
$t=$ & $0$ & $26$ & $49$ & $65$ & $85$ & $103$ & $120$ \\
\end{tabular}
\caption{\textbf{Overview.} Left to right: Sequential fusion of a multi-sensor noisy depth stream. Our method integrates each depth frame at time $t$ and produces a sensor weighting which fuses the sensors in a spatially dependent manner.}
\label{fig:overview}
\end{figure}
\fi
\boldparagraph{Loss Function.}
The full pipeline is trained end-to-end using the overall loss
\begin{equation}
\mathcal{L} = \mathcal{L}_{f} + \lambda_1 \sum_{i=1}^2\mathcal{L}_{C^i}^{in} + \lambda_2 \sum_{i=1}^2\mathcal{L}_{C^i}^{out}.
\label{eq:loss}
\end{equation}
The term $\mathcal{L}_{f}$ computes the mean $L_1$ error to the ground truth TSDF masked by $C^{12}$ \eqref{eq:lf}.
To supervise the voxel sets $C^{1}$ and $C^{2}$, we introduce two additional terms, which penalize $L_1$ deviations from the optimal $\alpha$.
The purpose of these terms is to provide a training signal for the outlier filter.
If the $L_1$ TSDF error is smaller than some threshold $\eta$, the observation is deemed to be an inlier, and the corresponding confidence $\alpha_i$ should be $1$, otherwise $0$.
The loss is computed as the mean $L_1$ error to the optimal $\alpha$:
\ifeccv
\begin{align}
\mathcal{L}_{f} = \frac{1}{N_{C^{12}}}\sum \1{C^{12}}|V_t & - V^{GT}|_1, \
\mathcal{L}_{C^i}^{in} = \frac{1}{N_{C^i}^{in}} \sum \1{C^i, \ |V_t-V^{GT}|_1 < \eta}| \alpha_i - 1|_1, \nonumber \\
%
\mathcal{L}_{C^i}^{out} &= \frac{1}{N_{C^i}^{out}} \sum \1{C^i, \ |V_t-V^{GT}|_1 > \eta}| \alpha_i|_1,
\label{eq:lf}
\end{align}
\else
\begin{align}
\mathcal{L}_{f} &= \frac{1}{N_{C^{12}}}\sum \1{C^{12}}|V_t - V^{GT}|_1, \label{eq:lf} \\
%
\mathcal{L}_{C^i}^{in} &= \frac{1}{N_{C^i}^{in}} \sum \1{C^i, \ |V_t-V^{GT}|_1 < \eta}| \alpha_i - 1|_1, \label{eq:lin} \\
%
\mathcal{L}_{C^i}^{out} &= \frac{1}{N_{C^i}^{out}} \sum \1{C^i, \ |V_t-V^{GT}|_1 > \eta}| \alpha_i|_1,
\label{eq:lout}
\end{align}
\fi
where the normalization factors are defined as
\begin{align}
N_{C^{12}} = \sum &\1{C^{12}}, \ N_{C^i}^{in} = \sum \1{C^i, \ |V_t - V^{GT}|_1 < \eta}, \nonumber \\ N_{C^i}^{out} &= \sum \1{C^i, \ |V_t - V^{GT}|_1 > \eta} .
\end{align}
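A minimal sketch of the loss computation for one training chunk is given below (illustrative only; the masks follow the voxel-set definitions of the Fusion Module and the default hyperparameter values are those reported in the implementation details):
\begin{verbatim}
# Minimal sketch of the loss of Eq. (loss) on one chunk.
import torch

def senfunet_loss(V, V_gt, alpha, C1, C2, C12,
                  eta=0.04, lam1=1/60, lam2=1/600):
    err = (V - V_gt).abs()
    loss = err[C12].mean() if C12.any() else V.new_zeros(())
    for Ci, a_i in ((C1, alpha), (C2, 1.0 - alpha)):   # alpha_1, alpha_2
        inliers, outliers = Ci & (err < eta), Ci & (err > eta)
        if inliers.any():
            loss = loss + lam1 * (a_i[inliers] - 1.0).abs().mean()
        if outliers.any():
            loss = loss + lam2 * a_i[outliers].abs().mean()
    return loss
\end{verbatim}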
\boldparagraph{Training Forward Pass.}
After the integration of a new depth frame $D^i_t$ into the shape and feature grids, the update indices from the Shape Integration Module are used to compute the minimum bounding box of update voxels in $S^i_t$ and $F^i_t$.
The update box varies in size between frames and cannot always fit on the GPU.
Due to this and for the sake of training efficiency, we extract a $d \times d \times d$ chunk from within the box volume.
The chunk location is randomly selected using a uniform distribution along each axis of the bounding box.
If the bounding box volume is smaller along any dimension than $d$, the chunk shrinks to the minimum size along the affected dimension.
To maximize the number of voxels that are used to train the networks $\mathcal{F}^i$, we sample a chunk until we find one with at least 2000 update indices.
At most, we make 600 attempts.
If not enough valid indices are found, the next frame is integrated.
The update indices in the chunk are finally used to mask the loss.
We randomly reset the shape and feature grids with a probability of 0.01 at each frame integration to improve training robustness.
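A minimal sketch of the chunk sampling during training is given below (illustrative; \texttt{update\_indices} is an $N \times 3$ array of updated voxel coordinates and \texttt{bbox\_min}, \texttt{bbox\_max} delimit the bounding box of updated voxels):
\begin{verbatim}
# Minimal sketch of the random chunk sampling used during training.
import numpy as np

def sample_chunk(bbox_min, bbox_max, update_indices, d=64,
                 min_valid=2000, max_tries=600):
    extent = bbox_max - bbox_min
    size = np.minimum(extent, d)          # shrink along too-small dimensions
    for _ in range(max_tries):
        offset = bbox_min + (np.random.rand(3) * (extent - size + 1)).astype(int)
        inside = np.all((update_indices >= offset) &
                        (update_indices < offset + size), axis=1)
        if inside.sum() >= min_valid:
            return offset, size, update_indices[inside]
    return None                           # too few valid indices: skip to next frame
\end{verbatim}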
\section{Experiments}
We first describe our experimental setup and then evaluate our method against state-of-the-art online depth fusion methods on the Replica dataset as well as on the real-world CoRBS and Scene3D datasets.
All reported results are averages over the respective test scenes.
We provide further experiments and details in the supplementary material.
\boldparagraph{Implementation Details.}
We use $\omega_\textrm{max} = 500$ and extract $T = 11$ points along the update band $l = 0.1$ m. We store $n = 5$ features at each voxel location and use a chunk side length of $d = 64$.
For the loss~\eqref{eq:loss}, we set $\lambda_1 = 1/60$, $\lambda_2 = 1/600$ and $\eta = 0.04$ m.
In total, the networks of our model comprise $27.7$K parameters, where $24.3$K are designated to $\mathcal{G}$ and the remaining parameters are split equally between $\mathcal{F}^1$ and $\mathcal{F}^2$.
For our method and all baselines, the image size is $W=H=256$, the voxel size is $0.01$ m and we mask the 10 pixel border of all depth maps to avoid edge artifacts, \emph{i.e.}~pixels belonging to the mask are not integrated into 3D.
Since our TSDF updates cannot be larger than $0.05$ m, we truncate the ground truth TSDF grid at $l/2 = 0.05$ m.
\boldparagraph{Evaluation Metrics.} The TSDF grids are evaluated using the Mean Absolute Distance
(MAD), Mean Squared Error (MSE), Intersection over Union (IoU) and Accuracy (Acc.).
The meshes, produced by marching cubes~\cite{lorensen1987marching} from the TSDF grids, are evaluated using the F-score which is the harmonic mean of the Precision (P) and Recall (R).
\boldparagraph{Baseline Methods.}
Since there is no other multi-sensor online 3D reconstruction method that addresses the same problem, we define our own baselines by generalizing single sensor fusion pipelines to multiple sensors.
TSDF Fusion~\cite{curless1996volumetric} is the gold standard for fast, dense mapping of posed depth maps.
It generalizes to the multi-sensor setting effortlessly by integrating all depth frames into the same TSDF grid at runtime.
RoutedFusion~\cite{Weder2020RoutedFusionLR} extends TSDF Fusion by learning the TSDF mapping.
We generalize RoutedFusion to multiple sensors by feeding all depth frames into the same TSDF grid, but each sensor is assigned a separate fusion network to account for sensor-dependent noise\footnote{Additionally, we tweak the original implementation to get rid of outliers. See supplementary material.}.
NeuralFusion~\cite{weder2021neuralfusion} extends RoutedFusion for better outlier handling, but despite efforts and help from the authors, the network did not converge during training due to heavy loss oscillations caused by integrating different sensors. DI-Fusion~\cite{huang2021di} learns the scene representation and predicts the signed distance value as well as the uncertainty $\sigma$ per voxel. We use the provided model from the authors and integrate all frames from both sensors into the same volumetric grid.
In the following, we refer to each multi-sensor baseline by using the corresponding single sensor name.
For additional comparison, when time synchronized sensors with ground truth depth are available, we train a so-called ``Early Fusion'' baseline by fusing the 2D depth frames of both sensors.
The fusion is performed with a modified version of the 2D denoising network proposed by Weder~\emph{et al.}~\cite{Weder2020RoutedFusionLR} followed by TSDF Fusion to attain the 3D model (see supplementary material).
This baseline should be interpreted as a light-weight alternative to our proposed SenFuNet, but assumes synchronized sensors, which SenFuNet does not.
Finally, for the single-sensor results, we evaluate the TSDF grids $V^i_t$.
To make the comparisons fair, we do not use weight counter thresholding as a post-processing outlier filter for any method. For DI-Fusion, we filter outliers by thresholding the learned voxel uncertainty, using the default value provided in the implementation.
\subsection{Experiments on the Replica Dataset}
The Replica dataset~\cite{straub2019replica} comprises high-quality 3D reconstructions of a variety of indoor scenes. We collect data from Replica to create a multi-sensor dataset suitable for depth map fusion.
To prepare ground truth signed distance grids, we first make the 3D meshes watertight using screened Poisson surface reconstruction~\cite{kazhdan2013screened}.
The meshes are then converted to signed distance grids using a modified version of the mesh-to-sdf library\footnote{\normalfont\url{https://github.com/marian42/mesh_to_sdf}} to accommodate non-cubic voxel grids.
Ground truth depth and an RGB stereo pair are extracted using AI Habitat~\cite{habitat19iccv} along random trajectories.
In total, we collect $92698$ frames.
We use 7 train and 3 test scenes.
We simulate a depth sensor by adding noise to the ground truth depth of the left stereo view.
Correspondingly, a depth map for the left stereo view can be predicted from the RGB stereo pairs using (optionally multi-view) stereo algorithms.
In the following, we construct two sensor combinations and evaluate our model.
\ifeccv
\begin{table}[tb]
\setlength{\belowcaptionskip}{0pt}
\begin{subtable}{.495\columnwidth}
\centering
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28.3mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
PSMNet~\cite{chang2018pyramid} & 7.30 & 1.95 & 0.664 & 83.23 & 56.20 & 43.10 & 81.34 \\
ToF~\cite{handa2014benchmark} & 7.48 & 1.99 & 0.664 & 83.65 & 58.52 & 45.84 & 84.85 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 8.20 & 2.11 & 0.669 & 84.09 & 49.58 & 35.33 & \textbf{85.44} \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 5.62 & 1.66 & 0.735 & 87.22 & 61.08 & 49.51 & 79.92 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 48.39 & 34.24 & 85.29 \\
\textbf{SenFuNet{} (Ours)} & \textbf{4.65} & \textbf{1.54} & \textbf{0.753} & \textbf{88.05} & \textbf{69.29} & \textbf{62.05} & 79.81 \\
\end{tabular}
}
\subcaption{Without depth denoising.}
\label{tab:tof_psmnet_no_denoising}
\end{subtable}
\begin{subtable}{0.495\columnwidth}
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28.3mm]{Model}{Metric}}& \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
PSMNet~\cite{chang2018pyramid} & 6.35 & 1.77 & 0.673 & 84.54 & 60.28 & 48.26 & 80.41 \\
ToF~\cite{handa2014benchmark} & 5.08 & 1.58 & 0.709 & 87.32 & 68.93 & 59.01 & 83.08 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 6.40 & 1.80 & 0.681 & 85.31 & 52.93 & 38.95 & \bf 84.60 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 6.04 & 1.68 & 0.644 & 85.10 & 62.67 & 51.75 & 79.52 \\
Early Fusion & 6.40 & 1.40 & 0.760 & 89.02 & 74.60 & 67.46 & 83.47 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 55.66 & 41.49 & 85.33 \\
\textbf{SenFuNet{} (Ours)} & \textbf{3.49} & \textbf{1.31} & \textbf{0.761} & \textbf{89.61} & \textbf{76.47} & \textbf{73.58} & 79.77 \\
\end{tabular}
}
\subcaption{With depth denoising}
\label{tab:tof_psmnet_denoising}
\end{subtable}
\caption{\textbf{Replica Dataset. ToF\bplus{}PSMNet Fusion.} (a) Our method outperforms the baselines as well as both of the sensor inputs and sets a new state-of-the-art for multi-sensor online depth fusion.
(b) The denoising network mitigates outliers along planar regions, compare to Tab.~\ref{tab:tof_psmnet_no_denoising}. Our method even outperforms the Early Fusion baseline, which assumes synchronized sensors.}
\label{tab:tof_psmnet}
\end{table}
\else
\begin{table}[tb]
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28.3mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
PSMNet~\cite{chang2018pyramid} & 7.30 & 1.95 & 0.664 & 83.23 & 56.20 & 43.10 & 81.34 \\
ToF~\cite{handa2014benchmark} & 7.48 & 1.99 & 0.664 & 83.65 & 58.52 & 45.84 & 84.85 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 8.20 & 2.11 & 0.669 & 84.09 & 49.58 & 35.33 & \textbf{85.44} \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 5.62 & 1.66 & 0.735 & 87.22 & 61.08 & 49.51 & 79.92 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 48.39 & 34.24 & 85.29 \\
\textbf{SenFuNet{} (Ours)} & \textbf{4.65} & \textbf{1.54} & \textbf{0.753} & \textbf{88.05} & \textbf{69.29} & \textbf{62.05} & 79.81 \\
\end{tabular}
}
\caption{\textbf{Replica Dataset. ToF\bplus{}PSMNet Fusion without denoising.} Our method outperforms the baselines as well as both of the sensor inputs and sets a new state-of-the-art for multi-sensor online depth fusion.}
\label{tab:tof_psmnet_no_denoising}
\end{table}
\fi
\ifeccv
\begin{figure*}[t]
\centering
{\tiny
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\newcommand{0.4}{0.153}
\begin{tabular}{ccccccccc}
\rotatebox[origin=c]{90}{Office 4} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/model_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/tof_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/psmnet_stereo_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/tsdf_middle_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/fused.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/weighting.jpg} & \multirow{12}{*}[20.0pt]{\includegraphics[width=.0372\linewidth]{figures/colorbars/office_4_colorbar.png}} \\
\rotatebox[origin=c]{90}{Office 4} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/model_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/tof_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/psmnet_stereo_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/tsdf_middle_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/fused.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/weighting.jpg} & \\
& Model & ToF~\cite{handa2014benchmark} & PSMNet~\cite{chang2018pyramid} & TSDF Fusion~\cite{curless1996volumetric} & SenFuNet{} (Ours) & Our sensor weight & \\
\end{tabular}
}
\caption{\textbf{Replica Dataset.} Our method fuses the sensors consistently better than the baseline methods.
Concretely, our method learns to detect and remove outliers much more effectively (best viewed on screen). \textbf{Top row:} ToF\bplus{}PSMNet Fusion without denoising.
See also Tab.~\ref{tab:tof_psmnet_no_denoising}.
\textbf{Bottom row:} ToF\bplus{}PSMNet Fusion with denoising. See also Tab.~\ref{tab:tof_psmnet_denoising}.}
\label{fig:tof_psmnet}
\end{figure*}
\else
\begin{figure*}[t]
\centering
{\footnotesize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\newcommand{0.4}{0.153}
\begin{tabular}{ccccccccc}
\rotatebox[origin=c]{90}{Office 4} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/model_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/tof_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/psmnet_stereo_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/tsdf_middle_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/fused.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet_wo_routing/office_4/weighting.jpg} & \multirow{12}{*}[29.0pt]{\includegraphics[width=.0372\linewidth]{figures/colorbars/office_4_colorbar.png}} \\
\rotatebox[origin=c]{90}{Office 4} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/model_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/tof_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/psmnet_stereo_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/tsdf_middle_small.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/fused.jpg} &
\includegraphics[align=c, width=0.4\linewidth]{figures/replica/tof_psmnet/office_4/weighting.jpg} & \\
& Model & ToF~\cite{handa2014benchmark} & PSMNet~\cite{chang2018pyramid} & TSDF Fusion~\cite{curless1996volumetric} & SenFuNet{} (Ours) & Our sensor weighting & \\
\end{tabular}
}
\caption{\textbf{Replica Dataset. Top row: ToF\bplus{}PSMNet Fusion without denoising.} Our method fuses the sensors consistently better than the best baseline method.
In particular, our method learns to detect and remove outliers much more effectively.
See also Tab.~\ref{tab:tof_psmnet_no_denoising}.
\textbf{Bottom row: ToF\bplus{}PSMNet Fusion with denoising.} Our method fuses the sensors consistently better than TSDF Fusion.
In particular, our method learns to detect and remove outliers much more effectively (best viewed on screen). See also Tab.~\ref{tab:tof_psmnet_denoising}.}
\label{fig:tof_psmnet}
\end{figure*}
\fi
\boldparagraph{ToF\bplus{}PSMNet Fusion.} We simulate a ToF sensor by adding realistic noise to the ground truth depth maps\footnote{\label{footnote:tof}\normalfont\url{http://redwood-data.org/indoor/dataset.html}}~\cite{handa2014benchmark}.
To balance the two sensors, we increase the noise level by a factor of $5$ compared to the original implementation.
We simulate another depth sensor from the RGB stereo pair using PSMNet~\cite{chang2018pyramid}.
We train the network on the Replica train set and keep it fixed while training our pipeline.
Tab.~\ref{tab:tof_psmnet_no_denoising} shows that our method outperforms TSDF Fusion, RoutedFusion and DI-Fusion on all metrics except Recall, improving the F-score by at least 13$\%$.
Additionally, the F-score improves by at least 18$\%$ compared to the input sensors.
Specifically, note the absence of outliers (colored yellow) in Fig.~\ref{fig:tof_psmnet} \emph{Top row} when comparing our method to TSDF Fusion.
Also note the sensor weighting: \emph{e.g.}~the ToF depth exhibits heavy noise on the right wall of the scene, and thus our method puts more weight on the stereo sensor in this region.
Weder~\emph{et al.}~\cite{Weder2020RoutedFusionLR} showed that a 2D denoising network (called routing network in the paper) that preprocesses the depth maps can improve performance when noise is present in planar regions.
To this end, we train our own denoising network on the Replica train set and then train a new fusion model that applies this fixed denoising network to the input depth.
According to Tab.~\ref{tab:tof_psmnet_denoising}, this yields a 10$\%$ F-score gain for the fused model compared to not using a denoising network, see also Fig.~\ref{fig:tof_psmnet} \emph{Bottom row}.
Early Fusion is a strong alternative to our method when the sensors are synchronized.
We want to highlight, however, that the resource overhead of our method is worthwhile since we outperform Early Fusion even in the synchronized setting.
\ifeccv
\else
\begin{table}[tb]
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28.3mm]{Model}{Metric}}& \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
PSMNet~\cite{chang2018pyramid} & 6.35 & 1.77 & 0.673 & 84.54 & 60.28 & 48.26 & 80.41 \\
ToF~\cite{handa2014benchmark} & 5.08 & 1.58 & 0.709 & 87.32 & 68.93 & 59.01 & 83.08 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 6.40 & 1.80 & 0.681 & 85.31 & 52.93 & 38.95 & 84.60 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 6.04 & 1.68 & 0.644 & 85.10 & 62.67 & 51.75 & 79.52 \\
Early Fusion & 6.40 & 1.40 & 0.760 & 89.02 & 74.60 & 67.46 & 83.47 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 55.66 & 41.49 & \textbf{85.33} \\
\textbf{SenFuNet{} (Ours)} & \textbf{3.49} & \textbf{1.31} & \textbf{0.761} & \textbf{89.61} & \textbf{76.47} & \textbf{73.58} & 79.77 \\
\end{tabular}
}
\caption{\textbf{Replica Dataset. ToF\bplus{}PSMNet Fusion with denoising.} The denoising network mitigates outliers along planar regions, compare to Tab.~\ref{tab:tof_psmnet_no_denoising}. Our method even outperforms the Early Fusion baseline, which assumes synchronized sensors.}
\label{tab:tof_psmnet_denoising}
\end{table}
\fi
\ifeccv
\begin{wraptable}[8]{R}{0.5\linewidth}
\vspace{-\intextsep}
\centering
\renewcommand{\arraystretch}{1.05}
\resizebox{\linewidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$\%$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
Early Fusion & 7.66 & 1.99 & 0.642 & 84.65 & 61.34 & 48.47 & \textbf{83.63} \\
SenFuNet (Ours) & 4.21 & 1.45 & 0.755 & 88.26 & 73.04 & 69.13 & 78.43 \\
SenFuNet (Ours)* & \textbf{3.15} & \textbf{1.23} & \textbf{0.760} & \textbf{89.52} & \textbf{79.26} & \textbf{79.91} & 78.79 \\
\end{tabular}
}
\caption{\textbf{Time Asynchronous Evaluation.} SenFuNet outperforms Early Fusion for sensors with different sampling frequencies. *With depth denoising.}
\label{tab:synchronization}
\end{wraptable}
\boldparagraph{Time Asynchronous Evaluation.}
RGB cameras often have higher frame rates than ToF sensors which makes Early Fusion more challenging as one sensor might lack new data.
We simulate this setting by giving the PSMNet sensor twice the sampling rate of the ToF sensor, \emph{i.e.}~we drop every second ToF frame.
To provide a corresponding ToF frame for Early Fusion, we reproject the latest observed ToF frame into the current view of the PSMNet sensor.
As demonstrated in Tab.~\ref{tab:synchronization} the gap between our SenFuNet late fusion approach and Early Fusion becomes even larger (cf. Tab.~\ref{tab:tof_psmnet} (b)).
\fi
\ifeccv
\begin{table}[tb]
\setlength{\belowcaptionskip}{0pt}
\begin{subtable}{.495\columnwidth}
\centering
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28.3mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
PSMNet~\cite{chang2018pyramid} & 7.30 & 1.95 & 0.664 & 83.23 & 56.20 & 43.10 & 81.34 \\
SGM~\cite{hirschmuller2007stereo} & 8.90 & 2.17 & 0.610 & 81.48 & 57.71 & 44.40 & 84.08 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 9.17 & 2.24 & 0.634 & 82.62 & 47.75 & 33.39 & 85.10 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 7.11 & 1.82 & 0.671 & 84.63 & 60.31 & 48.47 & 80.21 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 47.29 & 32.92 & \textbf{85.14} \\
SenFuNet{} (Ours) & \textbf{4.77} & \textbf{1.56}& \textbf{0.738} & \textbf{87.62} & \textbf{69.83} & \textbf{63.20} & 79.12 \\
\end{tabular}
}
\subcaption{Without depth denoising}
\label{tab:sgm_psmnet_no_denoising}
\end{subtable}
\begin{subtable}{0.495\columnwidth}
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28.3mm]{Model}{Metric}}& \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
PSMNet~\cite{chang2018pyramid} & 6.35 & 1.77 & 0.673 & 84.54 & 60.28 & 48.26 & 80.41 \\
SGM~\cite{hirschmuller2007stereo} & 6.60 & 1.80 & 0.659 & 84.29 & 60.79 & 49.78 & 78.13 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 7.28 & 1.93 & 0.669 & 84.74 & 54.09 & 40.45 & 81.65 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 8.09 & 2.05 & 0.580 & 80.18 & 59.88 & 47.13 & 82.12 \\
Early Fusion & 4.99 & 1.51 & 0.707 & 86.99 & 69.40 & 61.07 & 80.36 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 52.65 & 38.50 & \textbf{83.62} \\
SenFuNet{} (Ours) & \textbf{4.04} & \textbf{1.41} & \textbf{0.737} & \textbf{88.11} & \textbf{71.18} & \textbf{66.81} & 76.27 \\
\end{tabular}
}
\subcaption{With depth denoising}
\label{tab:sgm_psmnet_denoising}
\end{subtable}
\caption{\textbf{Replica Dataset. SGM\bplus{}PSMNet Fusion.} Our method does not assume a particular sensor pairing and works well for all tested sensors. The gain from the denoising network is marginal with a 2$\%$ F-score improvement since there are few outliers on planar regions of the stereo depth maps and the denoising network over-smooths the depth along discontinuities.
Note that our method outperforms Early Fusion (which generally assumes synchronized sensors) on most metrics even without depth denoising.
}
\label{tab:sgm_psmnet}
\end{table}
\fi
\boldparagraph{SGM\bplus{}PSMNet Fusion.} Our method does not assume a particular sensor pairing.
We show this by replacing the ToF sensor with a second stereo sensor computed using semi-global matching (SGM)~\cite{hirschmuller2007stereo}.
In Tab.~\ref{tab:sgm_psmnet}, we show state-of-the-art sensor fusion performance both with and without a denoising network.
The denoising network tends to over-smooth depth discontinuities, which negatively affects performance when few outliers exist. Additionally, even without using a denoising network, we outperform Early Fusion on most metrics. TSDF Fusion, RoutedFusion and DI-Fusion aggregate outliers across the sensors, leading to worse performance than the single-sensor results.
\ifeccv
\else
\begin{table}[tb]
\centering
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ &\cellcolor{gray} F$\uparrow$ &\cellcolor{gray} P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[45mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] &\cellcolor{gray} $[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
PSMNet~\cite{chang2018pyramid} w.\phantom{o} denoising & 6.35 & 1.77 & 0.673 & 84.54 & 60.28 & 48.26 & 80.41 \\
PSMNet~\cite{chang2018pyramid} w/o denoising & 7.30 & 1.95 & 0.664 & 83.23 & 56.20 & 43.10 & 81.34 \\
\hline
SGM~\cite{hirschmuller2007stereo} w.\phantom{o} denoising & 6.60 & 1.80 & 0.659 & 84.29 & 60.79 & 49.78 & 78.13 \\
SGM~\cite{hirschmuller2007stereo} w/o denoising & 8.90 & 2.17 & 0.610 & 81.48 & 57.71 & 44.40 & 84.08 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} w.\phantom{o} denoising & 7.28 & 1.93 & 0.669 & 84.74 & 54.09 & 40.45 & 81.65 \\
TSDF Fusion~\cite{curless1996volumetric} w/o denoising & 9.17 & 2.24 & 0.634 & 82.62 & 47.75 & 33.39 & 85.10 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} w.\phantom{o} denoising & 8.09 & 2.05 & 0.580 & 80.18 & 59.88 & 47.13 & 82.12 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} w/o denoising & 7.11 & 1.82 & 0.671 & 84.63 & 60.31 & 48.47 & 80.21 \\
Early Fusion & 4.99 & 1.51 & 0.707 & 86.99 & 69.40 & 61.07 & 80.36 \\
DI-Fusion~\cite{huang2021di} w.\phantom{o} denoising $\sigma$=0.15 & - & - & - & - & 52.65 & 38.50 & 83.62 \\
DI-Fusion~\cite{huang2021di} w/o denoising $\sigma$=0.15 & - & - & - & - & 47.29 & 32.92 & \textbf{85.14} \\
\hline
\textbf{SenFuNet{} (Ours) w.\phantom{o} denoising} & \textbf{4.04} & \textbf{1.41} & \textbf{0.737} & \textbf{88.11} & \textbf{71.18} & \textbf{66.81} & 76.27 \\
\textcolor{Cerulean}{SenFuNet{} (Ours) w/o denoising} & \textcolor{Cerulean}{4.77} & \textcolor{Cerulean}{1.56} & \textcolor{Cerulean}{0.738} & \textcolor{Cerulean}{87.62} & \textcolor{Cerulean}{69.83} & \textcolor{Cerulean}{63.20} & \textcolor{Cerulean}{79.12} \\
\end{tabular}
}
\caption{\textbf{Replica Dataset. SGM\bplus{}PSMNet Fusion.} Our method does not assume a particular sensor pairing and works well for all tested sensors.
The gain from the denoising network is marginal with a 2$\%$ F-score improvement since there are few outliers on planar regions of the stereo depth maps and the denoising network over-smooths the depth along discontinuities.
Note that our method outperforms Early Fusion, which assumes synchronized sensors, on most metrics even without denoising.}
\label{tab:sgm_psmnet}
\end{table}
\fi
\ifeccv
\else
\begin{figure*}[t]
\centering
\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccccc}
& Frame & ToF & MVS\cite{schonberger2016pixelwise} & TSDF Fusion\cite{curless1996volumetric} & SenFuNet (Ours) & Our sensor weighting & &\\
\rotatebox[origin=c]{90}{Copy room} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/model_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/tof_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/mvs_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/tsdf_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/fused_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/weighting_small.jpg} & \multirow{11}{*}[34pt]{\includegraphics[width=.038\linewidth]{figures/colorbars/multi_agent_colorbar.png}} \\
%
\rotatebox[origin=c]{90}{Copy room} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/model_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true, align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/tof_1_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/tof_2_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/tsdf_small.jpg}&
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/fused_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/weighting_small.jpg} & \\
& Frame & ToF 1 & ToF 2 & TSDF Fusion\cite{curless1996volumetric} & SenFuNet (Ours) & Our sensor weighting & &\\
\end{tabular}
\caption{\textbf{Scene3D Dataset. Top row: ToF\bplus{}MVS Fusion.} Our method effectively fuses the ToF and MVS sensors.
Note specifically the absence of yellow outliers in the corner of the bookshelf.
See also Tab.~\ref{tab:scene3d_tof_mvs}.
\textbf{Bottom row: Multi-Agent ToF Reconstruction.} Our method is flexible and can perform Multi-Agent reconstruction.
Note that our model learns where to trust each agent at different spatial locations for maximum completeness while also being noise aware. See for instance the left bottom corner of the bookshelf where both agents integrate, but the noise-free agent is given a higher weighting (best viewed on screen).
See also Tab.~\ref{tab:scene3d_collab}.}
\label{fig:scene3d}
\end{figure*}
\fi
\ifeccv
\else
\begin{figure}[tb]
\centering
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{6pt}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28mm]{Model}{Metric}}& \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
MVS~\cite{schonberger2016pixelwise} & 15.39 & 3.25 & 0.263 & 74.84 & 17.73 & 9.77 & 95.96 \\
ToF & 7.11 & \textbf{1.52} & 0.486 & 83.13 & 72.98 & 57.85 & 98.82 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 15.09 & 3.15 & 0.290 & 75.96 & 18.13 & 9.98 & \textbf{99.19} \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 23.09 & 4.40 & 0.222 & 49.42 & 2.36 & 1.27 & 16.43 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 28.19 & 16.71 & 90.15 \\
\textbf{SenFuNet{} (Ours)} & \textbf{6.53} & 1.55 & \textbf{0.510} & \textbf{84.88} & \textbf{74.56} & \textbf{59.74} & 99.16 \\
\end{tabular}
}
{\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\newcommand{0.4}{0.13}
\begin{tabular}{cccccccc}
\rotatebox[origin=c]{90}{Human} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true, align=c, width=0.115\linewidth]{figures/corbs/model_cropped.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/tof_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/mvs_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/tsdf_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/fused_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/weighting_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/colorbars/corbs_colorbar.png}\\
& Frame & ToF & MVS~\cite{schonberger2016pixelwise} & TSDF & SenFuNet{} & Our sensor & \\
& & & & Fusion~\cite{curless1996volumetric} & (Ours) & weighting & \\
\end{tabular}
}
\caption{\textbf{CoRBS Dataset. ToF\bplus{}MVS Fusion.} Our model can find synergies between very imbalanced sensors. The numerical results show that our fused model is better than the individual depth sensor inputs and significantly better than any of the baseline methods.
Contrary to our method, the baseline methods cannot handle the high degree of outliers produced by the MVS sensor.}
\label{fig:corbs}
\end{figure}
\fi
\subsection{Experiments on the CoRBS Dataset}
The real-world CoRBS dataset~\cite{wasenmuller2016corbs} provides a selection of reconstructed objects with very accurate ground truth 3D and camera trajectories along with a consumer-grade RGBD camera.
We apply our method to the dataset by training a model on the desk scene and testing it on the human scene.
The procedure to create the ground truth signed distance grids is identical to the Replica dataset.
We create an additional depth sensor alongside the ToF depth using multi-view stereo (MVS) with COLMAP~\cite{schonberger2016pixelwise}\footnote{Unfortunately, no suitable public real 3D dataset exists, which comprises binocular stereo pairs, and an active depth sensor, as well as ground truth geometry.}.
Fig.~\ref{fig:corbs} shows that our model can fuse very imbalanced sensors while the baseline methods fail severely.
Even if one sensor (MVS) is significantly worse, our method still improves on most metrics. This confirms that our method learns to meaningfully fuse the sensors even if one sensor adds very little.
\ifeccv
\begin{figure}[b]
\setlength{\belowcaptionskip}{0pt}
\centering
\begin{subfigure}{0.44\columnwidth}
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{6pt}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[28mm]{Model}{Metric}}& \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
MVS~\cite{schonberger2016pixelwise} & 15.39 & 3.25 & 0.263 & 74.84 & 17.73 & 9.77 & 95.96 \\
ToF & 7.11 & \textbf{1.52} & 0.486 & 83.13 & 72.98 & 57.85 & 98.82 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 15.09 & 3.15 & 0.290 & 75.96 & 18.13 & 9.98 & \textbf{99.19} \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 23.09 & 4.40 & 0.222 & 49.42 & 2.36 & 1.27 & 16.43 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & - & - & - & - & 28.19 & 16.71 & 90.15 \\
\textbf{SenFuNet{} (Ours)} & \textbf{6.53} & 1.55 & \textbf{0.510} & \textbf{84.88} & \textbf{74.56} & \textbf{59.74} & 99.16 \\
\end{tabular}
}
\subcaption{}
\label{fig:corbs_metric}
\end{subfigure}
\begin{subfigure}{0.55\columnwidth}
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\newcommand{0.4}{0.4}
{\Large
\begin{tabular}{cccccccc}
\rotatebox[origin=c]{90}{Human} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true, align=c, width=0.35\linewidth]{figures/corbs/model_cropped.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/tof_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/mvs_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/tsdf_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/fused_small.jpg} &
\includegraphics[trim={2.5cm 0.5cm 2.5cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/corbs/weighting_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true, align=c, width=0.4\linewidth]{figures/colorbars/corbs_colorbar.png}\\
& Frame & ToF & MVS~\cite{schonberger2016pixelwise} & TSDF & SenFuNet{} & Our sensor & \\
& & & & Fusion~\cite{curless1996volumetric} & (Ours) & weighting & \\
\end{tabular}
}
}
\subcaption{}
\label{fig:corbs_visual}
\end{subfigure}
\caption{\textbf{CoRBS Dataset. ToF\bplus{}MVS Fusion.} Our model can find synergies between very imbalanced sensors. (a) The numerical results show that our fused model is better than the individual depth sensor inputs and significantly better than any of the baseline methods.
(b) Contrary to our method, the baseline methods cannot handle the high degree of outliers
from the MVS sensor.}
\label{fig:corbs}
\end{figure}
\fi
\ifeccv
\begin{figure*}[ht]
\centering
\tiny
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccccc}
& Frame & ToF & MVS\cite{schonberger2016pixelwise} & TSDF Fusion\cite{curless1996volumetric} & SenFuNet (Ours) & Our sensor weight & &\\
\rotatebox[origin=c]{90}{Copy room} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/model_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/tof_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/mvs_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/tsdf_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/fused_small.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_mvs_tof/weighting_small.jpg} & \multirow{11}{*}[25pt]{\includegraphics[width=.038\linewidth]{figures/colorbars/multi_agent_colorbar.png}} \\
%
\rotatebox[origin=c]{90}{Copy room} &
\includegraphics[align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/model_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true, align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/tof_1_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/tof_2_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/tsdf_small.jpg}&
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/fused_small.jpg} &
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip=true,align=c, width=.153\linewidth]{figures/scene3d/copyroom_collab/weighting_small.jpg} & \\
& Frame & ToF 1 & ToF 2 & TSDF Fusion\cite{curless1996volumetric} & SenFuNet (Ours) & Our sensor weight & &\\
\end{tabular}
\caption{ \textbf{Top row:} Our method effectively fuses the ToF and MVS sensors.
Note specifically the absence of yellow outliers in the corner of the bookshelf.
See also Tab.~\ref{tab:scene3d_tof_mvs}.
\textbf{Bottom row:} Multi-Agent ToF Reconstruction. Our method is flexible and can perform Multi-Agent reconstruction.
Note that our model learns where to trust each agent at different spatial locations for maximum completeness, while also being noise aware. See for instance the left bottom corner of the bookshelf where both agents integrate, but the noise-free agent is given a higher weighting. The above scene is taken from the Scene3D Dataset \cite{zhou2013dense} (best viewed on screen).
See also Tab.~\ref{tab:scene3d_collab}.}
\label{fig:scene3d}
\end{figure*}
\fi
\ifeccv
\begin{wraptable}[12]{R}{0.60\linewidth}
\vspace{-\intextsep}
\setlength{\belowcaptionskip}{0pt}
\begin{subtable}{.49\linewidth}
\centering
\resizebox{\linewidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lll}
\cellcolor{gray}Model &\cellcolor{gray} F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ \\ \hline
\multicolumn{4}{c}{\emph{Single Sensor}} \\ \hline
MVS~\cite{schonberger2016pixelwise} & 44.26 & 32.08 & 71.33 \\
ToF & 90.73 & 85.53 & 96.61 \\
\hline
\multicolumn{4}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 54.77 & 38.11 & 97.29 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 53.98 & 37.73 & 94.86 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 48.84 & 32.50 & \textbf{98.24} \\
\textbf{SenFuNet{} (Ours)} & \textbf{93.39} & \textbf{91.28} & 95.60 \\[-10pt]
\end{tabular}
}
\subcaption{}
\label{tab:scene3d_tof_mvs}
\end{subtable}
\begin{subtable}{0.49\linewidth}
\resizebox{\linewidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ \\ \hline
\multicolumn{4}{c}{\emph{Single Sensor}} \\ \hline
ToF 1 & 84.68 & 89.92 & 80.01\\
ToF 2 & 77.77 & 79.65 & 75.97 \\
\hline
\multicolumn{4}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 90.73 & 85.53 & 96.61 \\
RoutedFusion\cite{Weder2020RoutedFusionLR} & 84.11 & 74.16 & 97.14 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 86.31 & 77.27 & \textbf{97.74} \\
\textbf{SenFuNet{} (Ours)} & \textbf{93.73} & \textbf{91.56} & 96.00 \\[-10pt]
\end{tabular}
}
\subcaption{}
\label{tab:scene3d_collab}
\end{subtable}
\caption{\textbf{Scene3D Dataset.} (a) \textbf{ToF\bplus{}MVS Fusion.} Our method outperforms the baselines on real-world data on a room-sized scene.
(b) \textbf{Multi-Agent ToF Reconstruction.} Our method is flexible and can perform collaborative sensor fusion when multiple sensors with different camera trajectories are provided.}
\label{tab:scene3d}
\end{wraptable}
\else
\begin{table}
\setlength{\belowcaptionskip}{0pt}
\begin{subtable}{.495\columnwidth}
\centering
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lll}
\cellcolor{gray}Model &\cellcolor{gray} F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ \\ \hline
\multicolumn{4}{c}{\emph{Single Sensor}} \\ \hline
MVS~\cite{schonberger2016pixelwise} & 44.26 & 32.08 & 71.33 \\
ToF & 90.73 & 85.53 & 96.61 \\
\hline
\multicolumn{4}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 54.77 & 38.11 & 97.29 \\
RoutedFusion~\cite{Weder2020RoutedFusionLR} & 53.98 & 37.73 & 94.86 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 48.84 & 32.50 & \textbf{98.24} \\
\textbf{SenFuNet{} (Ours)} & \textbf{93.39} & \textbf{91.28} & 95.60 \\[-10pt]
\end{tabular}
}
\subcaption{}
\label{tab:scene3d_tof_mvs}
\end{subtable}
\begin{subtable}{0.495\columnwidth}
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ \\ \hline
\multicolumn{4}{c}{\emph{Single Sensor}} \\ \hline
ToF 1 & 84.68 & 89.92 & 80.01\\
ToF 2 & 77.77 & 79.65 & 75.97 \\
\hline
\multicolumn{4}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
TSDF Fusion~\cite{curless1996volumetric} & 90.73 & 85.53 & 96.61 \\
RoutedFusion\cite{Weder2020RoutedFusionLR} & 84.11 & 74.16 & 97.14 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 86.31 & 77.27 & \textbf{97.74} \\
\textbf{SenFuNet{} (Ours)} & \textbf{93.73} & \textbf{91.56} & 96.00 \\[-10pt]
\end{tabular}
}
\subcaption{}
\label{tab:scene3d_collab}
\end{subtable}
\caption{\textbf{Scene3D Dataset.} \textbf{(a) ToF\bplus{}MVS Fusion.} Our method outperforms the baselines on real-world data on a room-sized scene.
\textbf{(b) Multi-Agent ToF Reconstruction.} Our method is flexible and can perform collaborative sensor fusion when multiple sensors with different camera trajectories are available.}
\label{tab:scene3d}
\end{table}
\fi
\subsection{Experiments on the Scene3D Dataset}
We demonstrate that our framework can fuse imbalanced sensors on room-sized real-world scenes using the RGBD Scene3D dataset~\cite{zhou2013dense}. The Scene3D dataset comprises a collection of 3D models of indoor and outdoor scenes.
We train our model on the stonewall scene and test it on the copy room scene. To create the ground truth training grid, we follow the steps outlined previously except that it was not necessary to make the mesh watertight.
We fuse every 10th frame during training and testing. As in the CoRBS study, we create an MVS depth sensor using COLMAP and perform ToF and MVS fusion.
We only integrate MVS depth in the interval $[0.5, 3.0]$ m.
Tab.~\ref{tab:scene3d_tof_mvs} along with Fig.~\ref{fig:scene3d} \emph{Top row} shows that our method yields a fused result better than the individual input sensors and the baseline methods. Further, Fig.~\ref{fig:teaser} shows our method in comparison with TSDF Fusion~\cite{curless1996volumetric} and RoutedFusion~\cite{Weder2020RoutedFusionLR} on the lounge scene.
\boldparagraph{Multi-Agent ToF Fusion.} Our method is not limited to multi-sensor fusion, but is more flexible.
We demonstrate this by formulating a Multi-Agent reconstruction problem, which assumes that two identical ToF sensors with different camera trajectories are provided.
The task is to fuse the reconstructions from the two agents.
This requires an understanding of when to perform sensor selection for increased completeness and when to fuse smoothly where both sensors have registered observations.
Note that this formulation is different from typical works on collaborative 3D reconstruction, \emph{e.g.}~\cite{golodetz2018collaborative}, where the goal is to align 3D reconstruction fragments to produce a complete model.
In our Multi-Agent setting, the alignment is given and the task is instead to perform data fusion on the 3D fragments. No modification of our method is required to perform this task.
We set $\lambda_1 = 1/1200$ and $\lambda_2 = 1/12000$ and split the original trajectory into 100 frame chunks that are divided between the agents.
Tab.~\ref{tab:scene3d_collab} and Fig.~\ref{fig:scene3d} \emph{Bottom row} show that our method effectively fuses the incoming data streams and yields a 3 percentage point F-score gain over the TSDF Fusion baseline.
\subsection{More Statistical Analysis}
\boldparagraph{Performance over Camera Trajectory.} To show that our fused output is not only better at the end of the fusion process, we visualize the quantitative performance across the accumulated trajectory. In Fig.~\ref{fig:plot_trajectory_fusion}, we evaluate the office 0 scene on the sensors $\{$ToF, PSMNet$\}$ with depth denoising. Our fused model consistently improves on the inputs.
\ifeccv
\begin{figure}[bt]
\setlength{\belowcaptionskip}{0pt}
\begin{subfigure}{.50\linewidth}
\centering
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cc}
\includegraphics[align=c, width=.49\linewidth]{figures/replica/plot_metric_over_trajectory/MAD_plot.pdf} & \includegraphics[align=c, width=.49\linewidth]{figures/replica/plot_metric_over_trajectory/IoU_plot.pdf} \\
\end{tabular}
\subcaption{}
\label{fig:plot_trajectory_fusion}
\end{subfigure}
\begin{subfigure}{0.50\linewidth}
\tiny
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccc}
& \hspace{0.2cm} Without Outlier Filter & \hspace{0.2cm} With Outlier Filter &\\
\raisebox{0.2cm}[0pt][0pt]{\rotatebox[origin=c]{90}{\makecell{Feature Space}}} & \includegraphics[align=c, width=.42\linewidth]{figures/replica/feature_space/hotel_0_tsne_visualization_fused_color_error.jpg} & \includegraphics[align=c, width=.42\linewidth]{figures/replica/feature_space/hotel_0_tsne_visualization_fused_color_error_remove_outliers.jpg} & \raisebox{0.2cm}[0pt][0pt]{\includegraphics[align=c,width=.075\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
\end{tabular}
\subcaption{}
\label{fig:outlier_filter}
\end{subfigure}
\caption{(a) \textbf{Performance over Camera Trajectory.} Our fused output
outperforms the single sensor reconstructions ($V_{t}^{i}$) for all frames along the camera trajectory. The above results are evaluated on the Replica \emph{office 0} scene using $\{$ToF, PSMNet$\}$ with depth denoising. Note that the results get slightly worse after 300 frames. This is due to additional noise from the depth sensors when viewing the scene from further away.
(b) \textbf{Effect of Learned Outlier Filter.} The learned filter is crucial for robust outlier handling. Erroneous outlier predictions, shown in yellow, are effectively removed by our approach, while the correct green-colored predictions are kept.
}
\end{figure}
\else
\begin{figure}[bt]
\centering
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cc}
\includegraphics[align=c, width=.50\linewidth]{figures/replica/plot_metric_over_trajectory/MAD_plot.pdf} & \includegraphics[align=c, width=.50\linewidth]{figures/replica/plot_metric_over_trajectory/IoU_plot.pdf} \\
\end{tabular}
\caption{\textbf{Performance over Camera Trajectory.} The fused output of our method outperforms the single sensor reconstructions for all frames along the trajectory.
The experiment is done on the Replica \emph{office 0} scene with the sensors $\{$ToF, PSMNet$\}$ with denoising. Note that the metrics worsen slightly after $\sim$300 frames; this is trajectory dependent, as the scene is viewed from further away, which increases the depth sensor noise.}
\label{fig:plot_trajectory_fusion}
\end{figure}
\fi
\ifeccv
\begin{wraptable}[6]{R}{0.5\linewidth}
\vspace{-\intextsep}
\centering
\resizebox{\linewidth}{!}
{
\footnotesize
\setlength{\tabcolsep}{9.6pt}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|lllll}
\cellcolor{gray} \# Layers &\cellcolor{gray} 0 &\cellcolor{gray} 1 &\cellcolor{gray} 2 &\cellcolor{gray} 3 &\cellcolor{gray} 4 \\
\hline
F1 & 48.57 & 68.47 & \textbf{69.86} & 69.45 & 68.21\\
\end{tabular}
}
\caption{\boldparagraph{Architecture Ablation.} We vary the number of 3D convolutional layers with kernel size 3.
Performance is optimal at 2 layers, equivalent to a receptive field of $9 \!\times\! 9 \!\times\! 9$.}
\label{tab:ablation_layers_weighting_net}
\end{wraptable}
\fi
\boldparagraph{Architecture Ablation.} We perform a network ablation on the Replica dataset with the sensors $\{$SGM, PSMNet$\}$ without depth denoising.
In Tab.~\ref{tab:ablation_layers_weighting_net}, we investigate the number of layers with kernel size 3 in the Weighting Network $\mathcal{G}$.
Two layers yield optimal performance which amounts to a receptive field of $9 \!\times\! 9 \!\times\! 9$.
This is reasonable given that the support for a specific sensor is local by nature.
\ifeccv
\else
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{9.6pt}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|lllll}
\cellcolor{gray} \# Layers &\cellcolor{gray} 0 &\cellcolor{gray} 1 &\cellcolor{gray} 2 &\cellcolor{gray} 3 &\cellcolor{gray} 4 \\
\hline
F1 & 48.57 & 68.47 & \textbf{69.86} & 69.45 & 68.21\\
\end{tabular}
\caption{\boldparagraph{Architecture Ablation.} We vary the number of 3D convolutional layers with kernel size 3.
Performance is optimal at 2 layers, equivalent to a receptive field of $9 \!\times\! 9 \!\times\! 9$.}
\label{tab:ablation_layers_weighting_net}
\end{table}
\fi
\boldparagraph{Generalization Capability.}
Tab.~\ref{tab:overfitting} shows our model's generalization ability for $\{$SGM, PSMNet$\}$ fusion when evaluated against a model trained and tested on the \emph{office 0} scene.
Our model generalizes well and performs almost on par with one which is only trained on the \emph{office 0} scene.
The generalization capability is not surprising since $\mathcal{G}$ has a limited receptive field of $9 \!\times\! 9 \!\times\! 9$.
\ifeccv
\begin{wraptable}[8]{R}{0.5\linewidth}
\centering
\renewcommand{\arraystretch}{1.05}
\resizebox{\linewidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray}Train Set & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\cellcolor{gray}& \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$\%$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
Full Train Set & \textbf{4.51} & 1.54 & 0.748 & 87.78 & 68.70 & \textbf{59.49} & 81.28 \\
Office 0 & 4.54 & \textbf{1.53} & \textbf{0.752} & \textbf{87.97} & \textbf{69.23} & 59.45 & \textbf{82.86} \\
\end{tabular}
}
\caption{\textbf{Generalization Capability.} Our model generalizes well when evaluated on the \emph{office 0} scene. We verify this by comparing against an additional model that is trained and validated on \emph{office 0}.}
\label{tab:overfitting}
\end{wraptable}
\else
\begin{table}
\centering
\renewcommand{\arraystretch}{1.05}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray}Train Set & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\cellcolor{gray}& \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$\%$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
Full Train Set & \textbf{4.51} & 1.54 & 0.748 & 87.78 & 68.70 & \textbf{59.49} & 81.28 \\
Office 0 & 4.54 & \textbf{1.53} & \textbf{0.752} & \textbf{87.97} & \textbf{69.23} & 59.45 & \textbf{82.86} \\
\end{tabular}
}
\caption{\textbf{Generalization Capability.} Our model generalizes well when evaluated on the \emph{office 0} scene. We verify this by comparing against an additional model that is trained and validated on \emph{office 0}.}
\label{tab:overfitting}
\end{table}
\fi
\boldparagraph{Effect of Learned Outlier Filter.} To show the effectiveness of the filter, we study the feature space on the input side of $\mathcal{G}$.
Specifically, we consider the \emph{hotel 0} scene and sensors $\{$ToF, PSMNet$\}$ with depth denoising.
First, we concatenate both feature grids and flatten the resulting grid.
Then, we reduce the observations of the 12-dimensional feature space to a 2-dimensional representation using tSNE~\cite{van2008visualizing}.
We then colorize each point with the corresponding signed distance error at the original voxel position.
We repeat the visualization with and without the learned Outlier Filter.
Fig.~\ref{fig:outlier_filter} shows that the filter effectively removes outliers while keeping good predictions.
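For reference, a minimal sketch of this analysis could look as follows (variable names are ours; \texttt{feat\_a} and \texttt{feat\_b} denote the two per-sensor feature grids, \texttt{err} the per-voxel signed distance error and \texttt{mask} the valid voxels):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_space(feat_a, feat_b, err, mask, max_points=20000):
    # Concatenate the per-sensor features (12 channels in total) and keep
    # only the valid voxels.
    feats = np.concatenate([feat_a, feat_b], axis=-1)[mask]
    errors = np.abs(err[mask])
    # Subsample for a tractable t-SNE runtime.
    idx = np.random.choice(len(feats), min(max_points, len(feats)),
                           replace=False)
    emb = TSNE(n_components=2, init="pca",
               random_state=0).fit_transform(feats[idx])
    plt.scatter(emb[:, 0], emb[:, 1], c=errors[idx], cmap="RdYlGn_r", s=1)
    plt.colorbar(label="signed distance error")
    plt.show()
\end{verbatim}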
\ifeccv
\else
\begin{figure}[t]
\centering
\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccc}
& \hspace{0.2cm} Without Outlier Filter & \hspace{0.2cm} With Outlier Filter &\\
\raisebox{0.2cm}[0pt][0pt]{\rotatebox[origin=c]{90}{\makecell{Feature Space}}} & \includegraphics[align=c, width=.42\linewidth]{figures/replica/feature_space/hotel_0_tsne_visualization_fused_color_error.jpg} & \includegraphics[align=c, width=.42\linewidth]{figures/replica/feature_space/hotel_0_tsne_visualization_fused_color_error_remove_outliers.jpg} & \raisebox{0.2cm}[0pt][0pt]{\includegraphics[align=c,width=.075\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
\end{tabular}
\caption{\boldparagraph{Effect of Learned Outlier Filter.} The filter is crucial for achieving robust outlier handling.
A lot of the erroneous yellow-colored predictions are removed by the filter, while keeping the correct green-colored predictions.}
\label{fig:outlier_filter}
\end{figure}
\fi
\ifeccv
\boldparagraph{Loss Ablation.} Tab.~\ref{tab:single_sensor_supervision} shows the performance difference when the model is trained only with the term $\mathcal{L}_f$ compared to the full loss \eqref{eq:loss}. We perform $\{$SGM, PSMNet$\}$ fusion on the Replica dataset. The extra terms of the full loss clearly improve overall performance and, in particular, outlier filtering.
\begin{wraptable}[10]{R}{0.5\linewidth}
\centering
\vspace{-\intextsep}
\resizebox{1.0\linewidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray}Loss $\mathcal{L}$ \cellcolor{gray}& \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\cellcolor{gray} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
Only $\mathcal{L}_f$ & 6.04 & 1.78 & 0.710 & 86.20 & 62.65 & 50.88 & \textbf{82.17} \\
Full Loss & \textbf{4.77} & \textbf{1.56} & \textbf{0.738} & \textbf{87.62} & \textbf{69.83} & \textbf{63.20} & 79.12 \\
\end{tabular}
}
\caption{\textbf{Loss Ablation.} When only the term $\mathcal{L}_f$ is used, we observe a significant performance drop compared to the full loss. Note, however, that even with only the term $\mathcal{L}_f$, our model still improves on the single sensor input metrics of Tab.~\ref{tab:sgm_psmnet_no_denoising}. This shows that all terms are helpful during training.}
\label{tab:single_sensor_supervision}
\end{wraptable}
\fi
\boldparagraph{Limitations.}
Our framework currently supports two sensors.
Its extension to a $k$-sensor setting is straightforward, but the memory footprint grows linearly with the number of sensors.
On the other hand, few devices have more than two or three depth sensors.
While our method generates better reconstructions on average, specific local regions may not improve.
Further, our method has difficulties handling overlapping outliers from both sensors.
For qualitative examples, see supplementary material.
Ideally, outliers could be filtered and the sensors fused with a learned scene representation as in~\cite{weder2021neuralfusion}, but our efforts to make~\cite{weder2021neuralfusion} work with multiple sensors suggest that this is a harder learning problem that deserves attention in future work.
\section{Conclusion}
In this work, we propose a machine learning approach for online multi-sensor dense 3D reconstruction using depth maps.
We show that a fusion decision in the 3D domain rather than directly on 2D depth maps generally improves surface accuracy and outlier robustness.
This also holds when 2D fusion is straightforward, \emph{i.e.}~for time-synchronized sensors with equal resolution and calibration.
The experiments further demonstrate that our model handles various sensors, scales to room-sized real-world scenes and consistently produces a fused result that is quantitatively and
qualitatively better than the single sensor inputs as well as the baseline methods we compare to.
\section*{A. Videos}
\label{sec:videos}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:videos}}
\fi
We provide an introductory video that presents an overview of our method as well as a summary of the most important results.
Additionally, a selection of short videos is attached showing the online reconstruction process for various sensors and scenes.
\section*{B. Method}
\label{sec:method}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:method}}
\fi
In the following, we provide more details about our proposed method, specifically the Shape Integration Module which updates the shape grid $S^i_t$.
\boldparagraph{Shape Integration Module.}
The shape integration module takes as input a depth map $D^i_t$ from sensor $i \in \{1, 2\}$ at time $t \in \mathbb{N}$, with known camera calibration $P^{c \rightarrow w}_t \in \mathbb{SE}(3)$ from camera to world space and intrinsics $K_t \in \mathbb{R}^{3\times3}$, and performs a full perspective unprojection to obtain a point cloud $\textbf{X}_w$ in world coordinates. Each 3D point $\textbf{x}_w$ $\in$ $\textbf{X}_w$ is computed by transforming each pixel $(u,\ v)$ of the depth map into camera space $\textbf{x}_c$ according to \eqref{eq:im_cam},
\begin{equation}
\textbf{x}_{c} = D_t^i(u, v)K_t^{-1}\begin{bmatrix}
u \\
v \\
1
\end{bmatrix}
\label{eq:im_cam}
\end{equation}
and then into world space according to \eqref{eq:cam_world}.
\begin{equation}
\textbf{x}_w = P^{c \rightarrow w}_t\begin{bmatrix}
\textbf{x}_c \\
1
\end{bmatrix}
\label{eq:cam_world}
\end{equation}
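As an illustration, the two transformations above can be sketched in a few lines (a hedged example, not our actual implementation; \texttt{depth}, \texttt{K} and \texttt{P\_c2w} denote the depth map, intrinsics and camera-to-world extrinsics):
\begin{verbatim}
import numpy as np

def unproject_depth(depth, K, P_c2w):
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T
    # x_c = D(u, v) * K^-1 [u, v, 1]^T
    x_c = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # x_w = P^{c->w} [x_c; 1]
    x_c_h = np.vstack([x_c, np.ones((1, x_c.shape[1]))])
    x_w = (P_c2w @ x_c_h)[:3]
    valid = depth.reshape(-1) > 0   # drop pixels without a depth estimate
    return x_w.T[valid]
\end{verbatim}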
Along each ray from the camera center, we sample $T$ points evenly over a predetermined distance $l$ centered at $\textbf{x}_w$. The distance $l$ and the number of points $T$ are selected based on the noise level of the depth sensor and can be interpreted similarly to the truncation distance in standard TSDF Fusion~\cite{curless1996volumetric}. The distance between sampled points should ideally equal the voxel side length.
After computing $T$ points along each ray, we convert the coordinates to the voxel space and extract a local shape grid $S^{i, *}_{t-1}$ from $S^i_{t-1}$ through nearest neighbor extraction.
To incrementally update $S^{i, *}_{t-1}$, we follow the moving average update scheme of TSDF Fusion,
\begin{equation}
V^{i,*}_t = \frac{W^{i,*}_{t-1}V^{i,*}_{t-1} + v^i_t}{W^{i,*}_{t-1} + 1},
\label{eq:v_update}
\end{equation}
where $v^i_t$ is the TSDF update. The local weight grid $W^{i, *}_t$ is updated as
\begin{equation}
W^{i,*}_t = \min(\omega_{\textrm{max}}, W^{i,*}_{t-1} + 1).
\label{eq:w_update}
\end{equation}
The weights are clipped at $\omega_{\textrm{max}}$ to prevent numerical instabilities.
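A per-voxel sketch of this update (with \texttt{V}, \texttt{W} the local TSDF and weight grids, \texttt{v\_new} the TSDF update from the current frame, and \texttt{w\_max} a placeholder for $\omega_{\textrm{max}}$) reads:
\begin{verbatim}
import numpy as np

def integrate(V, W, v_new, w_max=255):
    V_out = (W * V + v_new) / (W + 1)   # moving average TSDF update
    W_out = np.minimum(w_max, W + 1)    # clip the weights at omega_max
    return V_out, W_out
\end{verbatim}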
\boldparagraph{Denoising Network.}
We use the same denoising network as described by Weder~\emph{et al.}~\cite{Weder2020RoutedFusionLR} (there called routing network), except that we change the loss hyperparameter to $\lambda=0.06$.
\boldparagraph{Indicator Function.}
We define the indicator function $\1{A}(x)$ for a voxel index $x = (x_1, \ x_2, \ x_3)$, where $x_i \in \mathbb{N}$ as
\begin{equation}
\1{A}(x) = \begin{cases}
0 & \text{if}\ x \notin A\\
1 & \text{if}\ x \in A,
\end{cases}
\label{eq:indicator_function}
\end{equation}
and for two sets $A$ and $B$
\begin{equation}
\1{A, \ B}(x) = \begin{cases}
0 & \text{if}\ x \notin A\cap B\\
1 & \text{if}\ x \in A\cap B.
\end{cases}
\label{eq:indicator_function_and}
\end{equation}
In the main paper, we omit $x$ for brevity.
\section*{C. Implementation Details}
\label{sec:imp_details}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:imp_details}}
\fi
We use PyTorch 1.7.1 and Python 3.8.5 to implement the pipeline.
Training is done with the Adam optimizer using an Nvidia TITAN RTX with 24 GB of GPU memory.
We use a learning rate of $1e$-$04$ and otherwise the default Adam hyperparameters \emph{betas} $= (0.9, 0.999)$, \emph{eps} = $1e$-$08$ and \emph{weight$\_$decay} = 0.
We use a batch size of 1 due to the online nature of the pipeline, but accumulate the gradients over 20 frames before updating all network weights.
We shuffle the frames during training and, for efficiency, only integrate every 10th frame during validation. We record a runtime of $\sim$15 fps with our unoptimized implementation on the CoRBS Human scene. For our largest scenes (\emph{e.g.}~\emph{hotel 0}), the integration frame rate is between 1 and 2 fps. The bottleneck is CPU-GPU communication: our implementation loads the full voxel grid onto the GPU for fast updates and then back to the CPU to allow reconstructing multiple scenes in parallel.
We train our network until convergence, which takes between 12 and 24 hours on the Replica dataset and a few hours on the CoRBS and Scene3D datasets.
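The gradient accumulation scheme can be sketched as follows (a toy example with placeholder model and data; only the optimizer settings and the 20-frame accumulation window follow our setup):
\begin{verbatim}
import torch

model = torch.nn.Linear(8, 1)                    # placeholder network
frames = [torch.randn(8) for _ in range(100)]    # placeholder frame stream
target = torch.zeros(1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8, weight_decay=0)
accum_steps = 20                                 # frames per weight update

optimizer.zero_grad()
for step, frame in enumerate(frames):            # batch size 1
    loss = torch.nn.functional.mse_loss(model(frame), target)
    loss.backward()                              # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        optimizer.step()                         # one update per 20 frames
        optimizer.zero_grad()
\end{verbatim}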
\section*{D. Evaluation Metrics}
\label{sec:metrics}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:metrics}}
\fi
We use the following seven metrics to quantify the reconstruction performance.
\boldparagraph{Voxel Grid Metrics.} We use four metrics on the TSDF voxel grid and mask the evaluation so that only voxels with a non-zero weight $W_k$ are considered; $N$ denotes the total number of valid voxels. The Mean Absolute Distance (MAD) is the mean $L_1$ error to the ground truth signed distance grid, MAD$ = \frac{1}{N}\sum_{k=0}^N |V_k - V_k^{GT}|_1$, and the Mean Squared Error (MSE) is the corresponding squared error, MSE$ = \frac{1}{N}\sum_{k=0}^N (V_k - V_k^{GT})^2$. The Intersection over Union (IoU) and the Accuracy are computed on the occupancy grid as IoU$ = \frac{tn}{tn + fp + fn}$ and Acc$ = \frac{tn + tp}{tp + tn + fp + fn}$,
where
\begin{align}
tn &= \sum \{\sign(V) < 0 \ \text{and} \ \sign(V^{GT}) < 0\} \\
tp &= \sum \{\sign(V) \geq 0 \ \text{and} \ \sign(V^{GT}) \geq 0\} \\
fp &= \sum \{\sign(V) \geq 0 \ \text{and} \ \sign(V^{GT}) < 0\} \\
fn &= \sum \{\sign(V) < 0 \ \text{and} \ \sign(V^{GT}) \geq 0\}.
\end{align}
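A direct implementation of these voxel grid metrics (for dense numpy grids \texttt{V}, \texttt{V\_gt} and weights \texttt{W}; names are ours) could look as follows:
\begin{verbatim}
import numpy as np

def voxel_grid_metrics(V, V_gt, W):
    mask = W > 0                       # only evaluate observed voxels
    v, v_gt = V[mask], V_gt[mask]
    mad = np.mean(np.abs(v - v_gt))
    mse = np.mean((v - v_gt) ** 2)
    tn = np.sum((v < 0) & (v_gt < 0))
    tp = np.sum((v >= 0) & (v_gt >= 0))
    fp = np.sum((v >= 0) & (v_gt < 0))
    fn = np.sum((v < 0) & (v_gt >= 0))
    iou = tn / (tn + fp + fn)
    acc = (tn + tp) / (tp + tn + fp + fn)
    return mad, mse, iou, acc
\end{verbatim}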
\boldparagraph{Mesh Metrics.} We run marching cubes~\cite{lorensen1987marching} on the predicted voxel grid $V$ and the ground truth voxel grid $V^{GT}$ and compare the two meshes.
The F-score is defined as the harmonic mean between Recall (R) and Precision (P), $F = 2\frac{PR}{P+R}$.
Precision is defined as the percentage of vertices on the predicted mesh $V_m$ which lie within some distance $\tau$ from a vertex on the ground truth mesh $V_m^{GT}$.
Conversely, Recall is defined as the percentage of vertices on the ground truth mesh $V_m^{GT}$ which lie within the same distance $\tau$ of a vertex on the predicted mesh $V_m$. In all our experiments, we use a distance threshold of $\tau = 0.02$ m.
We use the provided evaluation script of the Tanks and Temples dataset~\cite{knapitsch2017tanks}, but modify it to our needs.
For a more accurate evaluation, we do not downsample or crop the meshes and we do not utilize the automatic alignment procedure since our meshes are already aligned.
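For reference, the mesh metrics can be sketched with nearest neighbor queries between the two vertex sets (a simplified example; the actual evaluation uses the modified Tanks and Temples script):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def mesh_f_score(verts_pred, verts_gt, tau=0.02):
    # Distance from every predicted vertex to the closest ground truth
    # vertex and vice versa.
    d_pred, _ = cKDTree(verts_gt).query(verts_pred)
    d_gt, _ = cKDTree(verts_pred).query(verts_gt)
    precision = 100.0 * np.mean(d_pred < tau)
    recall = 100.0 * np.mean(d_gt < tau)
    if precision + recall > 0:
        f = 2 * precision * recall / (precision + recall)
    else:
        f = 0.0
    return f, precision, recall
\end{verbatim}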
\section*{E. Replica Dataset Collection}
\label{sec:replica}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:replica}}
\fi
Due to the lack of available data for the study of multi-sensor depth fusion, we construct our own 2D dataset from the 3D Replica dataset~\cite{straub2019replica}, which comprises 18 high-quality scenes.
To compute the ground truth signed distance value at each voxel grid point, we require a well-defined normal direction.
This is not provided by the non-watertight Replica meshes.
Thus, we apply screened Poisson surface reconstruction with CloudCompare~\cite{girardeau2016cloudcompare}, with an octree depth of 12.
Otherwise, the default settings are used.
Additionally, we found that the Poisson surface reconstructions are not clean enough to produce high-quality signed distance grids.
Thus, each watertight mesh is cleaned with the Meshlab~\cite{cignoni2008meshlab} filter function \texttt{remove isolated pieces with respect to face number}. We set the face number threshold to 100.
The signed distance grids are then computed from the meshes with a modified version of the mesh-to-sdf library\footnote{{\normalfont\url{https://github.com/marian42/mesh_to_sdf}}} to accommodate non-cubic voxel grids.
Using the Habitat AI platform~\cite{habitat19iccv}, we define an agent that we manually steer through the watertight 3D scenes using the keyboard.
We sample three trajectories per scene with diverse starting points and capture the scene content to simulate a realistic capturing scenario, \emph{e.g.}~a human moving a mobile device.
For each step that the agent takes, it moves $0.05$ m along the x- or y-axis.
When the agent rotates, it rotates $2.5$ degrees.
The step sizes were chosen so that the dataset can also be used to evaluate multi-sensor tracking methods in the future.
The agent is equipped with two RGBD cameras. Both cameras are located at a fixed height of $1.5$ m above the floor and the baseline between them is $0.1$ m. We use an identical resolution of 512$\times$512 pixels for both cameras and a field of view of 90 degrees.
The full dataset comprises $92698$ frames, of which we use a subset to produce the training, validation and test sets; for example, a dense voxel grid of the \emph{apartment 0} scene did not fit on the GPU, and the variations of the \emph{frl apartment} scenes did not provide any further diversity in the data. The train set consists of the scenes $\{$\emph{apartment 1, frl apartment 0, office 1, room 2, office 3, room 0}$\}$ and contains $22358$ frames, the validation set consists of the scene $\{$\emph{frl apartment 1}$\}$ and contains $1958$ frames, and the test set consists of the scenes $\{$\emph{office 0, office 4, hotel 0}$\}$ and contains $1891$ frames. When training our model, we use all three trajectories from each train scene. During validation, only trajectory $1$ is used.
During testing, we use trajectory 3 for \emph{hotel 0} and \emph{office 4} and trajectory 1 for \emph{office 0}. As an example, in Fig.~\ref{fig:trajectories}, we visualize a top-down view of the three manually traversed trajectories for the \emph{room 0} scene. The trajectory is visualized in red and the navigable floor is white.
\begin{figure}[t]
\centering
\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccc}
\rotatebox[origin=c]{90}{\makecell{Room 0}} &
\includegraphics[align=c, width=.315\linewidth]{figures/replica/top_down/topdownview1.jpg} &
\includegraphics[align=c, width=.315\linewidth]{figures/replica/top_down/topdownview2.jpg} &
\includegraphics[align=c,width=.315\linewidth]{figures/replica/top_down/topdownview3.jpg} \\
& Trajectory 1 & Trajectory 2 & Trajectory 3 \\
\end{tabular}
\caption{\boldparagraph{Trajectory Visualization.} Top-down visualization of the three manually traversed trajectories for the \emph{room 0} scene. Navigable space is colored white.}
\label{fig:trajectories}
\end{figure}
\section*{F. Scene3D Dataset}
\label{sec:scene3d}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:scene3d}}
\fi
The Scene3D dataset comprises multiple scenes, but we only evaluate on the copy room scene. We found that the ground truth meshes of the other scenes were not complete enough, which would make the evaluation inaccurate.
\section*{G. Baselines}
\label{sec:baselines}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:baselines}}
\fi
\boldparagraph{RoutedFusion.}
We found that the original implementation of RoutedFusion generates significantly more outliers than TSDF Fusion~\cite{curless1996volumetric} when no weight thresholding is applied as post-outlier filter. This is due to the trilinear interpolation extraction step of RoutedFusion compared to the nearest neighbor extraction of TSDF Fusion. Trilinear interpolation updates eight grid points per sampled point along the ray instead of one. This exposes more outliers as marching cubes~\cite{lorensen1987marching} requires that all eight grid points have a non-zero weight for a surface to be drawn. Fig.~\ref{fig:routedfusion_masking} compares the meshes of two masks (the sets of non-zero weights) on the same RoutedFusion model. The trilinear interpolation mask is the standard output of RoutedFusion while the nearest neighbor mask is taken from running TSDF Fusion on the same scene. Given the significantly better result with the nearest neighbor mask, we report all results in the paper using the nearest neighbor mask.
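To illustrate the effect of the masking, the sketch below extracts a mesh with marching cubes using only voxels with a non-zero weight under a given mask (grid names and the voxel size are placeholders, not our exact evaluation code):
\begin{verbatim}
import numpy as np
from skimage import measure

def extract_masked_mesh(V, W_mask, voxel_size=0.01):
    mask = W_mask > 0      # e.g. the nearest neighbor weight mask
    verts, faces, normals, _ = measure.marching_cubes(V, level=0.0,
                                                      mask=mask)
    return verts * voxel_size, faces
\end{verbatim}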
\boldparagraph{Early Fusion.}
We use the denoising network described by Weder~\emph{et al.}~\cite{Weder2020RoutedFusionLR} (there called routing network) as the basis for our Early Fusion baseline. We increase the number of input channels by one so that both sensor depth maps can be fused, and we use the loss hyperparameter $\lambda=0.06$.
\boldparagraph{DI-Fusion.} For a fair comparison between all models, we make the following modifications to the original implementation of DI-Fusion~\cite{huang2021di}. 1) We turn off the camera tracker and provide the ground truth camera poses. 2) We turn off the heuristic pre-outlier filter used on the incoming depth maps. 3) We turn off the heuristic weight counter thresholding post-outlier filter. 4) We use a voxel size of 2 cm, but sample the grid at a simulated resolution of 1 cm. This is achieved by setting the \texttt{resolution} variable in the config file to 2. We note that the implementation does not support \texttt{resolution < 2}. Furthermore, the implementation does not allow for convenient access to the dense voxel grid and thus we only report the mesh metrics.
\begin{figure}[t]
\centering
\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccc}
\rotatebox[origin=c]{90}{\makecell{Lounge}} &
\includegraphics[align=c, width=.34\linewidth]{figures/scene3d/lounge/model.jpg} &
\includegraphics[align=c, width=.31\linewidth]{figures/scene3d/lounge/routedfusion_nn_mask_trilinear_train.jpg} &
\includegraphics[align=c,width=.31\linewidth]{figures/scene3d/lounge/routedfusion_tri.jpg} \\
& Frame & Nearest & Trilinear \\
& & Neighbor Mask & Interpolation Mask \\
\end{tabular}
\caption{\boldparagraph{RoutedFusion Masking.} The trilinear interpolation mask of RoutedFusion~\cite{Weder2020RoutedFusionLR} results in significantly more outliers than using the nearest neighbor mask. Trilinear interpolation updates eight grid points per sampled point along the ray instead of one. This exposes more outliers as marching cubes~\cite{lorensen1987marching} requires that all eight grid points have a non-zero weight for a surface to be drawn.}
\label{fig:routedfusion_masking}
\end{figure}
\section*{H. Depth Sensor Details}
\label{sec:sensors}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:sensors}}
\fi
\boldparagraph{Synthesized ToF Depth.}
The noise model\footnote{\label{footnote:tof_supp}\normalfont\url{http://redwood-data.org/indoor/dataset.html}}~\cite{handa2014benchmark} incorporates disparity-based quantization, high-frequency noise, and a model of low-frequency distortion estimated on a real depth camera. We increase the noise level of the depth by increasing the standard deviation of the high-frequency noise by a factor of $5$. We also multiply the standard deviation of the pixel shuffling by the same factor. Other works have previously used this model for the evaluation of dense surface reconstruction methods~\cite{choi2015robust,zhou2014simultaneous,zhou2013elastic}.
\boldparagraph{PSMNet Stereo Depth.}
We first pretrain PSMNet~\cite{chang2018pyramid} on the SceneFlow dataset according to the documentation provided by Chang \emph{et al.}.
The model is then fine-tuned on our Replica dataset.
During training we use the default parameters.
\boldparagraph{SGM Stereo Depth.}
We generate depth maps with Semi-Global Matching~\cite{hirschmuller2007stereo} implemented in OpenCV~\cite{opencv_library}.
We set the number of disparities to 64 (\texttt{numDisparities=64}) and use the full variant of the algorithm, which considers 8 directions instead of 5, by setting \texttt{mode=MODE\_HH}.
Otherwise, we use the default settings.
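A minimal example of this setup with OpenCV's \texttt{StereoSGBM} is given below; the disparity count and \texttt{MODE\_HH} follow the settings above, while the image paths, focal length and baseline are placeholder values matching our 512$\times$512, 90 degree field of view, $0.1$ m baseline rig:
\begin{verbatim}
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgm = cv2.StereoSGBM_create(numDisparities=64,
                            mode=cv2.STEREO_SGBM_MODE_HH)
disp = sgm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> px

fx, baseline = 256.0, 0.1   # fx = 256 px for a 90 deg FoV at 512 px width
depth = np.where(disp > 0, fx * baseline / disp, 0.0)
\end{verbatim}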
\boldparagraph{COLMAP MVS Depth.}
Dense depth maps are computed with known camera poses and intrinsics using the sequential matcher with a 10 image overlap.
\section*{I. More Experiments}
\label{sec:exp}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:exp}}
\fi
\ifeccv
\else
\boldparagraph{Loss Ablation.} In Tab.~\ref{tab:single_sensor_supervision}, we show the performance difference when the model is trained only with the term $\mathcal{L}_f$ compared to the full loss (\textcolor{red}{4}) in the main paper. The terms (\textcolor{red}{6}) and (\textcolor{red}{7}) in the main paper clearly improve overall performance and, in particular, outlier filtering.
\fi
\ifeccv
\else
\begin{table}
\centering
\resizebox{0.75\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray}Loss $\mathcal{L}$ \cellcolor{gray}& \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\cellcolor{gray} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
Only $\mathcal{L}_f$ & 6.04 & 1.78 & 0.710 & 86.20 & 62.65 & 50.88 & \textbf{82.17} \\
Full Loss & \textbf{4.77} & \textbf{1.56} & \textbf{0.738} & \textbf{87.62} & \textbf{69.83} & \textbf{63.20} & 79.12 \\
\end{tabular}
}
\caption{\textbf{Loss Ablation.} When only the term $\mathcal{L}_f$ is used, we observe a significant performance drop compared to the full loss. Note, however, that even with only the term $\mathcal{L}_f$, our model still improves on the single sensor input metrics of Tab.~\textcolor{red}{3} in the main paper. This shows that all terms are helpful during training.}
\label{tab:single_sensor_supervision}
\end{table}
\fi
\ifeccv
\boldparagraph{Time Asynchronous Evaluation.} RGB cameras often have higher frame rates than ToF sensors, which makes Early Fusion more challenging as one sensor might lack new data. Tab.~2 in the main paper provides performance results when the ToF sensor has half the sampling rate of the PSMNet sensor. In Tab.~\ref{tab:asynchronous}, we show that the performance gap to our method grows even further when the ToF sampling rate is decreased to a third of the PSMNet rate. The drop in performance of Early Fusion compared to our method can be attributed to the reprojection of the ToF frames, which introduces occlusions and pixel quantization errors. The performance of our method actually slightly improves on some metrics. This can be explained by the fact that the completeness of the scene is saturated and dropping ToF frames removes noise and outliers that would otherwise have been integrated. See Fig.~\ref{fig:asynch} for a visualization. We do not retrain any model for this experiment.
\begin{wraptable}[8]{R}{0.63\linewidth}
\centering
\renewcommand{\arraystretch}{1.05}
\resizebox{\linewidth}{!}
{
\begin{tabular}{c|l|lllllll}
\cellcolor{gray} Sampling & \cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\cellcolor{gray} Rate ToF & \multirow{-2}{*}{\cellcolor{gray} \backslashbox[28mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$\%$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
1/2 & Early Fusion & 7.66 & 1.99 & 0.642 & 84.65 & 61.34 & 48.47 & \textbf{83.63} \\
1/3 & Early Fusion & 8.52 & 2.15 & 0.610 & 83.60 & 55.52 & 41.63 & 83.40 \\
1/2 & SenFuNet (Ours) & 4.21 & 1.45 & \textbf{0.755} & \textbf{88.26} & 73.04 & 69.13 & 78.43 \\
1/3 & SenFuNet (Ours) & \textbf{4.00} & \textbf{1.41} & 0.751 & 88.14 & \textbf{74.93} & \textbf{73.57} & 77.05 \\
\end{tabular}
}
\caption{\textbf{Time Asynchronous Evaluation.} The gap between SenFuNet and Early Fusion increases further when the ToF camera has a sampling rate of one third of that of the PSMNet sensor.}
\label{tab:asynchronous}
\end{wraptable}
\fi
\boldparagraph{Performance over Camera Trajectory.} To show that our fused output is not only better at the end of the fusion process, we visualize the quantitative performance across the accumulated trajectory for the \emph{office 0} scene with the sensors $\{$ToF, PSMNet$\}$ with denoising in Fig.~\ref{fig:plot_trajectory_fusion_supp}. Our fused model consistently improves on the inputs. Note that only the IoU and MAD metrics are shown in the main paper.
\begin{figure}[t]
\centering
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cc}
\includegraphics[align=c, width=.5\linewidth]{figures/replica/plot_metric_over_trajectory/Accuracy_plot.pdf} & \includegraphics[align=c, width=.5\linewidth]{figures/replica/plot_metric_over_trajectory/MSE_plot.pdf} \\
\end{tabular}
\caption{\textbf{Performance over Camera Trajectory.} The fused output of our method outperforms the single-input reconstructions for all frames along the trajectory.
The experiment was done on the \emph{office 0} scene for the sensors $\{$ToF, PSMNet$\}$ with denoising. Note that the results get slightly worse after 300 frames. This is due to additional noise from the depth sensors when viewing the scene from further away.}
\label{fig:plot_trajectory_fusion_supp}
\end{figure}
\boldparagraph{Weight Ablation.}
In Tab.~\ref{tab:ablation_tanh_transform}, we evaluate our full model against two models which only use the weight ($W_t^i$) as input to the Weighting Network $\mathcal{G}$. We perform $\{$SGM, PSMNet$\}$ fusion without denoising on the Replica dataset. We observe that our full model provides a gain over the model which only uses the $\mathop{\mathrm{tanh}}$-transformed weights, which justifies our Feature Network. We also show that normalizing the weight with a $\mathop{\mathrm{tanh}}$-transformation improves performance over no normalization.
\ifeccv
\else
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{15pt}
\begin{tabular}{l|lllll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$\\
\hline
Only Weights & 66.69 \\
Only \emph{tanh}(Weights) & 68.92 \\
Full Model & \textbf{69.83} \\
\end{tabular}
\caption{\boldparagraph{Weight Counter Ablation.} Our full model outperforms models which only use the weight ($W_t^i$) as input to the Weighting Network $\mathcal{G}$. Normalization of the weight with a \emph{tanh}-transformation improves performance over no normalization.}
\label{tab:ablation_tanh_transform}
\end{table}
\fi
\boldparagraph{Architecture Ablation.}
For the architecture ablations, we perform $\{$SGM, PSMNet$\}$ fusion without denoising on the Replica dataset. For all experiments, unless otherwise specified, we use $4$ layers of 3D-convolutions with kernel size 3 in $\mathcal{G}$, $6$ network blocks in the Feature Networks $\mathcal{F}^i$, and store feature vectors of dimension $n = 5$ in the feature grids $F^i_t$.
In Tab.~\ref{tab:ablation_blocks_feature_net}, we investigate the effect of using different numbers of network blocks in the feature network.
Performance is maximized when using $5$ blocks.
In Tab.~\ref{tab:ablation_feature_dims}, we vary the number of feature dimensions that are stored in the grids. $4$ dimensions yield optimal performance.
Finally, in Tab.~\ref{tab:ablation_bypass_feature_net}, we study the importance of the Feature Network by testing it against a model which bypasses the network and hence unprojects the 2D features without any 2D processing.
For this experiment, both models use $4$ feature dimensions to make the comparison fair. Note that no weights were used as input to the Weighting Network.
We gain performance by using the feature network, which justifies our design choice.
\ifeccv
\else
\begin{table}
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|llllll}
\cellcolor{gray}Nbr Blocks & \cellcolor{gray}1 & \cellcolor{gray}2 & \cellcolor{gray}3 & \cellcolor{gray}4 & \cellcolor{gray}5 & \cellcolor{gray}6 \\
\hline
F$\uparrow$ $[\%]$ & 67.45 & 67.68 & 66.79 & 67.59 & \textbf{68.39} & 68.21 \\
\end{tabular}
}
\caption{\boldparagraph{Ablation Study.} We change the number of blocks for the feature network.
Optimal performance is achieved with $5$ blocks.}
\label{tab:ablation_blocks_feature_net}
\end{table}
\fi
\ifeccv
\begin{table}[ht]
\begin{minipage}[t]{0.495\linewidth}
\centering
\footnotesize
\setlength{\tabcolsep}{15pt}
\begin{tabular}{l|lllll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$\\
\hline
Only Weights & 66.69 \\
Only \emph{tanh}(Weights) & 68.92 \\
Full Model & \textbf{69.83} \\
\end{tabular}
\caption{\boldparagraph{Weight Counter Ablation.} Our full model outperforms models which only use the weight ($W_t^i$) as input to the Weighting Network $\mathcal{G}$. Normalization of the weight with a \emph{tanh}-transformation improves performance over no normalization.}
\label{tab:ablation_tanh_transform}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.495\linewidth}
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|llllll}
\cellcolor{gray}Nbr Blocks & \cellcolor{gray}1 & \cellcolor{gray}2 & \cellcolor{gray}3 & \cellcolor{gray}4 & \cellcolor{gray}5 & \cellcolor{gray}6 \\
\hline
F$\uparrow$ $[\%]$ & 67.45 & 67.68 & 66.79 & 67.59 & \textbf{68.39} & 68.21 \\
\end{tabular}
}
\caption{\boldparagraph{Ablation Study.} We change the number of blocks for the feature network.
Optimal performance is achieved with $5$ blocks.}
\label{tab:ablation_blocks_feature_net}
\end{minipage}
\begin{minipage}[t]{0.495\linewidth}
\centering
\footnotesize
\setlength{\tabcolsep}{10pt}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|llll}
\cellcolor{gray}n & \cellcolor{gray}2 & \cellcolor{gray}3 & \cellcolor{gray}4 & \cellcolor{gray}5 \\
\hline
F$\uparrow$ $[\%]$ & 68.86 & 68.20 & \textbf{68.87} & 68.21 \\
\end{tabular}
}
\caption{\boldparagraph{Ablation Study.} We alter the dimension of the feature vector.
The performance is maximized at $n = 4$.}
\label{tab:ablation_feature_dims}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.495\linewidth}
\centering
\footnotesize
\setlength{\tabcolsep}{15pt}
\resizebox{1.0\columnwidth}{!}
{
\begin{tabular}{l|lllll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$ \\
\hline
Without Feature Net & 67.44 \\
With Feature Net & \textbf{68.87} \\
\end{tabular}
}
\caption{\boldparagraph{Architecture Ablation.} We demonstrate the difference in performance with and without the feature network.}
\label{tab:ablation_bypass_feature_net}
\end{minipage}
\end{table}
\fi
\ifeccv
\else
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{10pt}
\begin{tabular}{l|llll}
\cellcolor{gray}n & \cellcolor{gray}2 & \cellcolor{gray}3 & \cellcolor{gray}4 & \cellcolor{gray}5 \\
\hline
F$\uparrow$ $[\%]$ & 68.86 & 68.20 & \textbf{68.87} & 68.21 \\
\end{tabular}
\caption{\boldparagraph{Ablation Study.} We alter the dimension of the feature vector.
The performance is maximized at $n = 4$.}
\label{tab:ablation_feature_dims}
\end{table}
\fi
\ifeccv
\else
\begin{table}
\centering
\footnotesize
\setlength{\tabcolsep}{15pt}
\begin{tabular}{l|lllll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$ \\
\hline
Without Feature Net & 67.44 \\
With Feature Net & \textbf{68.87} \\
\end{tabular}
\caption{\boldparagraph{Architecture Ablation.} We demonstrate the difference in performance with and without the feature network.}
\label{tab:ablation_bypass_feature_net}
\end{table}
\fi
The ablations suggest that the best performance is achieved with $n = 4$ feature dimensions, $5$ feature network blocks, and $2$ 3D-convolutional layers with kernel size 3.
Empirically, however, we found that $n = 5$, $6$ feature network blocks and $2$ 3D-convolutional layers with kernel size 3 gave marginally better results, and this is the configuration we report throughout the paper.
\boldparagraph{Effect of Weight Thresholding.}
RoutedFusion~\cite{Weder2020RoutedFusionLR} applies weight thresholding to filter outliers, \emph{i.e.}~a TSDF surface observation needs a weight larger than some threshold to not be removed during post-processing. Weight thresholding was not applied to any model in the main paper to avoid tuning an additional parameter and to keep the comparison between models fair. For example, on the Replica scenes, the optimal weight threshold is typically within the range 1-10, while for the CoRBS Human scene it is around 500. Tab.~\ref{tab:weight_thresholding} shows the effect when weight thresholding is applied on the Replica test set with the sensors $\{$ToF, PSMNet$\}$ with denoising. Regardless of the weight threshold, our method outperforms RoutedFusion.
\boldparagraph{DI-Fusion Ablation.}
Tab.~\ref{tab:di_fusion} shows the performance of DI-Fusion~\cite{huang2021di} for $\sigma=\{0.15, 0.06\}$ compared to our method.
The results for $\sigma=0.15$ are reported in the main paper; this value is also specified in the implementation provided by the authors.
$\sigma=0.06$ is suggested for one experiment in the DI-Fusion paper.
SenFuNet outperforms DI-Fusion for both choices of the $\sigma$ threshold.
In general, a high $\sigma$ yields high recall, but poor precision and vice versa.
This is, however, not true for the Human scene where SenFuNet outperforms DI-Fusion on all metrics.
On the copyroom scene, SenFuNet outperforms DI-Fusion both in terms of the F-score and precision.
On the Replica dataset, SenFuNet attains an F-score around 10 percentage points higher than the best DI-Fusion model.
\ifeccv
\else
\begin{table}
\centering
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lll|lll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ & \cellcolor{gray}F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ \\ \hline
\multicolumn{1}{c}{\emph{ToF\bplus{}MVS}} & \multicolumn{3}{c}{\emph{Copyroom}} & \multicolumn{3}{c}{\emph{Human}}\\ \hline
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 86.31 & 77.27 & \textbf{97.74} & 28.19 & 16.71 & 90.15 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.06 & 79.21 & 91.03 & 70.11 & 13.52 & 18.75 & 10.56 \\
\textbf{SenFuNet{} (Ours)} & \textbf{93.73} & \textbf{91.56} & 96.00 & \textbf{74.56} & \textbf{59.74} & \textbf{99.16} \\
\hline
\multicolumn{1}{c}{\emph{Replica: ToF\bplus{}PSMNet}} & \multicolumn{3}{c}{\emph{w/o denoising}} & \multicolumn{3}{c}{\emph{w. denoising}}\\ \hline
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 48.39 & 34.24 & \textbf{85.29} & 55.66 & 41.49 & \textbf{85.33} \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.06 & 60.30 & \textbf{72.49} & 51.88 & 63.02 & \textbf{75.14} & 54.46 \\
\textbf{SenFuNet{} (Ours)} & \textbf{69.29} & 62.05 & 79.81 & \textbf{76.47} & 73.58 & 79.77 \\
\hline
\multicolumn{1}{c}{\emph{Replica: SGM\bplus{}PSMNet}} & \multicolumn{3}{c}{\emph{w/o denoising}} & \multicolumn{3}{c}{\emph{w. denoising}}\\ \hline
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 47.29 & 32.92 & \textbf{85.14} & 52.65 & 38.50 & \textbf{83.62}\\
DI-Fusion~\cite{huang2021di} $\sigma$=0.06 & 59.24 & \textbf{71.73} & 50.61 & 63.39 & \textbf{75.59} & 54.73\\
\textbf{SenFuNet{} (Ours)} & \textbf{69.83} & 63.20 & 79.12 & \textbf{71.18} & 66.81 & 76.27\\
\end{tabular}
}
\caption{\textbf{DI-Fusion Ablation.} SenFuNet outperforms DI-Fusion for various choices of the $\sigma$ threshold. In general, a high $\sigma$ yields high recall, but poor precision and vice versa. This is, however, not true for the Human scene where SenFuNet outperforms DI-Fusion on all metrics. On the Replica dataset, SenFuNet attains around a 10 percentage point higher F-score than the best DI-Fusion model.}
\label{tab:di_fusion}
\end{table}
\fi
\ifeccv
\begin{table}[ht]
\begin{minipage}[c]{0.5\linewidth}
\centering
\resizebox{\columnwidth}{!}
{
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{l|lll|lll}
\cellcolor{gray}Model & \cellcolor{gray}F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ & \cellcolor{gray}F$\uparrow$ $[\%]$ & \cellcolor{gray}P$\uparrow$ $[\%]$ & \cellcolor{gray}R$\uparrow$ $[\%]$ \\ \hline
\multicolumn{1}{c}{\emph{ToF\bplus{}MVS}} & \multicolumn{3}{c}{\emph{Copyroom}} & \multicolumn{3}{c}{\emph{Human}}\\ \hline
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 86.31 & 77.27 & \textbf{97.74} & 28.19 & 16.71 & 90.15 \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.06 & 79.21 & 91.03 & 70.11 & 13.52 & 18.75 & 10.56 \\
\textbf{SenFuNet{} (Ours)} & \textbf{93.73} & \textbf{91.56} & 96.00 & \textbf{74.56} & \textbf{59.74} & \textbf{99.16} \\
\hline
\multicolumn{1}{c}{\emph{Replica: ToF\bplus{}PSMNet}} & \multicolumn{3}{c}{\emph{w/o denoising}} & \multicolumn{3}{c}{\emph{w. denoising}}\\ \hline
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 48.39 & 34.24 & \textbf{85.29} & 55.66 & 41.49 & \textbf{85.33} \\
DI-Fusion~\cite{huang2021di} $\sigma$=0.06 & 60.30 & \textbf{72.49} & 51.88 & 63.02 & \textbf{75.14} & 54.46 \\
\textbf{SenFuNet{} (Ours)} & \textbf{69.29} & 62.05 & 79.81 & \textbf{76.47} & 73.58 & 79.77 \\
\hline
\multicolumn{1}{c}{\emph{Replica: SGM\bplus{}PSMNet}} & \multicolumn{3}{c}{\emph{w/o denoising}} & \multicolumn{3}{c}{\emph{w. denoising}}\\ \hline
DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & 47.29 & 32.92 & \textbf{85.14} & 52.65 & 38.50 & \textbf{83.62}\\
DI-Fusion~\cite{huang2021di} $\sigma$=0.06 & 59.24 & \textbf{71.73} & 50.61 & 63.39 & \textbf{75.59} & 54.73\\
\textbf{SenFuNet{} (Ours)} & \textbf{69.83} & 63.20 & 79.12 & \textbf{71.18} & 66.81 & 76.27\\
\end{tabular}
}
\caption{\textbf{DI-Fusion Ablation.} SenFuNet outperforms DI-Fusion for various choices of the $\sigma$ threshold. In general, a high $\sigma$ yields high recall, but poor precision and vice versa. This is, however, not true for the Human scene where SenFuNet outperforms DI-Fusion on all metrics. On the Replica dataset, SenFuNet attains around a 10 percentage point higher F-score than the best DI-Fusion model.}
\label{tab:di_fusion}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[b]{0.48\linewidth}
\centering
\footnotesize
\setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray}Weight Threshold & \cellcolor{gray}1 & \cellcolor{gray}3 & \cellcolor{gray}5 & \cellcolor{gray}7 & \cellcolor{gray}9 & \cellcolor{gray}11 \\
\hline
RoutedFusion~\cite{Weder2020RoutedFusionLR} F$\uparrow$ $[\%]$& 62.12 & 71.40 & 74.12 & 75.07 & 75.11 & 74.99 \\
Ours F$\uparrow$ $[\%]$ & \textbf{77.24} & \textbf{79.35} & \textbf{79.67} & \textbf{79.39} & \textbf{78.47} & \textbf{77.40} \\
\end{tabular}
}
\caption{\boldparagraph{Weight Thresholding.} Our model outperforms RoutedFusion on the Replica test set even when weight thresholding is applied.}
\label{tab:weight_thresholding}
\begin{minipage}[t]{\linewidth}
\centering
\footnotesize
\setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[23mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
ToF~\cite{handa2014benchmark} & 7.48 & 1.99 & 0.664 & 83.65 & 58.52 & 45.84 & \textbf{84.85} \\
ToF denoising~\cite{handa2014benchmark} & 5.08 & 1.58 & 0.709 & 87.32 & 68.93 & 59.01 & 83.08 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
Ours & \textbf{3.19} & \textbf{1.26} & \textbf{0.791} & \textbf{90.99} & \textbf{78.87} & \textbf{76.56} & 81.68 \\
\end{tabular}
}
\caption{\textbf{ToF\bplus{}ToF denoising Fusion.} Our method learns to combine the preprocessed depth and the raw depth in a fashion that improves the overall reconstruction.}
\label{tab:tof_tof_denoising}
\end{minipage}
\end{minipage}
\end{table}
\fi
\ifeccv
\else
\begin{table}
\centering
\setlength{\tabcolsep}{3pt}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray}Weight Threshold & \cellcolor{gray}1 & \cellcolor{gray}3 & \cellcolor{gray}5 & \cellcolor{gray}7 & \cellcolor{gray}9 & \cellcolor{gray}11 \\
\hline
RoutedFusion~\cite{Weder2020RoutedFusionLR} F$\uparrow$ $[\%]$& 62.12 & 71.40 & 74.12 & 75.07 & 75.11 & 74.99 \\
Ours F$\uparrow$ $[\%]$ & \textbf{77.24} & \textbf{79.35} & \textbf{79.67} & \textbf{79.39} & \textbf{78.47} & \textbf{77.40} \\
\end{tabular}
}
\caption{\boldparagraph{Weight Thresholding.} Our model outperforms RoutedFusion on the Replica test set even when weight thresholding is applied.}
\label{tab:weight_thresholding}
\end{table}
\fi
\ifeccv
\begin{figure*}[t]
\centering
\tiny
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccc}
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/model.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/tof.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/tof_routing.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/fused.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/weighting.jpg} & \multirow[t]{12}{*}[25pt]{\includegraphics[align=t, width=.051\linewidth]{figures/colorbars/colorbar_tof_tof_routing.png}} \\
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_psmnet/office_0/model_small.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/tof_small.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_psmnet/office_0/tof_small.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view1/fused.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view1/weighting.jpg} & \\
& Model & ToF~\cite{handa2014benchmark} & ToF Denoising~\cite{handa2014benchmark} & Our Fused & Our Sensor Weight & \\
\end{tabular}
\caption{\boldparagraph{ToF\bplus{}ToF Denoising Fusion.} Our method fuses the raw ToF sensor with the ToF sensor preprocessed by a denoising network such that the fused result improves on both inputs. Note, for example, that the raw sensor's outliers outside the walls are removed, as are the denoising ToF sensor's outliers around the table. Fig.~\ref{fig:trajectory_office_0} provides the camera trajectory that was used for the evaluation. Note that the raw sensor is favored on surfaces that were viewed close to the camera while the denoising ToF sensor is favored when the measured depth is large.
See also Tab.~\ref{tab:tof_tof_denoising}.}
\label{fig:tof_tof_denoising}
\end{figure*}
\else
\begin{figure*}[t]
\centering
\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccc}
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/model.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/tof.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/tof_routing.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/fused.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view2/weighting.jpg} & \multirow[t]{12}{*}[35pt]{\includegraphics[align=t, width=.051\linewidth]{figures/colorbars/colorbar_tof_tof_routing.png}} \\
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_psmnet/office_0/model_small.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/tof_small.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_psmnet/office_0/tof_small.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view1/fused.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/tof_tof_routing/view1/weighting.jpg} & \\
& Model & ToF~\cite{handa2014benchmark} & ToF Denoising~\cite{handa2014benchmark} & Our Fused & Our Sensor Weighting & \\
\end{tabular}
\caption{\boldparagraph{ToF\bplus{}ToF denoising Fusion.} Our method fuses the raw ToF sensor with the ToF sensor preprocessed by a denoising network such that the fused result improves on both inputs. Note, for example, that the raw sensor's outliers outside the walls are removed, as are the denoising ToF sensor's outliers around the table. Fig.~\ref{fig:trajectory_office_0} provides the camera trajectory that was used for the evaluation. Note that the raw sensor is favored on surfaces that were viewed close to the camera while the denoising ToF sensor is favored when the measured depth is large.
See also Tab.~\ref{tab:tof_tof_denoising}.}
\label{fig:tof_tof_denoising}
\end{figure*}
\fi
\ifeccv
\else
\begin{table}
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|lllllll}
\cellcolor{gray} & \cellcolor{gray}MSE$\downarrow$ & \cellcolor{gray}MAD$\downarrow$ & \cellcolor{gray}IoU$\uparrow$ & \cellcolor{gray}Acc.$\uparrow$ & \cellcolor{gray}F$\uparrow$ & \cellcolor{gray}P$\uparrow$ & \cellcolor{gray}R$\uparrow$ \\
\multirow{-2}{*}{\cellcolor{gray} \backslashbox[23mm]{Model}{Metric}} & \cellcolor{gray}*e-04 & \cellcolor{gray}*e-02 & \cellcolor{gray}[0,1] & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ & \cellcolor{gray}$[\%]$ \\\hline
\multicolumn{8}{c}{\emph{Single Sensor}} \\ \hline
ToF~\cite{handa2014benchmark} & 7.48 & 1.99 & 0.664 & 83.65 & 58.52 & 45.84 & \textbf{84.85} \\
ToF denoising~\cite{handa2014benchmark} & 5.08 & 1.58 & 0.709 & 87.32 & 68.93 & 59.01 & 83.08 \\
\hline
\multicolumn{8}{c}{\emph{Multi-Sensor Fusion}} \\ \hline
Ours & \textbf{3.19} & \textbf{1.26} & \textbf{0.791} & \textbf{90.99} & \textbf{78.87} & \textbf{76.56} & 81.68 \\
\end{tabular}
}
\caption{\textbf{ToF\bplus{}ToF denoising Fusion.} Our method learns to combine the preprocessed depth and the raw depth in a fashion that improves the overall reconstruction.}
\label{tab:tof_tof_denoising}
\end{table}
\fi
\ifeccv
\begin{wrapfigure}[14]{R}{0.55\linewidth}
\vspace{-\intextsep}
\centering
\includegraphics[align=c, width=0.6\linewidth]{figures/replica/top_down/office_topdownview.jpg}
\caption{\boldparagraph{Camera Trajectory.} Top-down visualization of the camera trajectory used for the office 0 test scene in Fig.~\ref{fig:tof_tof_denoising}. Navigable space is colored white and the arrows show the camera direction.}
\label{fig:trajectory_office_0}
\end{wrapfigure}
\else
\begin{figure}[t]
\centering
\includegraphics[align=c, width=0.6\linewidth]{figures/replica/top_down/office_topdownview.jpg}
\caption{\boldparagraph{Camera Trajectory.} Top-down visualization of the camera trajectory used for the office 0 test scene in Fig.~\ref{fig:tof_tof_denoising}. Navigable space is colored white and the arrows show the camera direction.}
\label{fig:trajectory_office_0}
\end{figure}
\fi
\boldparagraph{ToF\bplus{}ToF denoising Fusion.} From the experiments with and without depth denoising in the main paper, we note that the depth denoising network brings advantages where planar noise is present, but disadvantages due to over-smoothing around depth discontinuities. A natural question arises: Can our framework combine the best of both worlds by fusing a raw ToF frame with a ToF frame preprocessed by a denoising network? Tab.~\ref{tab:tof_tof_denoising} along with Fig.~\ref{fig:tof_tof_denoising} show that our fused output significantly improves on the inputs. For example, the raw ToF contains many outliers behind the walls, while the ToF denoising sensor does not. Vice versa, the raw ToF does not contain outliers around the table and chairs, while the ToF denoising sensor does. As a result, our method selects the appropriate noise-free sensor where needed. Fig.~\ref{fig:trajectory_office_0} provides the camera trajectory that was used for the evaluation. Note that the raw sensor is favored on surfaces that are viewed close to the camera while the ToF denoising sensor is favored when the measured depth is large.
\ifeccv
\begin{figure}[t]
\centering
{\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccccc}
SR* & PSMNet 1, ToF 1 & PSMNet 1, ToF 1/2 & PSMNet 1, ToF 1/3 & PSMNet 1, ToF 1/3 & \\
\rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.22\linewidth]{figures/replica/asynch_exp/early_fusion_asynch_psmnet1fps_tof1fps.jpg} & \includegraphics[align=c, width=.22\linewidth]{figures/replica/asynch_exp/early_fusion_asynch_psmnet2fps_tof1fps.jpg} & \includegraphics[align=c, width=.22\linewidth]{figures/replica/asynch_exp/early_fusion_asynch_psmnet3fps_tof1fps.jpg} & \includegraphics[align=c, width=.22\linewidth]{figures/replica/asynch_exp/ours_asynch_psmnet3fps_tof1fps.jpg} & \multirow[t]{12}{*}[37pt]{\includegraphics[align=t, width=.056\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
& Early Fusion & Early Fusion & Early Fusion & SenFuNet (Ours) & \\
\end{tabular}
}
\caption{\boldparagraph{ToF\bplus{}PSMNet Fusion.} The performance gap to our method grows when asynchronous sensors are considered. Early Fusion degrades further when the ToF sampling rate is reduced to 1/3 of the PSMNet rate, while our method remains robust (best viewed on screen). SR* = Sampling Rate.}
\label{fig:asynch}
\end{figure}
\fi
\ifeccv
\begin{figure}[t]
\centering
{\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccccc}
\rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_no_routing/tsdf.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_no_routing/routedfusion.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_no_routing/difusion_std0.15.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_no_routing/fused.jpg} & \multirow[t]{12}{*}[43pt]{\includegraphics[align=t, width=.064\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_no_routing/tsdf.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_no_routing/routedfusion.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_no_routing/difusion_std0.15.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_no_routing/fused.jpg} & \\
& TSDF Fusion~\cite{curless1996volumetric} & RoutedFusion~\cite{Weder2020RoutedFusionLR} & DI-Fusion~\cite{huang2021di} $\sigma$=0.15 & SenFuNet (Ours) & \\
\end{tabular}
}
\caption{\boldparagraph{SGM\bplus{}PSMNet Fusion without denoising.} Our method fuses the sensors consistently better than the baseline methods.
In particular, our method learns to detect and remove outliers much more effectively (best viewed on screen).}
\label{fig:all_methods}
\end{figure}
\else
\begin{figure}[t]
\centering
{\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccc}
\rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.30\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_no_routing/tsdf.jpg} & \includegraphics[align=c, width=.30\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_no_routing/routedfusion.jpg} & \includegraphics[align=c, width=.30\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_no_routing/fused.jpg} & \multirow[t]{12}{*}[37pt]{\includegraphics[align=t, width=.0835\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.30\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_no_routing/tsdf.jpg} & \includegraphics[align=c, width=.30\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_no_routing/routedfusion.jpg} & \includegraphics[align=c, width=.30\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_no_routing/fused.jpg} & \\
& TSDF Fusion~\cite{curless1996volumetric} & RoutedFusion~\cite{Weder2020RoutedFusionLR} & SenFuNet (Ours) & \\
\end{tabular}
}
\caption{\boldparagraph{SGM\bplus{}PSMNet Fusion without denoising.} Our method fuses the sensors consistently better than the baseline methods.
In particular, our method learns to detect and remove outliers much more effectively (best viewed on screen).}
\label{fig:all_methods}
\end{figure}
\fi
\ifeccv
\boldparagraph{Visualizations.}
In Fig.~\ref{fig:tof_psmnet_supp} we show qualitative results on the sensors $\{$ToF, PSMNet$\}$. As concluded from the experiment on the sensors $\{$ToF, ToF denoising$\}$, the ToF sensor (without denoising) is not favored when the depth is large. We observe the same behavior on the office 0 scene in Fig.~\ref{fig:tof_psmnet_supp}.
Fig.~\ref{fig:all_methods} shows a comparison between our method on the sensors $\{$SGM, PSMNet$\}$ without denoising and TSDF Fusion~\cite{curless1996volumetric}, RoutedFusion~\cite{Weder2020RoutedFusionLR} and DI-Fusion~\cite{huang2021di}. We achieve better surface reconstruction performance than RoutedFusion and DI-Fusion and better outlier handling than all baseline methods.
\begin{figure*}[h!]
\centering
{\tiny
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccccc}
\multirow[c]{2}{*}[13pt]{\rotatebox[origin=c]{90}{\small Without Denoising}} & \rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/model.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/tof_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/psmnet_stereo_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/tsdf_middle_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/weighting.jpg} & \multirow[t]{12}{*}[21pt]{\includegraphics[align=t, width=.036\linewidth]{figures/colorbars/office_4_colorbar.png}}\\ & \rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/model_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/tof_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/psmnet_stereo_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/tsdf_middle_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/weighting.jpg} & \\
\multirow[c]{2}{*}[7pt]{\rotatebox[origin=c]{90}{\small With Denoising}} & \rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/model.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/tof.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/psmnet_stereo.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/tsdf_middle.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/weighting.jpg} & \\
& \rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/model_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/tof_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/psmnet_stereo_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/tsdf_middle_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/weighting.jpg} & \\
& & Model & ToF~\cite{handa2014benchmark} & PSMNet~\cite{chang2018pyramid} & TSDF Fusion~\cite{curless1996volumetric} & SenFuNet (Ours) & Our Sensor Weight & \\
\end{tabular}
}
\caption{\boldparagraph{ToF\bplus{}PSMNet Fusion.} Our method fuses the sensors consistently better than TSDF Fusion~\cite{curless1996volumetric}.
In particular, our method learns to detect and remove outliers much more effectively (best viewed on screen).}
\label{fig:tof_psmnet_supp}
\end{figure*}
In Fig.~\ref{fig:all_methods_denoising}, we add depth denoising to the model and evaluate the baseline methods against our method again. Our model achieves better outlier handling overall and more precise surface reconstruction in most regions compared to the Early Fusion method.
For more visual results, we refer to the supplementary videos.
\else
\boldparagraph{Visualizations.}
In Fig.~\ref{fig:tof_psmnet_supp} we show qualitative results on the sensors $\{$ToF, PSMNet$\}$. As concluded from the experiment on the sensors $\{$ToF, ToF denoising$\}$, the ToF sensor (without denoising) is not favored when the depth is large. We observe the same behavior on the office 0 scene in Fig.~\ref{fig:tof_psmnet_supp}.
Fig.~\ref{fig:all_methods} shows a comparison between our method on the sensors $\{$SGM, PSMNet$\}$ without denoising and TSDF Fusion~\cite{curless1996volumetric} and RoutedFusion~\cite{Weder2020RoutedFusionLR}. We achieve better surface
\FloatBarrier
\begin{figure*}[h!]
\centering
{\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccccc}
\multirow[c]{2}{*}[0pt]{\rotatebox[origin=c]{90}{\small Without Denoising}} & \rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/model.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/tof_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/psmnet_stereo_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/tsdf_middle_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/hotel_0/weighting.jpg} & \multirow[t]{12}{*}[29pt]{\includegraphics[align=t, width=.036\linewidth]{figures/colorbars/office_4_colorbar.png}}\\ & \rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/model_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/tof_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/psmnet_stereo_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/tsdf_middle_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet_wo_routing/office_0/weighting.jpg} & \\
\multirow[c]{2}{*}[0pt]{\rotatebox[origin=c]{90}{\small With Denoising}} & \rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/model.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/tof.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/psmnet_stereo.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/tsdf_middle.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/hotel_0/weighting.jpg} & \\
& \rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/model_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/tof_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/psmnet_stereo_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/tsdf_middle_small.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/fused.jpg} & \includegraphics[align=c, width=.15\linewidth]{figures/replica/tof_psmnet/office_0/weighting.jpg} & \\
& & Model & ToF~\cite{handa2014benchmark} & PSMNet~\cite{chang2018pyramid} & TSDF Fusion~\cite{curless1996volumetric} & SenFuNet (Ours) & Our Sensor Weighting & \\
\end{tabular}
}
\caption{\boldparagraph{ToF\bplus{}PSMNet Fusion.} Our method fuses the sensors consistently better than TSDF Fusion~\cite{curless1996volumetric}.
In particular, our method learns to detect and remove outliers much more effectively (best viewed on screen).}
\label{fig:tof_psmnet_supp}
\end{figure*}
\FloatBarrier
\noindent reconstruction performance than RoutedFusion and better outlier handling than both RoutedFusion and TSDF Fusion.
In Fig.~\ref{fig:all_methods_denoising}, we add denoising to the model and evaluate the baseline methods against our method again. Our model achieves better outlier handling overall and more precise surface reconstruction in most regions compared to the Early Fusion method.
For more visual results, we refer to the supplementary videos.
\fi
\section*{J. Limitations}
\label{sec:limits}
\ifeccv
\else
\addcontentsline{toc}{section}{\nameref{sec:limits}}
\fi
While our method generates better reconstructions on average, specific local regions may still not improve if the wrong sensor weighting is estimated. Fig.~\ref{fig:fail_cases} shows four failure cases of our method. The top left visualization of $\{$SGM, PSMNet$\}$ without depth denoising shows that the PSMNet surface is selected to a large degree. Our method typically selects the smoother surface (PSMNet) over a noisy surface (SGM), even though the noisier surface (SGM) may be better on average. The red rectangles on the bottom row and in the top right example show less severe failure cases where our method smooths between the sensors although selecting one of them would have resulted in a more accurate surface prediction. This typically happens around edges, but may in rare cases also happen on planar regions containing repetitive textures, for example a tiled bathroom shower (see bottom left example).
Lastly, our method has difficulties handling overlapping outliers from both sensors \emph{i.e.}} \def\Ie{\emph{I.e.}~where both sensors have registered an outlier at the same voxel. See the orange rectangle in the top right example. This is because the Outlier Filter can only be applied to voxels observed by a single sensor.
\ifeccv
\begin{figure*}[t]
\centering
{\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccc}
\rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/tsdf.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/routedfusion.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/difusion_std0.15.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/early_fusion.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/fused.jpg} & \multirow[t]{12}{*}[31pt]{\includegraphics[align=t, width=.048\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/tsdf.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/routedfusion.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/difusion_std0.15.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/early_fusion.jpg} & \includegraphics[align=c, width=.18\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/fused.jpg} & \\
& TSDF Fusion~\cite{curless1996volumetric} & RoutedFusion~\cite{Weder2020RoutedFusionLR} & DI-Fusion~\cite{huang2021di} & Early Fusion & SenFuNet (Ours) & \\
& & & $\sigma$=0.15 & & &
\end{tabular}
}
\caption{\boldparagraph{SGM\bplus{}PSMNet Fusion with denoising.} Our method fuses the sensors consistently better than the baseline methods. Compared to the Early Fusion baseline, our method removes more outliers and reconstructs most surfaces better (best viewed on screen).}
\label{fig:all_methods_denoising}
\end{figure*}
\else
\begin{figure*}[t]
\centering
{\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccccc}
\rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/tsdf.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/routedfusion.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/early_fusion.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/hotel_0/sgm_psmnet_routing/fused.jpg} & \multirow[t]{12}{*}[62.5pt]{\includegraphics[align=t, width=.065\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
\rotatebox[origin=c]{90}{Office 0} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/tsdf.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/routedfusion.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/early_fusion.jpg} & \includegraphics[align=c, width=.23\linewidth]{figures/replica/all_methods/office_0/sgm_psmnet_routing/fused.jpg} & \\
& TSDF Fusion~\cite{curless1996volumetric} & RoutedFusion~\cite{Weder2020RoutedFusionLR} & Early Fusion & SenFuNet (Ours) & \\
\end{tabular}
}
\caption{\boldparagraph{SGM\bplus{}PSMNet Fusion with denoising.} Our method fuses the sensors consistently better than the baseline methods. Compared to the Early Fusion baseline, our method removes more outliers and reconstructs most surfaces better (best viewed on screen).}
\label{fig:all_methods_denoising}
\end{figure*}
\fi
\ifeccv
\begin{figure*}[t]
\centering
\tiny
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{2.0}
\hspace*{-0.3cm}
\begin{tabular}{c|m{0.464\linewidth}<{\centering}|m{0.464\linewidth}<{\centering}|c}
\cline{2-3}
& \normalsize Without Denoising &
\normalsize With Denoising & \\ \cline{2-3}
\end{tabular}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccccccc}
\rotatebox[origin=c]{90}{Office 0} &
\includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/office_0/sgm_psmnet_no_routing/sgm.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/office_0/sgm_psmnet_no_routing/psmnet.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/office_0/sgm_psmnet_no_routing/fused.jpg} &
\includegraphics[align=c, trim={0 0 0.5cm 0}, clip, width=.153\linewidth]{figures/replica/fail_cases/office_0/tof_psmnet_routing/tof.jpg} &
\includegraphics[align=c, trim={0 0 0.5cm 0}, clip, width=.153\linewidth]{figures/replica/fail_cases/office_0/tof_psmnet_routing/psmnet.jpg} &
\includegraphics[align=c, trim={0 0 0.5cm 0}, clip, width=.153\linewidth]{figures/replica/fail_cases/office_0/tof_psmnet_routing/fused.jpg} &
\multirow[t]{12}{*}[25pt]{\includegraphics[align=t, width=.04\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
& SGM~\cite{hirschmuller2007stereo} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) & ToF~\cite{handa2014benchmark} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) \\
\rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_no_routing/sgm.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_no_routing/psmnet.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_no_routing/fused.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_routing/sgm.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_routing/psmnet.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_routing/fused.jpg} & \\
& SGM~\cite{hirschmuller2007stereo} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) & SGM~\cite{hirschmuller2007stereo} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) & \\
\end{tabular}
\caption{\boldparagraph{Failure Cases.} While our method generates better reconstructions on average, specific local regions may not improve. Overlapping outliers, some edges, and cases where one sensor looks noisy but is quantitatively better are especially difficult to handle (best viewed on screen).}
\label{fig:fail_cases}
\end{figure*}
\else
\begin{figure*}[t]
\centering
\scriptsize
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{2.0}
\hspace*{-0.53cm}
\begin{tabular}{c|m{0.464\linewidth}<{\centering}|m{0.464\linewidth}<{\centering}|c}
\cline{2-3}
& \normalsize Without Denoising &
\normalsize With Denoising & \\ \cline{2-3}
\end{tabular}
\renewcommand{\arraystretch}{1}
\begin{tabular}{cccccccc}
\rotatebox[origin=c]{90}{Office 0} &
\includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/office_0/sgm_psmnet_no_routing/sgm.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/office_0/sgm_psmnet_no_routing/psmnet.jpg} &
\includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/office_0/sgm_psmnet_no_routing/fused.jpg} &
\includegraphics[align=c, trim={0 0 0.5cm 0}, clip, width=.153\linewidth]{figures/replica/fail_cases/office_0/tof_psmnet_routing/tof.jpg} &
\includegraphics[align=c, trim={0 0 0.5cm 0}, clip, width=.153\linewidth]{figures/replica/fail_cases/office_0/tof_psmnet_routing/psmnet.jpg} &
\includegraphics[align=c, trim={0 0 0.5cm 0}, clip, width=.153\linewidth]{figures/replica/fail_cases/office_0/tof_psmnet_routing/fused.jpg} &
\multirow[t]{12}{*}[35pt]{\includegraphics[align=t, width=.04\linewidth]{figures/colorbars/colorbar_outlier_filter.png}} \\
& SGM~\cite{hirschmuller2007stereo} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) & ToF~\cite{handa2014benchmark} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) \\
\rotatebox[origin=c]{90}{Hotel 0} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_no_routing/sgm.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_no_routing/psmnet.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_no_routing/fused.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_routing/sgm.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_routing/psmnet.jpg} & \includegraphics[align=c, width=.153\linewidth]{figures/replica/fail_cases/hotel_0/sgm_psmnet_routing/fused.jpg} & \\
& SGM~\cite{hirschmuller2007stereo} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) & SGM~\cite{hirschmuller2007stereo} & PSMNet~\cite{chang2018pyramid} & SenFuNet (Ours) & \\
\end{tabular}
\caption{\boldparagraph{Failure Cases.} While our method generates better reconstructions on average, specific local regions may not improve. Overlapping outliers, some edges, and cases where one sensor looks noisy but is quantitatively better are especially difficult to handle (best viewed on screen).}
\label{fig:fail_cases}
\end{figure*}
\fi
\section{Introduction}
As one of the efficient compression methods, pruning reduces the number of parameters by replacing model parameters of low importance with zeros \citep{optimalbrain}.
Since magnitude-based pruning has shown that pruning can be conducted with low computational complexity \citep{SHan_2015}, various practical pruning methods have been studied to achieve higher compression ratio \citep{suyog_prune, sparseVD, louizos2018learning, gale2019state}.
Recently, pruning has also been connected to a deeper understanding of weight initialization.
Based on the \textit{Lottery Ticket} hypothesis \citep{frankle2018lottery}, \citet{renda2020comparing} suggest a weight-rewinding method to explore sub-networks from fully-trained models.
Furthermore, pruning methods at initialization steps without pre-trained models have also been proposed \citep{lee2018snip,Wang2020Picking}.
Despite a high compression ratio, fine-grained pruning that eliminates each parameter individually has practical issues to be employed in parallel computing platforms.
One of the popular formats to represent sparse matrices after pruning is the Compressed Sparse Row (CSR) whose structures are irregular.
For parallel computing, such irregular formats degrade inference performance that is dominated by matrix multiplications \citep{gale2020sparse}.
Algorithm 1 presents a conventional sparse matrix-vector multiplication (SpMV) algorithm using CSR format which involves irregular and data-dependent memory accesses.
Correspondingly, performance gain using a sparse matrix multiplication (based on CSR) is a lot smaller than the compression ratio of pruning \citep{scalpel2017}.
Structured pruning methods (e.g., block-based pruning \citep{Narang2017, zhou2021learning}, filter-based pruning \citep{li2016pruning}, and channel-based pruning \citep{he2017channel, Liu2017learning}) have been suggested to enhance parallelism by restricting the locations of pruned weights.
Those methods, however, induce accuracy degradation and/or a lower pruning rate compared to fine-grained pruning.
In this paper, as an efficient method to compress sparse NNs pruned by fine-grained pruning, we consider weight encoding techniques.
As shown in Algorithm 2, encoded weights are multiplied by a fixed binary matrix ${\bm{M}}^\oplus$ to reconstruct the original weights.
We propose an encoding method and ${\bm{M}}^\oplus$ design methodology to compress sparse weights in a regular format.
It should be noted that \textbf{a sparse matrix multiplication can be even slower than a dense matrix multiplication unless the pruning rate is high enough} \citep{scalpel2017, gale2020sparse}, an issue that does not arise with Algorithm 2 for memory-intensive workloads.
We study the maximum compression ratio of such encoding-based compression using entropy and propose a sequential fixed-to-fixed scheme that keeps high parallelism after fine-grained pruning.
We show that by our proposed fixed-to-fixed scheme, a compression ratio can approach the maximum (estimated by entropy) even under the variation of the unpruned weights in a block.
\SetKwComment{Comment}{$\triangleright$\ }{}
\SetCommentSty{textnormal}
\SetKw{KwBy}{by}
\begin{minipage}[t]{0.40\textwidth}
\begin{algorithm}[H]
\small
\vspace{0pt}
\caption{\small SpMV (CSR format)}
\DontPrintSemicolon
In: Dense vector ${\bm{x}}$, \;
\,\,\,\,\, CSR vectors $\bm{dat}$, $\bm{row}$, $\bm{col}$ \;
Out: Dense vector ${\bm{y}}$ \;
\For{$i \gets 1$ \KwTo $n$}{
\For{$j \gets \bm{row}_i$ \KwTo $\bm{row}_{i+1}$}{
${\bm{y}}_i \gets {\bm{y}}_i + \bm{dat}[j] \times {\bm{x}}[\bm{col}[j]]$ \;
}
}
\textcolor{blue}{/*Irregular, data-dependent access*/}
\label{alg:1}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{0.59\textwidth}
\begin{algorithm}[H]
\small
\vspace{0pt}
\caption{\small Proposed SpMV (using encoded weights)}
\DontPrintSemicolon
In: Dense vector ${\bm{x}} {\in}R^{m}$, Encoded vectors ${\bm{w}}^e_{1..n} {\in} R^{k}$ \;
\,\,\,\,\, Fixed matrix ${\bm{M}}^{\oplus}\in \{0,1\}^{k \times m}$, \textit{Mask} \,\,\,\, \textcolor{violet}{** $(k{\ll}m)$}\;
Out: Dense vector ${\bm{y}}$ \;
\For{$i \gets 1$ \KwTo $n$}{
${\bm{W}}_i \gets {\bm{w}}^e_{i} \times {\bm{M}}^{\oplus}$ over \textit{GF(2)} \;
${\bm{y}}_i = {\bm{W}}_i \cdot {\bm{x}}$ with \textit{Mask} (for zero skipping)
}
\textcolor{blue}{/* Decoding $m$ elements using $w^e_{i}$} \textcolor{blue}{(Regular access)*/} \;
\label{alg:2}
\end{algorithm}
\end{minipage}
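For illustration, the following Python/NumPy sketch contrasts the access patterns of Algorithm 1 and Algorithm 2. It is a simplified model rather than our actual implementation: the decoded bits are used directly as binary weights (in practice, each of the $n_w$ bit-planes is decoded and reassembled), and all sizes, variable names, and the random ${\bm{M}}^\oplus$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

def spmv_csr(dat, col, row, x):
    # Algorithm 1: CSR SpMV with irregular, data-dependent accesses.
    y = np.zeros(len(row) - 1)
    for i in range(len(row) - 1):
        for j in range(row[i], row[i + 1]):   # row extents vary per row
            y[i] += dat[j] * x[col[j]]        # gather at irregular positions
    return y

def spmv_encoded(w_enc, M_xor, mask, x):
    # Algorithm 2 (sketch): each row is decoded from a fixed-size code word
    # w_enc[i] (k bits) into m binary weights over GF(2), followed by a
    # dense, mask-aware dot product.  All accesses are regular.
    n, k = w_enc.shape
    y = np.zeros(n)
    for i in range(n):
        w_bits = (w_enc[i] @ M_xor) % 2       # k -> m decode over GF(2)
        y[i] = (w_bits * mask[i]) @ x         # zero skipping via the mask
    return y

# Toy usage with hypothetical sizes (k << m).
rng = np.random.default_rng(0)
m, n, k = 16, 4, 4
x = rng.standard_normal(m)
M_xor = rng.integers(0, 2, size=(k, m))
w_enc = rng.integers(0, 2, size=(n, k))
mask = rng.random((n, m)) > 0.9               # roughly 90% sparsity
print(spmv_encoded(w_enc, M_xor, mask, x))
\end{verbatim}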
\section{Fixed-to-Fixed Sparsity Encoding}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{EPS/sxor_iclr_figure.pdf}
\caption{Comparison between fixed-to-variable (e.g., CSR) sparsity format and fixed-to-fixed (proposed) sparsity format. (a): Memory bandwidth comparison. (b): Memory access patterns with irregular sparsity.}
\vspace{-0.3cm}
\label{fig:iclr_new}
\end{center}
\end{figure}
Data compression is a process of encoding original data into a smaller size.
If a fixed number of original bits are encoded into a fixed number of (smaller) bits, such a case is categorized into a ``fixed-to-fixed'' compression scheme.
Similarly, ``fixed-to-variable'', ``variable-to-fixed'', and ``variable-to-variable'' categories exist, where variable lengths of original and/or encoded bits allow a higher compression ratio than fixed ones (e.g., Huffman codes \citep{Huffman} as a fixed-to-variable scheme, Lempel-Ziv (LZ)-based coding \citep{LZ_coding} as a variable-to-fixed scheme, and Golomb codes \citep{Golomb} as a variable-to-variable scheme).
Among those 4 categories, ``fixed-to-fixed'' compression is the best for NNs that rely on parallel computing systems.
Fixed-to-fixed compression schemes are, however, challenging when fine-grained pruning is employed in NNs because the number of unpruned weights in a fixed-size block varies.
Accordingly, most previous sparsity representations (such as the CSR format) translate a fixed-size weight block into a variable-size block; such a translation demands non-uniform memory accesses that significantly degrade memory bandwidth utilization, as shown in Figure~\ref{fig:iclr_new}.
Specifically, for the fixed-to-variable sparsity format (e.g., CSR) in Figure~\ref{fig:iclr_new}, memory bandwidth utilization is low because fine-grained pruning induces a variable number of pruned weights per block or row, whereas memory is designed to access a fixed amount of consecutive data.
Since higher sparsity leads to higher relative standard deviation (i.e., coefficient of variation) on pruned weights in a block, low memory bandwidth is a significant issue even though the amount of data to be stored is reduced (see Appendix A).
As a result, for fixed-to-variable sparsity format, it is difficult to implement fine-grained pruning with parallel computing systems that require high memory bandwidth \citep{scalpel2017}.
On the other hand, fixed-to-fixed compression schemes in Figure~\ref{fig:iclr_new} can maintain the same memory bandwidth regardless of sparsity.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.50\textwidth]{EPS/sxor-intro-fig1.eps}
\caption{Fixed-to-fixed compression of a sparse weight matrix. Even when a block involves a varying number of unpruned weights, the size of each encoded block is fixed and determined by an average number of unpruned weights in blocks.}
\label{fig:intro_fixed_to_fixed_scheme}
\end{center}
\end{figure}
In this work, we propose a ``fixed-to-fixed'' compression scheme as shown in Figure~\ref{fig:intro_fixed_to_fixed_scheme} when the number of pruned weights in a block can vary.
A successful fixed-to-fixed compression of sparse NNs should consider the followings:
\begin{itemize}
\item \textbf{(High compression ratio)}
The maximum compression ratio is limited by the minimum entropy (that can be obtained by a fixed-to-variable scheme as we discuss in Appendix D). Suppose that a block to be encoded contains (fixed) $n_b$ bits among which (fixed) $n_u$ bits are unpruned. A fixed-to-fixed encoding scheme is required to support high compression close to $(n_b / n_u )$ (estimated by entropy). Fixed-to-fixed decoding, then, accepts (fixed) $N_{in}$ bits as an input and produces $N_{out}$ bits as an output while $N_{out}/N_{in}$ = $n_b / n_u$.
\item \textbf{(Variation tolerance)}
For a fine-grained pruning, $n_u$ is given as a random variable whose distribution is affected by pruning rate, $n_b$ size, a particular pruning method, and so on. Our goal is to maintain a fixed-to-fixed scheme with a high compression ratio even under $\mathrm{Var}[n_u]\neq 0$.
In Figure~\ref{fig:intro_fixed_to_fixed_scheme}, for example, 5 blocks of original data have various $n_u$ values while the size of an encoded block is fixed to be 4 (=$\mathbb{E}[n_u]$). We will discuss how to design a variation tolerant encoding.
\end{itemize}
\section{Random Number Generator}
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=0.50\columnwidth]{EPS/sxor-intro-fig2.eps}}
\caption{Encoding of weights using an XOR-gate decoder as a random number generator.}
\label{fig:encoding_matched}
\end{center}
\end{figure}
Before we discuss compression schemes, let us assume that a binary masking matrix is given to represent which weights are pruned or not (note such a binary masking matrix can be compressed significantly \citep{ourBMF}).
Then, a pruned weight can be described as a don't care value (\text{\ding{53}}) that is to be masked.
We also assume that 1) pruning each weight is performed independently with pruning rate $S$ and 2) each unpruned weight bit is assigned 0 or 1 with equal probability (such assumptions are not necessary when we demonstrate our experimental results in Section 5).
To obtain both ``high compression ratio'' and ``variation tolerance'' while a fixed-to-fixed compression scheme is considered, we adopt random number generator schemes that enable encoding/decoding of weights.
A random number generator accepts a fixed number of inputs and produces $n_b$ bits so as to implement a fixed-to-fixed compression scheme.
As shown in Figure~\ref{fig:encoding_matched}, a weight block is compared with every output of a random number generator.
If there is an output vector matching original (partially masked) weights, then a corresponding input vector of a random number generator can be an encoded input vector.
As an effort to increase the Hamming distance between any two outputs (i.e., the number of bit positions in which the two outputs differ), $2^{n_u}$ outputs out of $2^{n_b}$ possible candidates need to be randomly selected.
Note that random encoding has already been suggested by Claude Shannon to introduce channel capacity that is the fundamental theory in digital communication \citep{eccbook}.
Since then, practical error correction coding techniques have been proposed to implement random-like coding by taking into account efficient decoding (instead of using a large look-up table).
\begin{figure*}[t]
\begin{center}
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{EPS/eff_novar.eps}
\caption{$n_u$ (=a number of unpruned weights in a block) is fixed to be $N_{in}$ (i.e., $\mathrm{Var}[n_u]{=}0$).}
\label{fig:E_for_fixed}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{EPS/eff_normalvar.eps}
\caption{$n_u$ follows a binomial distribution $B(N_{out}, 1-S)$.}
\label{fig:E_for_varied}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{EPS/eff_transform.eps}
\caption{$n_u$ is empirically obtained by pruning the first decoder layer of the Transformer using a magnitude-based pruning method.}
\label{fig:E_for_transformer}
\end{subfigure}
\end{center}
\caption{Encoding efficiency (\%) of random XOR-gate decoders. $S$ is pruning rate and $N_{out}$ is given as $\lfloor N_{in} \cdot (1/(1{-}S))\rfloor$.}
\label{fig:E_multiple_cases}
\end{figure*}
Similar to error correction coding that usually depends on linear operations over Galois Field with two elements, or $GF(2)$ \citep{eccbook}, for simple encoding of original data, recently, two compression techniques for sparse NNs have been proposed.
An XOR-gate decoder produces (a large number of) binary outputs using (a relatively much smaller number of) binary inputs while outputs are quantized weights \citep{kwon2020structured}.
Another example is to adopt a Viterbi encoding/decoding scheme \citep{Forney1973} to generate multiple bits using a single bit as an input \citep{viterbi_quantized}.
For a block that cannot be encoded into a compressed one by a random number generator, we can attach patch data to fix unmatched bits \citep{kwon2020structured} or re-train the model to improve the accuracy \citep{viterbi_quantized}.
To compare the random number generation capability of various block-level compression schemes, we introduce `encoding efficiency' given as a percentage.
\begin{equation}\label{eq:enc_efficiency}
E\, \textrm{\textbf{(Encoding Efficiency)}} = \frac{\textrm{\# of correctly matched bits}}{\textrm{\# of unpruned bits}} \times 100 (\%)
\end{equation}
Let $S$ be pruning rate ($0\le S \le 1$).
To measure encoding efficiency ($E$), we assume that the compression ratio of a random number generator (=the number of output bits / the number of input bits) is $1/(1-S)$.
We generate numerous randomly pruned (binary) weight blocks, and for each block, we investigate all of the possible outputs that a random number generator can produce.
If no output of the generator perfectly matches a block, then the maximum number of correctly matched bits over all outputs is recorded for that block.
We repeat such an experiment for all of the blocks.
Note that $E$ cannot be higher than 100\% for any generators.
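As an illustration of how we measure $E$, the following Python sketch estimates Eq.~(\ref{eq:enc_efficiency}) by brute force for a small random linear (XOR-gate style) generator; the block sizes, the number of blocks, and all variable names are toy assumptions rather than our experimental configuration.
\begin{verbatim}
import numpy as np

def encoding_efficiency(M_xor, blocks, masks):
    # Brute-force E: for every (block, mask) pair, try all 2^N_in inputs of a
    # linear GF(2) generator and keep the candidate output that matches the
    # largest number of unpruned bits.
    n_in = M_xor.shape[1]
    cand = (np.arange(2 ** n_in)[:, None] >> np.arange(n_in)) & 1
    outputs = (cand @ M_xor.T) % 2            # all decodable output blocks
    matched, unpruned = 0, 0
    for blk, msk in zip(blocks, masks):
        hits = ((outputs == blk) & msk).sum(axis=1)
        matched += hits.max()
        unpruned += msk.sum()
    return 100.0 * matched / unpruned

rng = np.random.default_rng(0)
N_in, N_out, S, n_blocks = 4, 16, 0.75, 200
M_xor = rng.integers(0, 2, size=(N_out, N_in))   # random generator
blocks = rng.integers(0, 2, size=(n_blocks, N_out))
masks = rng.random((n_blocks, N_out)) > S        # True = unpruned bit
print(f"E = {encoding_efficiency(M_xor, blocks, masks):.1f}%")
\end{verbatim}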
\subsection{Fixed Pruning Rate in a Block}
For simplicity, we assume that $n_u$ in a block is a fixed number.
Let us study $E$ when $n_u$ is fixed using an XOR-gate decoder introduced in \citep{kwon2020structured}.
For an XOR-gate decoder, when $N_{out}$ is the number of output bits and $N_{in}$ is the number of input bits, a matrix ${\bm{M}}^\oplus \in \{0,1\}^{N_{out} \times N_{in}}$ presents connectivity information between an input vector ${\bm{w}}^x (\in \{0,1\}^{N_{in}})$ and an output vector ${\bm{w}}^y (\in \{0,1\}^{N_{out}})$ such that we have ${\bm{w}}^y = {\bm{M}}^\oplus \cdot {\bm{w}}^x$ over $GF(2)$.
For example, if the second row of ${\bm{M}}^\oplus$ (with $N_{in}=4$ and $N_{out}=8$) is given as $[1 \, 0 \, 1 \, 1]$, then ${\bm{w}}^y_2 = {\bm{w}}^x_1 \oplus {\bm{w}}^x_3 \oplus {\bm{w}}^x_4$ (`$\oplus$' indicates a binary XOR operation or an addition over $GF(2)$ equivalently).
An element of ${\bm{M}}^\oplus$ is randomly filled with 0 or 1 as a simple random number generator design technique \citep{kwon2020structured}.
To measure $E$, let $N_{in}/N_{out} \approx 1-S$ such that $N_{out} = \lfloor N_{in} \cdot (1/(1{-}S))\rfloor$.
Correspondingly, for a certain $S$, a block size (=$N_{out}$) increases as $N_{in}$ increases.
When $n_b = N_{out}$ and $n_u = N_{in}$, Figure~\ref{fig:E_for_fixed} describes statistics of $E$ when random ${\bm{M}}^\oplus$ matrices are associated with random blocks.
From Figure~\ref{fig:E_for_fixed}, it is clear that increasing $N_{in}$ is the key to improving encoding efficiency.
Note, however, that increasing $N_{in}$ and $N_{out}$ increases the decoding complexity (due to a large ${\bm{M}}^\oplus$) and the encoding complexity as well (due to an exponentially large search space).
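As a minimal, self-contained example of the decoding rule above (the particular ${\bm{M}}^\oplus$ below is hand-picked for illustration only):
\begin{verbatim}
import numpy as np

# N_in = 4, N_out = 8; row 2 of M_xor is [1, 0, 1, 1],
# so w_y[1] = w_x[0] XOR w_x[2] XOR w_x[3].
M_xor = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 1],
                  [1, 1, 1, 0],
                  [0, 1, 0, 1],
                  [1, 1, 1, 1]])
w_x = np.array([1, 0, 1, 0])     # encoded (input) vector
w_y = (M_xor @ w_x) % 2          # decoded (output) vector over GF(2)
print(w_y)                       # w_y[1] = 1 ^ 1 ^ 0 = 0
\end{verbatim}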
\subsection{Variable Pruning Rate in a Block}
Now we allow $n_u$ in a block to fluctuate.
For Figure~\ref{fig:E_for_varied}, we assume that pruning each weight is a Bernoulli event (with $S$ probability) such that $n_u$ in a block follows a binomial distribution $B(N_{out}, 1{-}S)$ (thus, $\mathbb{E} [n_u]=N_{out}(1{-}S)$ and $\mathrm{Var}[n_u] = N_{out}S(1{-}S)$).
$N_{in}$ is given as $\mathbb{E} [n_u]$ in Figure~\ref{fig:E_for_varied}.
Compared to Figure~\ref{fig:E_for_fixed}, $E$ becomes lower mainly because some blocks would have $n_u$ larger than $N_{in}$ (i.e., too many unpruned weight bits that a random number generator cannot target).
\begin{wrapfigure}{r}{0.5\textwidth}
\begin{center}
\centerline{\includegraphics[width=0.48\columnwidth]{EPS/sxor-intro-fig3.eps}}
\caption{Encoding of two blocks when a number of unpruned weights can vary in a block.}
\label{fig:encoding_two_varied_blocks}
\end{center}
\end{wrapfigure}
Note that the coefficient of variation of $n_u$ (=$\sqrt{\mathrm{Var}[n_u]} {/} \mathbb{E}[n_u]$ {=} $\sqrt{S/(N_{out}(1{-}S))}$) decreases as $N_{out}$ increases.
Indeed, the gap between $E$ in Figure~\ref{fig:E_for_fixed} and $E$ in Figure \ref{fig:E_for_varied} tends to be slightly reduced when $N_{in}$ (as well as corresponding $N_{out}$) increases.
To illustrate, as shown in Figure~\ref{fig:encoding_two_varied_blocks}, when there are two blocks of different $n_u$, concatenating two blocks into a block (thus, increasing $N_{out}$) can increase the probability of successful encoding due to efficient usage of $N_{in}$.
Increasing $N_{in}$ and $N_{out}$ by $n$ times, however, requires an XOR-gate decoder that is $n^2$ times larger, with only a marginal gain in $E$.
Another way to improve $E$ under the variation on $n_u$ is to decrease $N_{out}$ and increase $N_{in}$.
In this case, however, the compression ratio is reduced consequently.
We need a solution to design a fixed-to-fixed sparsity compression technique that can improve $E$ of a random number generator under the variation on $n_u$ without sacrificing compression ratio (i.e., $N_{in}/N_{out} = 1{-}S$).
To validate our observations obtained by synthetic random data, we compute $E$ of an XOR-gate decoder using the Transformer model \citep{transformer_original}.
The first fully-connected layer of the Transformer is pruned by a magnitude-based pruning method \citep{SHan_2015} with pruning rate $S$.
Interestingly, $E$ described in Figure~\ref{fig:E_for_transformer} is similar to that of Figure~\ref{fig:E_for_varied}.
As such, our assumption that pruning a weight is a Bernoulli event is valid for the context of magnitude-based pruning.
\section{Proposed Sequential encoding Scheme}
If $\mathrm{Var}[n_u]$ is non-zero and $N_{in}$ is fixed, then blocks of small $n_u$ ($< N_{in}$) would have many possible input vectors with matching output vectors, while other blocks with large $n_u$ ($> N_{in}$) may have no matching input vector at all.
Such unbalance among encoding success rates over blocks can be mitigated if a part of input vectors associated with a block of small $n_u$ can be reused for the neighboring blocks of large $n_u$.
In other words, by sharing some parts of an input vector for multiple consecutive blocks (of diverse $n_u$ values), input search space size of each block can be balanced.
Reusing inputs is fulfilled by shift registers that have been also introduced to convolutional codes such as Viterbi codes \citep{eccbook}.
In this section, we propose sequential encoding techniques to address the limitations of previous fixed-to-fixed compression schemes (for example, the XOR-gate-only decoder \citep{kwon2020structured} lacking tolerance for $n_u$ variation, and Viterbi encoders \citep{viterbi_quantized} that receive only one bit as an input, i.e., $N_{in}$ is restricted to be 1).
\paragraph{Weight manipulation}
Since our encoding/decoding techniques process data in a block-level (in the form of a 1-D vector), original sparse weight matrices (or tensors) need to be reshaped through grouping, flattening, and slicing.
Assuming that a number format has the precision of $n_w$ bits (e.g., $n_w$=32 for FP32), as the first step of weight manipulation, a weight matrix ${\bm{W}} \in \mathbb{R}^{m \times n}$ is grouped into $n_w$ binary matrices ${\bm{W}}^b_{1..n_w} {\in} \{0,1\}^{m \times n}$ (otherwise, $n_w$ successive bits are pruned or unpruned).
Then, each binary matrix (or tensor) ${\bm{W}}^b_i$ is flattened to be a 1-D vector and each vector is sliced into ${\bm{w}}^b_{i, 1..l}$ blocks when $l$ (=$ \lceil\frac{mn}{N_{out}}\rceil$) indicates the number of blocks in a 1-D (flattened) vector.
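A minimal sketch of this manipulation for an FP32 matrix is given below. The exact bit-plane ordering, the zero-padding of the last block, and all names are illustrative assumptions; in practice the binary pruning mask is kept alongside the blocks.
\begin{verbatim}
import numpy as np

def to_blocks(W, n_out):
    # Group an m x n float32 matrix into n_w = 32 binary bit-planes,
    # flatten each plane into a 1-D vector, and slice it into blocks
    # of N_out bits (the last block is zero-padded).
    m, n = W.shape
    bits = np.unpackbits(W.astype(np.float32).view(np.uint8),
                         bitorder='little').reshape(m * n, 32)
    l = -(-(m * n) // n_out)                 # ceil(mn / N_out) blocks per plane
    pad = l * n_out - m * n
    return [np.pad(bits[:, i], (0, pad)).reshape(l, n_out) for i in range(32)]

W = np.random.default_rng(0).standard_normal((6, 10)).astype(np.float32)
blocks = to_blocks(W, n_out=16)              # 32 planes, each 4 blocks of 16 bits
print(len(blocks), blocks[0].shape)
\end{verbatim}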
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=0.95\textwidth]{EPS/sxor-overview.eps}}
\caption{Proposed fixed-to-fixed sequential encoding/decoding scheme. Weight encoding is performed offline, and thus, complex computation for encoding is allowed. Encoded weights are decoded during inference through XOR-gate decoders that are best implemented by ASICs or FPGAs. Pruned weights are filled by random values during weight decoding.}
\label{fig:sequantial_overview}
\end{center}
\end{figure}
\paragraph{Decoding with an input sequence}
For an XOR-gate decoder (as a non-sequential decoder) at time index $t$, an output vector ${\bm{w}}^{b'}_t$ ($\in\{0,1\}^{N_{out}}$) is a function of an input vector ${\bm{w}}^e_t$ ($\in\{0,1\}^{N_{in}}$) such that ${\bm{w}}^e_t$ is utilized only for one time index.
In our proposed sequential encoding scheme, ${\bm{w}}^e_t$ is exploited for multiple time indices using shift registers.
Specifically, we copy ${\bm{w}}^e_t$ to a shift register whose outputs are connected to the inputs of the next shift register.
Let $N_s$ be the number of shift registers.
In the proposed scheme, an XOR-gate decoder receives inputs from $N_s$ shift registers as well as ${\bm{w}}^e_t$.
Then, as shown in Figure~\ref{fig:sequantial_overview}, a sequential decoder (consisting of an XOR-gate decoder and shift registers) produces ${\bm{w}}^{b'}_t$ as a function of an input sequence of (${\bm{w}}^e_t$, ${\bm{w}}^e_{t-1}$, ...) while such a function can be described as ${\bm{w}}^{b'}_t = {\bm{M}}^\oplus ({\bm{w}}_{t}^e {}^{\frown} {\bm{w}}_{t-1}^e {}^{\frown} ... {}^{\frown} {\bm{w}}_{t-N_{s}}^e)$ over $GF(2)$ where $A {}^{\frown} B$ implies a concatenation of $A$ and $B$ and the sequence length is ($N_{s} + 1$).
Note that in Figure~\ref{fig:sequantial_overview}, an input vector ${\bm{w}}^e_t$ is reused $N_{s}$ times and an XOR-gate decoder accepts $(N_{s}+1) \cdot N_{in}$ bits to yield $N_{out}$ bits (hence, ${\bm{M}}^\oplus \in \{0,1\}^{N_{out} \times ((N_{s}+1) \cdot N_{in})}$).
Increasing $N_{s}$ enables 1) a larger XOR-gate decoder (without increasing $N_{in}$) that improves $E$ as demonstrated in Figure~\ref{fig:E_multiple_cases} and 2) multiple paths from an input vector to multiple output vectors (of various $n_u$) resulting in balanced encoding.
Note that our XOR-gate decoder (or effectively memory decompressor) is best implemented by ASICs or FPGAs where each XOR requires only 6 transistors \citep{digital_book}.
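A minimal Python sketch of this sequential decoder is shown below; the shift registers are modeled as a list of the $N_s$ previous input vectors, initialized with zeros (an assumption, since the behavior before $t=1$ is not essential here), and all sizes and names are illustrative.
\begin{verbatim}
import numpy as np

def sequential_decode(w_enc_seq, M_xor, n_s):
    # M_xor has shape (N_out, (N_s + 1) * N_in).  At each time step the
    # XOR-gate decoder sees the current input vector concatenated with
    # the contents of the N_s shift registers.
    n_in = w_enc_seq.shape[1]
    regs = [np.zeros(n_in, dtype=int) for _ in range(n_s)]   # shift registers
    outputs = []
    for w_e in w_enc_seq:
        stacked = np.concatenate([w_e] + regs)               # (N_s+1)*N_in bits
        outputs.append((M_xor @ stacked) % 2)                # decode over GF(2)
        if n_s > 0:
            regs = [w_e] + regs[:-1]                         # shift by one block
    return np.array(outputs)

rng = np.random.default_rng(0)
N_in, N_out, N_s, T = 3, 8, 1, 5
M_xor = rng.integers(0, 2, size=(N_out, (N_s + 1) * N_in))
w_enc_seq = rng.integers(0, 2, size=(T, N_in))
print(sequential_decode(w_enc_seq, M_xor, N_s).shape)        # (5, 8)
\end{verbatim}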
\begin{wrapfigure}{r}{0.5\textwidth}
\begin{center}
\centerline{\includegraphics[width=0.5\columnwidth]{EPS/sxor-ns-compare.eps}}
\caption{Sequential decoding example when $N_{s}$=1, $N_{in}$=3, and $N_{out}$=8. An input is utilized for ($N_{s}$+1) time indices through shift registers.}
\label{fig:seq_dec_example}
\end{center}
\vskip -0.2in
\end{wrapfigure}
\paragraph{Balanced encoding}
Figure~\ref{fig:seq_dec_example} illustrates a decoding process when $N_{s}$=1, $N_{in}$=3, and $N_{out}$=8.
Each weight vector to be encoded contains a different number of unpruned weights.
For ${\bm{w}}^{b'}_t$ at time index $t$ (in Figure~\ref{fig:seq_dec_example}), a smaller number of unpruned weights tends to enlarge the search space for ${\bm{w}}^e_t$.
On the other hand, for ${\bm{w}}^{b'}_{t+1}$ at time index $(t+1)$, a large number of unpruned weights highly restricts the search space for ${\bm{w}}^e_t$.
Correspondingly, compared to the case when ${\bm{w}}^e_t$ is associated with only one ${\bm{w}}^{b'}$ block (i.e., non-sequential encoding with $N_s$=0), search space for ${\bm{w}}^e_t$ can be balanced as more ${\bm{w}}^{b'}$ blocks of various $n_u$ are correlated with ${\bm{w}}^e_t$.
Note that as $\mathrm{Var}[n_u]$ of one block increases, according to the central limit theorem, $N_s$ is required to be larger to maintain the balance of encoding capability of each ${\bm{w}}^e_t$.
As we demonstrate in the next section, under the variation on $n_u$, even small non-zero $N_s$ can enhance $E$ substantially while increasing $N_{in}$ (and $N_{out}$ while $N_s$=0) improves $E$ only marginally (as described in Figure~\ref{fig:E_multiple_cases}).
\paragraph{Encoding algorithm}
In the case of a non-sequential encoding scheme, because of one-to-one correspondence between ${\bm{w}}^e_t$ and ${\bm{w}}^{b'}_t$ through ${\bm{M}}^\oplus$, encoding can be performed independently for each block (e.g., for a given masked ${\bm{w}}^b_t$, we select one matching ${\bm{w}}^{b'}_t$ out of all available ${\bm{w}}^{b'}_t$ sets by a decoder).
Such block-wise encoding, however, is not applicable to a sequential encoding scheme that needs to consider the whole sequence of ${\bm{w}}^b$ blocks to find an input sequence fed into a decoder.
If we were to find an output sequence matching all $l$ blocks by naively exploring every feasible output sequence provided by a generator together with its input sequence (${\bm{w}}_{1}^e, {\bm{w}}_{2}^e, ..., {\bm{w}}_{l}^e$), the time-complexity of encoding would be $\mathcal O(2^{N_{in} \cdot l})$.
Fortunately, the sequential decoding operations shown in Figure~\ref{fig:seq_dec_example} can be modeled as a hidden Markov model, where each state is represented by concatenating ${\bm{w}}^e_t$, ${\bm{w}}^e_{t-1}$, ..., ${\bm{w}}^e_{t-N_s}$ and there are $2^{N_{in}}$ paths for the next state transitions \citep{viterbi1998intuitive}.
Consequently, the time- and space-complexity can be reduced to be $\mathcal O(2^{N_{in} \cdot (N_{s}+1)} \cdot l)$ by dynamic programming that computes the maximum-likelihood sequence in a hidden Markov model \citep{Forney1973}.
For details of our encoding algorithm, the reader is referred to Appendix E.
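As a rough illustration of this dynamic-programming search (not the exact algorithm of Appendix E; all function and variable names are ours), the following sketch enumerates states given by the $N_s$ previously chosen input vectors and keeps, per state, the sequence with the fewest unmatched unpruned bits:
\begin{verbatim}
import numpy as np
from itertools import product

def viterbi_encode(M, W, mask, N_in, N_s):
    # M    : (N_out, (N_s+1)*N_in) binary XOR-decoder matrix.
    # W    : (l, N_out) target weight bits; mask marks unpruned positions.
    # State at time t = the N_s previously chosen input vectors.
    symbols = [tuple(s) for s in product((0, 1), repeat=N_in)]
    init = tuple([(0,) * N_in] * N_s)          # zero-initialized registers
    best = {init: (0, [])}                     # state -> (errors so far, inputs)
    for t in range(len(W)):
        nxt = {}
        for state, (cost, seq) in best.items():
            for s in symbols:
                window = np.array(s + sum(state, ()))   # current, then previous inputs
                out = M.dot(window) % 2
                c = cost + int(np.sum(mask[t] * (out != W[t])))
                new_state = (s,) + state[:-1] if N_s > 0 else ()
                if new_state not in nxt or c < nxt[new_state][0]:
                    nxt[new_state] = (c, seq + [s])
        best = nxt
    return min(best.values(), key=lambda v: v[0])   # (total unmatched bits, inputs)
\end{verbatim}
The number of states is at most $2^{N_{in} N_s}$ with $2^{N_{in}}$ transitions each, reproducing the $\mathcal O(2^{N_{in} \cdot (N_{s}+1)} \cdot l)$ complexity quoted above.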
\paragraph{Lossless compression}
No random number generator can produce outputs perfectly matching all unpruned weights (i.e., $E$ is always less than 100\%).
To correct unmatched outputs of a random number generator (by flipping $0 \leftrightarrow 1$) in order to enable lossless compression, the locations of all unmatched weight bits need to be recorded.
Note that if $E \approx 1$, the number of unmatched weight bits is much smaller than the number of encoded weight bits.
If that is the case, such correction information can be stored in a separate on-chip memory that can be independently accessed without disturbing decoding operations.
We propose a format representing such correction information in Appendix F where each unmatched weight bit requires $N_c$ bits in the correction format ($N_c$ is around 10 in Appendix E).
Taking into account a compression ratio of a generator given as $N_{out}$/$N_{in}$ and additional correction information (using $N_c$ bits per one unmatched weight bit), a binary weight matrix ${\bm{W}}^b \in \{0,1\}^{m \times n}$ can be compressed into ($\frac{N_{in}}{N_{out}}mn$+$N_{err}$) bits when $N_{err}$ = $N_c \times$(\# of unmatched bits) = $N_c mn (1-S)(1-E)$.
Subsequently, memory reduction can be expressed as
\begin{equation}\label{eq:mem_reduction}
\begin{split}
\textrm{Memory Save} &= 1-((N_{in} / N_{out})mn + N_{err})/(mn) \\
&= 1-(1-S)(1+(1-E)N_c),
\end{split}
\end{equation}
when $N_{in}/N_{out}$ is given as $(1-S)$.
Thus, memory reduction approaches $S$ when $E$ approaches 1.
For the overall design complexity analysis of the proposed compression, refer to Appendix G.
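As a quick numerical check of Eq.~\ref{eq:mem_reduction}, assuming for instance $N_c{=}10$, the following short sketch (ours, for illustration only) evaluates the memory reduction:
\begin{verbatim}
def memory_reduction(S, E, N_c=10):
    # Memory reduction = 1 - (1-S)*(1 + (1-E)*N_c), with N_in/N_out = 1-S.
    return 1.0 - (1.0 - S) * (1.0 + (1.0 - E) * N_c)

print(memory_reduction(0.9, 0.99))   # 0.89, slightly below S
print(memory_reduction(0.9, 1.00))   # 0.90, the maximum S
\end{verbatim}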
\section{Experimental Results}
In this section, we demonstrate the encoding capability of our proposed sequential encoding techniques using synthetic random data and NNs pruned by various pruning methods.
Even though various heuristic algorithms can be suggested, we adopt a simple dynamic programming technique for weight encoding that minimizes the number of unmatched weight bits.
For our experiments, $N_{in}$ is selected to be 8 such that we feed a decoder on a byte-level.
\begin{figure*}[t]
\begin{center}
\centerline{\includegraphics[width=1.0\textwidth]{EPS/pr90_8.eps}}
\caption{Impact of $N_s$ with various $N_{out}$ using 1M random bits, $N_{in}=8$, and $S$=0.9.}
\label{fig:nout_exp}
\end{center}
\vskip -0.2in
\end{figure*}
\subsection{Synthetic Random Data (\boldmath$N_{in}$=8)}
\paragraph{Setup}
We generate a random weight 1-D vector of 1M bits.
We also create a random masking data of 1M bits in which the percentage of zeros equals $S$.
The ${\bm{M}}^\oplus$ matrix (which defines the structure of an XOR-gate decoder) needs to be designed to maximize the randomness among outputs.
Measuring randomness, however, is challenging, and such a measurement may not be highly correlated with $E$.
Instead, we try numerous random ${\bm{M}}^\oplus$ matrices and choose the particular ${\bm{M}}^\oplus$ with the highest $E$.
Specifically, for a given set of $N_{in}$ and $N_{out}$, each element of ${\bm{M}}^\oplus\in\{0,1\}^{N_{out} \times ((N_s+1)\cdot N_{in})}$ is randomly assigned to 0 or 1 with equal probability.
Then, $E$ of those random ${\bm{M}}^\oplus$ matrices is estimated by using given random binary weight vectors and masking data.
The best ${\bm{M}}^\oplus$ providing the highest $E$ (for a given set of $N_{in}$ and $N_{out}$) is then utilized for our experiments.
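A minimal sketch of this random search (assuming the hypothetical \texttt{viterbi\_encode} helper from the earlier sketch is available to estimate $E$ for a candidate matrix) could look as follows:
\begin{verbatim}
import numpy as np

def pick_best_M(num_trials, N_in, N_out, N_s, W, mask, seed=0):
    # Draw random 0/1 candidate matrices and keep the one with the highest
    # estimated match rate E on the given weight bits and mask.
    rng = np.random.default_rng(seed)
    n_unpruned = int(mask.sum())
    best_M, best_E = None, -1.0
    for _ in range(num_trials):
        M = rng.integers(0, 2, size=(N_out, (N_s + 1) * N_in))
        errors, _ = viterbi_encode(M, W, mask, N_in, N_s)  # from the earlier sketch
        E = 1.0 - errors / n_unpruned
        if E > best_E:
            best_M, best_E = M, E
    return best_M, best_E
\end{verbatim}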
\paragraph{Compression capability}
Figure~\ref{fig:nout_exp} demonstrates the impact of $N_s$ on $E$ and corresponding memory reduction(\%) with various $N_{out}$ and 1M random bits when $N_{in}$=8 and $S$=0.9.
Regardless of $N_s$, as $N_{out}$ increases (i.e., the compression ratio (=$N_{out} / N_{in}$=$N_{out}/8$) of a decoder increases), the number of encoded bits is reduced while the number of unmatched (error) bits increases.
Because of such a trade-off between the number of encoded bits and the number of error bits, there exists a certain $N_{out}$ that maximizes the memory reduction.
Note that compared to a non-sequential encoding (of $N_s$=0), sequential encoding (even with small $N_s$) significantly reduces the number of error bits and maintains high $E$ (of almost 100\%) until $N_{out}$ reaches $N_{in} {\times} (1{/}(1{-}S))$=80.
Indeed, memory reduction becomes highest (=89.32\%) when $N_s$ is highest and $N_{out}$ is 80 in Figure~\ref{fig:nout_exp}.
As we discussed for lossless compression in Section 5, 89.32\% of memory reduction is close to $S$(=90\%) that is the maximum memory reduction obtained when $E \approx 1$.
In the remainder of this paper, $N_{out}$ is given as $N_{in} {\times} (1{/}(1{-}S))$ to maximize the memory reduction.
\begin{wraptable}{r}{0.6\textwidth}
\small
\caption{Memory reduction (\%) using 1M random bits and various $N_s$ and $S$ when $N_{in}=8$ and $N_{out}{=}N_{in} \cdot 1/(1-S)$.}
\label{table:experiment_maxpr}
\begin{center}
\begin{sc}
\begin{tabular}{c|c|c|c|c}
\Xhline{2\arrayrulewidth}
\backslashbox{$N_{s}$}{$S$} & 60.0\% & 70.0\% & 80.0\% & 90.0\% \\
\Xhline{2\arrayrulewidth}
0 & 38.6\% & 53.8\% & 67.9\% & 83.5\% \\ \hline
1 & 55.9\% & 67.4\% & 77.5\% & 88.5\% \\ \hline
2 & 58.4\% & 69.1\% & 78.9\% & 89.3\% \\ \hline
\Xhline{2\arrayrulewidth}
\end{tabular}
\end{sc}
\end{center}
\end{wraptable}
\paragraph{Impact of $S$ on memory reduction}
Table~\ref{table:experiment_maxpr} presents memory reduction using various $S$.
For a certain $S$ in Table~\ref{table:experiment_maxpr}, as $N_s$ increases, the difference between $S$ and memory reduction decreases.
In other words, regardless of $S$, sequential encoding/decoding principles are crucial for memory reduction to approach $S$ which is the maximum.
Increasing $S$ also facilitates memory reduction to approach $S$ in Table~\ref{table:experiment_maxpr} as described in Eq.~\ref{eq:mem_reduction} where increasing $S$ with a constant $E$ enhances memory reduction.
\begin{wrapfigure}{r}{0.45\textwidth}
\begin{center}
\includegraphics[width=0.45\columnwidth]{EPS/zero_ratio.eps}
\caption{$E$ with various ratio of zero in a random vector.}
\label{fig:zero_ratio}
\end{center}
\end{wrapfigure}
\paragraph{Inverting technique}
So far, we assumed that a binary weight matrix holds equal amounts of zeros and ones.
A few representations such as binary-coding-based quantization \citep{rastegariECCV16, xu2018alternating} and signed INT8 \citep{jacob2018quantization} would inherently justify such an assumption.
However, there exist exceptional representations as well (e.g., FP32).
We conduct experiments to find a relationship between the ratio of zeros and $E$ using a random weight vector as shown in Figure~\ref{fig:zero_ratio}.
$E$ increases when a substantial fraction of the unpruned weight bits are zeros, because there is a higher chance of finding trivial inputs (all zeros fed into the XOR operations) that produce zero outputs.
Hence, to improve $E$ (especially when $N_s$ is low), we propose an inverting technique where an entire binary weight vector is inverted if the ratio of zeros is less than 50\%.
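A sketch of this inverting step (names are ours; one flag bit per layer would record whether the inversion was applied so that the decompressor can undo it) is given below:
\begin{verbatim}
import numpy as np

def maybe_invert(w_bits, mask):
    # Invert the whole binary weight vector if fewer than half of the
    # unpruned bits are zeros; the returned flag records the flip.
    unpruned = w_bits[mask == 1]
    zero_ratio = float(np.mean(unpruned == 0)) if unpruned.size else 1.0
    if zero_ratio < 0.5:
        return 1 - w_bits, True
    return w_bits, False
\end{verbatim}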
\begin{table*}[t]
\caption{$E$ and memory reduction of sparse Transformer and ResNet-50 pruned by two different pruning methods. When $N_s$ is 0 or 1, inverting can be applied to a layer if unpruned weights accommodate more zeros than ones. Inverting has no effect for weights of signed INT8.}
\label{table:experiment_encoding_eff}
\small
\begin{center}
\begin{tabular}{c|l|c|c|c|c|c|c}
\cline{3-8}
\multicolumn{2}{c|}{} & \multicolumn{3}{c|}{$E$ (\%)} & \multicolumn{3}{c}{Memory Reduction (\%)} \\
\multicolumn{2}{c|}{} & \multicolumn{3}{c|}{(Max: 100\%)} & \multicolumn{3}{c}{(Max: $S$)} \\
\Xhline{2\arrayrulewidth}
\mr{2}{Model} & \mr{2}{$S$(Method)} & Non-Seq. & \multicolumn{2}{c|}{Sequential} & Non-Seq. & \multicolumn{2}{c}{Sequential} \\
& & $N_{s}$=0(Inv.) & $N_{s}$=1(Inv.) & $N_{s}$=2 & $N_{s}$=0(Inv.) & $N_{s}$=1(Inv.) & $N_{s}$=2 \\
\Xhline{2\arrayrulewidth}
\mr{4}{Transformer\\\textit{WMT14 en-de}\\(FP32)} & 70\%(Mag.) & 93.8(94.5) & 98.0(98.3) & \textbf{98.7} & 50.3(52.4) & 63.1(63.8) & \textbf{65.3} \\
& 70\%(Rand.) & 94.6(95.2) & 99.2(99.3) & \textbf{99.8} & 52.8(54.6) & 66.6(66.8) & \textbf{68.3} \\ \cline{2-8}
& 90\%(Mag.) & 92.6(93.9) & 97.6(97.9) & \textbf{98.4} & 82.4(83.7) & 87.4(87.7) & \textbf{88.2} \\
& 90\%(Rand.) & 93.7(94.5) & 98.7(98.9) & \textbf{99.5} & 83.5(84.3) & 88.5(88.7) & \textbf{89.3} \\ \hline
\mr{4}{ResNet-50\\\textit{ImageNet}\\(FP32)} & 70\%(Mag.) & 94.4(95.0) & 98.6(98.7) & \textbf{99.1} & 52.2(54.2) & 64.7(65.3) & \textbf{66.5} \\
& 70\%(Rand.) & 94.6(95.1) & 99.1(99.2) & \textbf{99.7} & 52.7(54.2) & 66.5(66.7) & \textbf{68.3} \\ \cline{2-8}
& 90\%(Mag.) & 92.7(93.7) & 97.3(97.6) & \textbf{98.1} & 82.5(83.5) & 87.1(87.4) & \textbf{87.9} \\
& 90\%(Rand.) & 92.7(93.5) & 97.6(97.9) & \textbf{98.7} & 82.5(83.3) & 87.5(87.7) & \textbf{88.6} \\ \hline
\mr{4}{ResNet-50\\\textit{ImageNet}\\(Signed \\INT8)} & 70\%(Mag.) & 93.9(N/A) & 98.5(N/A) & \textbf{99.1} & 50.9(N/A) & 64.5(N/A) & \textbf{66.4} \\
& 70\%(Rand.) & 96.2(N/A) & 99.7(N/A) & \textbf{99.9} & 57.6(N/A) & 68.3(N/A) & \textbf{69.0} \\ \cline{2-8}
& 90\%(Mag.) & 92.4(N/A) & 97.1(N/A) & \textbf{98.0} & 82.2(N/A) & 86.9(N/A) & \textbf{87.8} \\
& 90\%(Rand.) & 93.5(N/A) & 98.2(N/A) & \textbf{99.2} & 83.3(N/A) & 88.0(N/A) & \textbf{89.0} \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[t]
\caption{Coefficient of variation of $n_u$ and $E$ of two selected layers of the Transformer pruned by random, magnitude-based, or L0 regularization pruning method.}
\label{table:comp_diff_pruning_method}
\small
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
\Xhline{2\arrayrulewidth}
\mr{4}{Pruning\\Method} & \mr{4}{Target\\$S$} & \multicolumn{4}{c|}{Layer: dec3/self\_att/q} & \multicolumn{4}{c}{Layer: dec3/ffn2} \\
& & \multicolumn{4}{c|}{($512 \times 512$), FP32} & \multicolumn{4}{c}{($2048 \times 512$), FP32} \\ \cline{3-10}
& & \mr{2}{Coeff. of\\Var. ($n_u$)} & \multicolumn{3}{c|}{$E$ (\%)} & \mr{2}{Coeff. of\\Var. ($n_u$)} & \multicolumn{3}{c}{$E$ (\%)} \\ \cline{4-6}\cline{8-10}
& & & $N_{s}$=0 & $N_{s}$=1 & $N_{s}$=2 & & $N_{s}$=0 & $N_{s}$=1 & $N_{s}$=2 \\
\Xhline{2\arrayrulewidth}
Random & \mr{3}{0.7} & 0.299 & 94.6 & 99.2 & \textbf{99.8} & 0.303 & 94.6 & 99.2 & \textbf{99.8} \\
Mag. & & 0.324 & 94.5 & 98.9 & \textbf{99.6} & 0.366 & 94.1 & 98.3 & \textbf{98.9} \\
L0 Reg. & & 0.347 & 94.5 & 99.0 & \textbf{99.6} & 0.331 & 94.3 & 98.7 & \textbf{99.2} \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\end{center}
\vskip -0.1in
\end{table*}
\subsection{Sparse Transformer and ResNet-50 ($N_{in}$=8)}
We measure compression capability of our proposed sequential encoding scheme using sparse Transformer \citep{transformer_original} on WMT'14 en-de dataset and ResNet-50 \citep{resnet} on ImageNet.
Those two models\footnote{https://github.com/google-research/google-research/tree/master/state\_of\_sparsity} in FP32 format are pruned by various methods including magnitude-based one \citep{SHan_2015}, L0 regularization \citep{louizos2018learning}, and random pruning \citep{gale2019state} (also variational dropout \citep{sparseVD} in Appendix H).
For the ResNet-50 model (on ImageNet), we also consider signed INT8 format \citep{jacob2018quantization}.
Table~\ref{table:experiment_encoding_eff} presents $E$ and memory reduction when every layer of the Transformer and ResNet-50 is pruned by the same pruning rate $S$.
Both $E$ and memory reduction are significantly improved by increasing $N_s$.
Even compared to the case when the inverting technique is applied to non-sequential encoding ($N_s{=}0$), we observe that sequential encoding ($N_s {>} 0$) without inverting yields much higher compression capability.
Note that the compression capabilities of random pruning and magnitude-based pruning are similar in Table~\ref{table:experiment_encoding_eff}, which justifies our experiments with synthetic random data.
This is further verified in Table~\ref{table:comp_diff_pruning_method}, which uses two selected layers of the Transformer.
Compared to random pruning, magnitude-based and L0 regularization pruning exhibit somewhat lower $E$, which is related to their higher coefficients of variation of $n_u$.
See Appendix H for additional results.
All in all, our proposed encoding method designed in the context of random pruning is also effective for other fine-grained pruning methods.
Through various cases including synthetic data and benchmark models, we demonstrated the superiority of the proposed encoding scheme over previous fixed-to-fixed compression methods, including a non-sequential XOR-gate decoder with $N_s$=0 \citep{kwon2020structured} and a Viterbi-based encoder structure where $N_{in}$ is limited to 1 \citep{viterbi_quantized}.
\section{Conclusion}
In this paper, we proposed a sequential encoding scheme that is a useful fixed-to-fixed compression for sparse NNs.
We studied the maximum compression ratio using entropy based on the strategy of mapping a weight block into a small number of symbols.
We also investigated random number generators as a practical fixed-to-fixed decoder using an XOR-gate decoder.
Random number generators can improve compression capability if input vectors are reused for multiple time indices through shift registers.
\bibliographystyle{abbrvnat}
\section{Introduction}
Their high intrinsic brightness and the tight relation between their period and luminosity established by the Leavitt law \citep{1908AnHar..60...87L,1912HarCi.173....1L} make long period Cepheids the most precise standard candles for the determination of extragalactic distances. Their relatively high mass ($m \approx 10-15\,M_\odot$), together with the brevity of their passage in the instability strip combine to make them very rare stars. As a result, only a handful of long-period Cepheids ($P > 30$\,d) are located within a few kiloparsecs of the Sun. RS\,Puppis (HD 68860), whose period is $P=41.5$\,d, is among the most luminous Cepheids in the Milky Way. I present in Sect.~\ref{lightechoes} the remarkable light echoes that occur in its circumstellar nebula, and how the distance to the Cepheid can be derived from their observation. Sect.~\ref{modeling} is dedicated to the modeling of the pulsation of RS\,Pup, and the calibration of the Baade-Wesselink projection factor.
\section{Light echoes in the nebula of RS Puppis\label{lightechoes}}
The circumstellar dusty nebula of RS Puppis was discovered by \citet{1961PASP...73...72W}. This author proposed that the presence of the dusty environment results in the creation of ``light echoes'' of the photometric variations of the Cepheid (see, e.g., \cite{2003AJ....126.1939S} for a review on this phenomenon). The variable illumination wavefronts emitted by the Cepheid propagate in the nebula, where they are scattered by the dust grains. The additional optical path of the scattered light compared to the light coming directly from the Cepheid causes the appearance of a time delay (and therefore a phase difference) between the directly observed Cepheid cycle and the photometric variation of the dust in the nebula.
The first detection of the light echoes of RS\,Pup was achieved by \citet{1972A&A....16..252H}. More than 30 years later, CCD observations were collected by \citet{2008A&A...480..167K} using the EMMI imager installed at the 3.6-m ESO NTT. These authors derived a distance based on the phase lag of selected dust knots with respect to the Cepheid. \citet{2009A&A...495..371B} however objected that the hypothesis of coplanarity of the selected dust knots was likely incorrect (see also \cite{2008MNRAS.387L..33F}), and that the determination of the 3D structure of the dust is necessary to derive the distance using the echoes.
To measure this distribution, polarimetric imaging has been shown to be a powerful technique, as applied for instance on V838\,Mon by \citet{2008AJ....135..605S} (see also \cite{1994ApJ...433...19S}).
Using this technique, the structure of the nebula was determined by \citet{2012A&A...541A..18K}, from ground-based observations with the VLT/FORS instrument. The dust is distributed over a relatively thin, ovoidal shell, probably swept away by the strong radiation pressure from the Cepheid light. These authors conclude that the dust mass in the nebula is too large to be explained by mass loss from the Cepheid itself, and that the nebula is a pre-existing interstellar dust cloud in which the Cepheid is temporarily embedded.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth,clip]{Kervella_8o02_fig1}
\caption{{\bf Left:} HST/ACS color image of the nebula of RS Pup \citep{Kervella:2014lr}. The light echoes of the maximum light phase are visible as irregular blue rings, due to the hotter temperature of the star at its maximum light. {\bf Right:} Radial velocity and angular diameter data used for the pulsation modeling of the Cepheid \citep{2017A&A...600A.127K}.}
\label{rspup:fig1}
\end{figure}
However, the seeing-limited VLT/FORS polarimetric images could not provide a sufficiently detailed map of the structure of the nebula. A time sequence of polarimetric images of RS\,Pup was therefore obtained by \citet{Kervella:2014lr} using the HST/ACS instrument. These images revealed in spectacular detail the structure of the nebula (Fig.~\ref{rspup:fig1}, left panel), the propagation of the echoes (Fig.~\ref{rspup:fig2}; see also \url{https://vimeo.com/108581936} for a video of the light echoes), and the degree of linear polarization of the scattered light over the nebula.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth,clip]{Kervella_8o02_fig2a}
\includegraphics[width=0.4\textwidth,clip]{Kervella_8o02_fig2b}
\caption{{\bf Left:} Mean scattered light intensity around RS\,Pup \citep{Kervella:2014lr} (arbitrary grey scale). {\bf Right:} Phase delay of the light variation over the nebula with respect to the Cepheid cycle.}
\label{rspup:fig2}
\end{figure}
The combination of the polarization information and the phase delay of the light echoes over the nebula results in a distance of $1910 \pm 80$\,pc ($4.2\%$), equivalent to a parallax of $\varpi = 0.524 \pm 0.022$\,mas \citep{Kervella:2014lr}. This value is significantly different from the parallax of RS\,Pup, $\varpi_\mathrm{GDR2} = 0.613 \pm 0.026$\,mas (adopting a Gaia parallax shift of +29\,$\mu$as), reported in the second Gaia data release \citep{2018A&A...616A...1G}. The parallax of a field star interacting with the nebula of RS\,Pup (labeled S1, $\varpi[\mathrm{S1}] = 0.532 \pm 0.048$\,mas; \citet{2019A&A...623A.117K}) is however consistent with the light echo distance to the Cepheid.
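For reference, the quoted parallax simply follows from the light echo distance via $\varpi\,[\mathrm{mas}] = 1000/d\,[\mathrm{pc}]$; a minimal sketch (our own arithmetic check) gives:
\begin{verbatim}
d_pc, sigma_d = 1910.0, 80.0
varpi_mas = 1000.0 / d_pc                  # ~0.524 mas
sigma_varpi = varpi_mas * sigma_d / d_pc   # ~0.022 mas (first-order propagation)
\end{verbatim}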
\section{Pulsation modeling and the projection factor\label{modeling}}
The classical Baade-Wesselink (BW) distance determination technique (also known as the parallax-of-pulsation) is based on the comparison of the linear amplitude of the radius variation of a pulsating star with its angular amplitude (see, e.g., \cite{2008A&A...488...25G}). The SPIPS approach is a recent BW implementation presented by \citet{2015A&A...584A..80M}, that uses a combination of spectroscopic radial velocity, multi-band photometry and interferometric angular diameters to build a consistent model of the pulsating photosphere of the star.
A strong limitation of the BW technique comes from the fact that the distance and the spectroscopic projection factor ($p$-factor) are fully degenerate. The $p$-factor is defined as the ratio between the photospheric velocity and the disk integrated radial velocity measured by spectroscopy \citep{2014IAUS..301..145N}.
The $p$-factor is presently the major limiting factor in terms of accuracy for the application of the BW technique, and the research is particularly active on this topic \citep{2007A&A...474..975G,2011A&A...534A..94S, 2012A&A...543A..55N,2013MNRAS.436..953P,2015A&A...576A..64B,2016A&A...587A.117B,2017A&A...597A..73N,2018IAUS..330..335N}.
Using the VLTI/PIONIER optical interferometer, \citet{2017A&A...600A.127K} measured the changing angular diameter of RS\,Pup over its pulsation cycle (Fig.~\ref{rspup:fig1}, right panel). These measurements, together with the light echo distance from the HST/ACS observations, enabled these authors to resolve the degeneracy between the distance and the $p$-factor, and obtain a $p$-factor value of $p=1.25 \pm 0.06$ for RS\,Pup.
\section{Conclusion}
The determination of an accurate distance to RS\,Pup using its light echoes is an important step toward a reliable calibration of the Leavitt law, as it is still today, even after the second Gaia data release \citep{2018A&A...616A...1G}, the most accurate distance estimate to a long period Cepheid.
Moreover, this original technique is independent from the trigonometric parallax, enabling an independent validation.
The light echo distance can also be employed to calibrate the Baade-Wesselink method, and in particular the $p$-factor. A reliable calibration of this classical technique provides a solid basis to estimate the distance of Cepheids that are too far for direct trigonometric parallax measurements with Gaia.
While BW analyses of individual Cepheids are at present accessible only up to the LMC and SMC \citep{2004A&A...415..531S,2011A&A...534A..95S,2017A&A...608A..18G}, it will soon become possible, with extremely large telescopes such as the E-ELT or the TMT, to determine the distances of individual Cepheids in significantly more distant galaxies of the Local Group.
\begin{acknowledgements}
The research leading to these results has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 695099, project CepBin).
Support for Program number GO-13454 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
Some of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts.
\end{acknowledgements}
\bibliographystyle{aa}
\section*{Acknowledgements}
PHF was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-06ER41418 and PQH was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-97ER41027.
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\section{Introduction and statement of results}
We consider dynamical systems consisting
of a compact space, $X$, together with
a homeomorphism, $\varphi: X \rightarrow X$.
We say that such a system is \emph{minimal} if the
only closed sets $Y \subseteq X$ such that $\varphi(Y) = Y$ are
$Y = X$ and $Y = \emptyset$. Equivalently, for every $x$ in $X$, its orbit,
$\{ \varphi^{n}(x) \mid n \in {\mathbb Z} \}$, is dense in $X$.
There are a number of examples of such systems: rotation of the
circle through an angle which is an irrational multiple
of $2 \pi$, odometers and certain diffeomorphisms
of spheres of odd dimension $d \geq 3$ constructed by Fathi and Herman \cite{FatHer:Diffeo}.
All of these examples share one common feature: the spaces
involved are
homogeneous. There are several ways to make this more precise,
but one simple way would be to observe that the group
of homeomorphisms
acts transitively on the points.
In \cite{floyd1949}, Floyd gave the first example of a minimal system
where the space is not homogeneous in this (or an even stronger) sense.
Floyd began with the $3^{\infty}$-odometer, $(X, \varphi)$,
which is a minimal system with $X$ compact, metrizable,
totally disconnected and without isolated points. Any two such
spaces are homeomorphic and we refer to such a space as a
\emph{Cantor
set}. Floyd then constructed another minimal
system, $(\tilde{X}, \tilde{\varphi})$, together with a continuous
surjection $ \pi: \tilde{X} \rightarrow X$
satisfying $\pi \circ \tilde{\varphi} = \varphi \circ \pi$.
In general, we refer to such a map as a \emph{factor map},
we say
that $(X, \varphi)$ is a \emph{factor} of $(\tilde{X}, \tilde{\varphi})$ and
that $(\tilde{X}, \tilde{\varphi})$ is an
\emph{extension} of $(X, \varphi)$. In Floyd's example,
some points $x$ in $X$ have $\pi^{-1}\{ x \}$ homeomorphic
to the unit interval, $[0, 1]$, while for others, it is a single
point.
It is then quite easy to see, using the fact that $X$ is
totally disconnected,
that the space $\tilde{X}$ has some
connected components which are single points and
some homeomorphic to
the interval.
This example has been generalized in several ways (for example \cite{auslander1959, MR956049, FloGjeJohSys, HadJoh:AuslanderSys}). In Floyd's
example, the points $x$ with $\pi^{-1}\{ x \}$ infinite all
lie in a single orbit. Haddad and Johnson in \cite{HadJoh:AuslanderSys} showed that the set
of such $x$ could be much larger and even have positive measure
under the unique invariant measure for $(X, \varphi)$.
More importantly for our purposes, Gjerde and Johansen \cite{FloGjeJohSys} showed
that the $3^{\infty}$-odometer could be replaced with any
minimal system, $(X, \varphi)$, with $X$ a Cantor set. Their
principal tool was the Bratteli--Vershik model
for such systems \cite[Chapters 4 and 5]{Put:BookMinCantorSys}.
We will describe this in more detail in Section \ref{Sec:ConstProofs}.
Our aim here is to show that the interval, $[0,1]$, appearing as
$\pi^{-1}\{ x\}$, can be replaced by more complicated spaces. We are
particularly interested in the case of the
$n$-dimensional cube (that is, $[0,1]^{n}$), for any
positive integer $n$.
Although it is natural to generalize to more complicated spaces,
let us explain briefly why we want such a result
in the specific case of $[0,1]^n$.
The Elliott program aims to show that
a broad class of $C^{*}$-algebras may be classified up
to isomorphism by their $K$-theory \cite{Ell:classprob}. One very useful
way of constructing $C^{*}$-algebras is via groupoids \cite{MR584266} and
it becomes a natural question: which $C^{*}$-algebras in
the Elliott scheme
can be realized via a groupoid construction?
In view of the classification results themselves, this amounts
to constructing groupoids whose associated $C^{*}$-algebras
are classifiable and have some prescribed $K$-theory.
If one begins with a minimal action of the integers
on a Cantor set, it is known that the $K_{0}$-group is a simple
acyclic dimension
group and $K_{1}$ is the integers \cite{MR1194074}.
Moreover, any such
$K$-theory can be realized from such a system \cite{MR1194074}.
In another direction, if one takes a minimal action, $\varphi$,
of the
integers on some space $X$ and considers a closed,
non-empty subset $Y \subseteq X$ such that $Y$ meets each orbit
at most once, one can construct the associated ``orbit-breaking groupoid'':
the equivalence relation where the classes are either the original orbits of
$\varphi$ which do not meet $Y$
or the half-orbits, split at $Y$. The change in $K$-theory passing from the
crossed product $C^{*}$-algebra to the orbit-breaking subalgebra
can be computed, essentially in terms of the $K$-theory
of the space $Y$ (see \cite{Put:Excision} for details).
Marrying these two ideas would seem to generate many interesting
groupoids, except that the choices for $K^{*}(Y)$, where
$Y$ is a closed subset of the Cantor set, are very limited. Here, we
would like to
replace the dynamics $(X, \varphi)$ with $(\tilde{X}, \tilde{\varphi})$,
without changing the associated $K$-theory, but allowing us
to find more interesting spaces $Y$ inside of
$[0, 1]^{n} \cong \pi^{-1} \{ x\}$. These $C^*$-algebraic
applications can be found in \cite{DPS:minDSKth}.
Our construction and proof follow those of Gjerde and
Johansen in \cite{FloGjeJohSys} quite closely and, in turn, their proof is
quite similar to Floyd's original one \cite{floyd1949}.
One added feature here is that we use the framework of
iterated function systems, as this allows us to replace
the interval, $[0,1]$, with the more complicated spaces.
Following usual conventions (see for example \cite{Hut:FraSelfSim}) an
\emph{iterated function system}
consists of a metric space, $(C, d_{C})$, and
$\mathcal{F}$, a finite collection
of maps $f: C \rightarrow C$ with the property that
there is a constant $0 < \lambda < 1$
such that $d_{C}(f(x), f(y)) \leq \lambda d_{C}(x,y)$,
for all $x, y$ in $C$ and $f$ in $\mathcal{F}$.
In particular, each map is
continuous.
We will require a few extra properties.
\begin{defn}
\label{1-10}
Let $(C,d_{C}, \mathcal{F})$ be an iterated function system.
We say it is \emph{compact}
if the metric space $(C, d_{C})$ is compact. We also say it is
\emph{invertible} if
\begin{enumerate}
\item each $f$ in $\mathcal{F}$ is injective, and
\item $\cup_{f \in \mathcal{F}} f(C) = C.$
\end{enumerate}
\end{defn}
The term ``invertible'' is meant to indicate that each map
$f$ in $\mathcal{F}$ has an inverse, $f^{-1}:f(C) \rightarrow C$.
It is not ideal as it does not rule out the possibility that
the images of the various $f$'s overlap.
Of course, the restriction that each map is injective
is quite important. On the other hand, it is well-known that
any compact iterated function system has a fixed point set
and the restriction of the maps to this set will satisfy
the invariance condition \cite[Section 3]{Hut:FraSelfSim}.
We list several simple examples of relevant iterated function systems. The
first is the one originally used by Floyd \cite{floyd1949} along with the subsequent examples \cite{auslander1959, MR956049, FloGjeJohSys, HadJoh:AuslanderSys}.
\begin{ex}
\label{1-20}
Let $C= [0, 1]$, $f_{i}(x) = 2^{-1}(x + i)$
for $x$ in $[0, 1]$ and $i=0,1$,
and $\mathcal{F} = \{ f_{0}, f_{1} \}$.
\end{ex}
The next example is a fairly simple generalization
of the last, but it is important as this is
the example
we need in our applications in \cite{DPS:minDSKth}.
\begin{ex}
\label{1-30}
Let $n$ be any positive integer,
$C= [0, 1]^{n}$, $ f_{\delta}(x) = 2^{-1}( x + \delta)$, for
each $x$ in $[0, 1]^{n}, \delta \in \{ 0, 1\}^{n}$ and
$\mathcal{F} = \{ f_{\delta} \mid \delta \in \{ 0, 1 \}^{n} \}$.
\end{ex}
\begin{ex}
\label{1-40}
A minor variation on the last example would be to
use instead $ f_{\delta}(x) = 3^{-1}( x + \delta)$,
for
each $x$ in $[0, 1]^{n}, \delta \in \{ 0, 1, 2 \}^{n}$.
On the other hand, if we
instead let
$\mathcal{F} = \{ f_{\delta} \mid \delta \in \{ 0, 2\} \}$
when $n=1$,
or $\mathcal{F} =
\{ f_{\delta} \mid \delta \in \{ 0, 1, 2\}^{2}, \delta \neq (1,1) \}$
for $n=2$,
this now fails the invariance condition of our definition.
As mentioned above,
standard results on iterated function
systems show that $C$ contains a unique non-empty closed invariant set,
and restricting our maps
to that set then satisfies all the desired conditions.
Notice that when $n=1$, the set in
question is the Cantor ternary set,
while for $n=2$, it is the Sierpinski carpet \cite{Sie:Carpet}.
\end{ex}
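For readers who prefer a computational picture, the following short Python sketch (our own illustration, not part of the construction below) iterates randomly chosen maps of the form in Examples \ref{1-30} and \ref{1-40}; the visited points accumulate on the unique non-empty closed invariant set, for instance the Sierpinski carpet when the middle map is excluded:
\begin{verbatim}
import itertools, random

def ifs_maps(n, base=2, exclude=()):
    # Maps f_delta(x) = (x + delta)/base on [0,1]^n; 'exclude' drops
    # selected deltas (e.g. (1, 1) for the Sierpinski carpet).
    deltas = [d for d in itertools.product(range(base), repeat=n)
              if d not in exclude]
    return [lambda x, d=d: tuple((xi + di) / base for xi, di in zip(x, d))
            for d in deltas]

def chaos_game(maps, x0, steps=20000, seed=0):
    # Iterate randomly chosen contractions; the visited points
    # approximate the attractor of the iterated function system.
    random.seed(seed)
    x, pts = x0, []
    for _ in range(steps):
        x = random.choice(maps)(x)
        pts.append(x)
    return pts

cube = chaos_game(ifs_maps(2, base=2), x0=(0.5, 0.5))          # fills [0,1]^2
carpet = chaos_game(ifs_maps(2, base=3, exclude=((1, 1),)), x0=(0.5, 0.5))
\end{verbatim}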
Our main result is the following.
\begin{thm}
\label{1-50}
Let $(C, d_{C}, \mathcal{F})$ be a compact, invertible
iterated function system and let
$(X, \varphi)$ be a minimal
homeomorphism of the Cantor set.
There exists a minimal extension, $(\tilde{X}, \tilde{\varphi})$
of $(X, \varphi)$ with factor map
$\pi: (\tilde{X}, \tilde{\varphi}) \rightarrow (X, \varphi)$
such that, for each $x$ in $X$, $\pi^{-1}\{ x \}$ is a
single point or is homeomorphic to $C$.
Moreover, both possibilities occur.
\end{thm}
\begin{thm}
\label{1-60}
Let $(C, d_{C}, \mathcal{F})$ be a compact, invertible
iterated function system and let
$(X, \varphi)$ be a minimal
homeomorphism of the Cantor set.
If $C$ is contractible, then the minimal extension $(\tilde{X}, \tilde{\varphi})$ and the factor map
$\pi: (\tilde{X}, \tilde{\varphi}) \rightarrow (X, \varphi)$
may be chosen so that
\[
\pi^{*}: K^{*}(X) \rightarrow K^{*}(\tilde{X})
\]
is an isomorphism and so that
$\pi$ induces a bijection between the respective sets
of invariant measures.
\end{thm}
\section{The construction and proofs} \label{Sec:ConstProofs}
Just as for Gjerde and Johansen \cite{FloGjeJohSys}, we make critical use
of the Bratteli--Vershik model for minimal systems on the Cantor set \cite{Put:BookMinCantorSys}. Briefly, the Bratteli--Vershik model takes
some simple combinatorial data (an ordered Bratteli diagram)
and produces a minimal homeomorphism of the Cantor set. In fact,
every minimal homeomorphism of the Cantor set is produced in
this way.
A standard reference for Cantor minimal systems is \cite{Put:BookMinCantorSys}; see in particular Chapters 4 and 5 for more on the Bratteli--Vershik model.
We begin with a Bratteli diagram, $(V, E)$, consisting
of a vertex set $V$ written as a disjoint union of finite, non-empty
sets $V_{n}, n \geq 0$, with $V_{0} = \{ v_{0} \}$, and an edge set
written as a disjoint union of finite, non-empty
sets $E_{n}, n \geq 1$. Each edge $e$ in $E_{n}$ has a
source, $s(e)$, in $V_{n-1}$ and range, $r(e)$, in $V_{n}$.
We may assume (see \cite{Put:BookMinCantorSys}) that our diagram has
\emph{full edge connections}, that is, every pair of vertices from
adjacent levels is connected by at least one edge. We define the
space $X_{E}$ to consist of all infinite paths in the
diagram, beginning at $v_{0}$. That is, a point is a sequence
$x = (x_{1}, x_{2}, \ldots )$ with
$x_{n} \in E_{n}$, $s(x_{1}) = v_{0}$ and $r(x_{n}) = s(x_{n+1})$ for all $n \geq 1$. This space
is endowed with the metric
\[
d_{E}(x,y) = \inf \{ 2^{-n} \mid n \geq 0,
x_{i} = y_{i}, 1\leq i \leq n \}.
\]
In addition, we may assume that the edge set $E$ is endowed with an order
such that two edges $e,f$ are comparable if and only if $r(e) =r(f)$.
The set of maximal edges and the set of minimal edges each form a tree
and we assume that our diagram is
\multirow{2}{*}{$\text{do}(X_{2}=0)$} & $X_{1} = 0$ & $1-p$ & $0$ \\
& $X_{1} = 1$ & $p$ & $0$ \\
\midrule
\multirow{2}{*}{$\text{do}(X_{2}=1)$} & $X_{1} = 0$ & $1-p$ & $0$ \\
& $X_{1} = 1$ & $0$ & $p$ \\
\bottomrule
\end{tabular}
\end{subtable}
\end{center}
\end{table} \label{supp table}
\begin{table*}[h]
\caption{SCMs for~\ref{app:indep}, where $(U_{C},U_{1},U_{2},U_{Y})_{\mathcal{M}} \sim \mathcal{N}(0,\Sigma)$ and $(U_{C},U_{1},U_{2},U_{Y})_{\mathcal{M}^{\prime}} \sim \mathcal{N}(0,\Sigma^{\prime})$.}\label{supp table C correlated with U}
\begin{center}
\hspace{-33mm}
\begin{tabular}{cc}
\begin{minipage}{0.35\textwidth}
\begin{flushleft}
\begin{tabular}{llll}
\toprule
\multicolumn{1}{c}{$\mathcal{M}$} &~& \multicolumn{1}{c}{$\mathcal{M}^{\prime}$} \\
\midrule
$C\:\: = U_{C}$ &~& $ C\:\: = U_{C}$ \\
$X_{1} = C+U_{1}$ &~& $X_{1} = 2C+ U_{1}$ \\
$X_{2} = C+U_{2}$ &~& $X_{2} = 2C+U_{2}$ \\
$Y\:\: = CX_{1}X_{2}+U_{Y}$ &~& $Y\:\: = CX_{1}X_{2}+U_{Y}$ \\
\bottomrule
\end{tabular}
\end{flushleft}
\end{minipage}
\hspace{22mm}
\begin{minipage}{0.35\textwidth}
\begin{equation}
\text{ where }
\Sigma =
\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
\end{bmatrix},
\Sigma^{\prime} =
\begin{bmatrix}
1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 \\
\end{bmatrix}.\notag
\end{equation}
\end{minipage}
\end{tabular}
\end{center}
\end{table*}
\section{Unidentifiability when $C$ is not independent of $U$}\label{app:indep}
Table~\ref{supp table C correlated with U} provides the counterexample mentioned in Section~\ref{sec:symmetric_ANM} (just after Equation~\ref{eq:symmetric_ANM}) showing that single-variable interventional effects are unidentifiable from observational and joint interventional data when the covariates $C$ are not independent of the unobserved confounders $U$ in the ANMs we consider.
Indeed, in model $\mathcal{M}$, $C$ is correlated with the latent noises $U_1, U_2$, which tells us that this model is non-Markovian: there is an unobserved confounder correlating $C$ with $U_1, U_2$, so $C$ is not independent of this unobserved confounder.
This is not the case in model $\mathcal{M}^{\prime}$.
Moreover, we can see that $\mathcal{M}$ and $\mathcal{M}^{\prime}$ are identical under interchange of $X_1$ and $X_2$, and have the same joint and marginal distributions, as well as the same joint interventional distribution.
As the two SCMs coincide on observational and joint interventional distributions, this proves that the confounding distribution is not identifiable from these data regimes alone, and single-variable interventions as well as the full SCM are unidentifiable.
This demonstrates the need for the additional restrictions between $C$ and $U$ in Theorem 3.2.
\section{Proof of Theorem 1: Identifiability with Causally Dependent Treatments}
This section provides a proof for identifiability of single-variable causal effects when there is a causal dependency among treatments (\textbf{Theorem 1}).
Observational and joint interventional data are not sufficient in this case to identify causal effects on \emph{all} treatments -- but we can identify the causal effect of intervening on the \emph{consequence} treatment instead of the \emph{causing} treatment.
\begin{proof}
Our causal estimand is the effect of intervening on $X_{j}$.
We can rewrite our causal estimand as follows:
\begin{equation}\label{eq:observational_supp}
\begin{split}
\mathbb{E}[Y|X_{i}=x_{i}, \text{do}(X_{j}=x_{j}), C=c] = f_{Y}(c, x_{i}, x_{j}) + \mathbb{E}[U_{Y} | C=c, X_{i}=x_{i}].
\end{split}
\end{equation}
From the joint interventional data regime, we have access to the following expectation:
\begin{equation}\label{eq:joint_interventions_supp}
\begin{split}
\mathbb{E}[Y|C=c, \text{do}(X_{i}=x_{i}, X_{j}=x_{j})] = f_{Y}(c, x_{i}, x_{j}).
\end{split}
\end{equation}
Subtracting Eq.~\ref{eq:joint_interventions_supp} from Eq.~\ref{eq:observational_supp} shows that we only need to provide identifiability for the conditional expectation on the outcome noise, given the observed value for treatment $X_{i}$ and the observed covariates $\bm{C}$:
\begin{equation}
\begin{split}
\mathbb{E}[U_{Y} | C=c, X_{i}=x_{i}] = \mathbb{E}[U_{Y} | U_{i}= x_{i}-f_{i}(c)] = \frac{\sigma_{Yi}}{\sigma_{ii}}(x_{i}-f_{i}(c)).
\end{split}
\end{equation}
Here, the first step comes from our SCM definition, and the second step comes from the fact that we assume the noise distribution to be a zero-centered multivariate Gaussian.
As such, we need to identify the function $f_{i}$, the variance on the noise variable $U_{i}$, and the covariance between $U_{i}$ and $U_{Y}$.
We obtain $\mathbb{E}[X_{i}|C=c] = f_{i}(c)$ directly from the observational data regime.
This makes the noise variable $U_{i} = X_{i} - f_{i}(\bm{C})$ identifiable.
Now, as a result, we can identify its variance $\sigma_{ii}$ and covariance $\sigma_{Yi}$ from the observational data regime, which concludes the proof.
\end{proof}
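A minimal plug-in sketch of this identification strategy (assuming estimates of $f_Y$ from the joint-interventional regime and of $f_i$ from the observational regime are already available; all names and the dictionary interface are ours) is given below:
\begin{verbatim}
import numpy as np

def single_intervention_mean(c, x_i, x_j, f_Y, f_i, obs):
    # Plug-in version of the identification argument above.
    # f_Y(c, x_i, x_j): learned from the joint-interventional regime.
    # f_i(c)          : learned from the observational regime, E[X_i | C=c].
    # obs             : dict of observational arrays 'C', 'X_i', 'X_j', 'Y'.
    u_i = obs["X_i"] - f_i(obs["C"])                        # recovered noise U_i
    u_y = obs["Y"] - f_Y(obs["C"], obs["X_i"], obs["X_j"])  # recovered noise U_Y
    cov = np.cov(u_y, u_i)               # [[s_YY, s_Yi], [s_Yi, s_ii]]
    correction = (cov[0, 1] / cov[1, 1]) * (x_i - f_i(c))
    return f_Y(c, x_i, x_j) + correction
\end{verbatim}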
\section{Consistency under Varying Levels of Confounding (RQ3)}
In order to validate the consistency of our learning method under varying levels of unobserved confounding (\textbf{RQ3}), we adopt data from the International Stroke Trial database~\cite{Carolei1997}.
We partially follow the semi-synthetic setup laid out in Appendix 3 of \cite{zhang2021bounding}.
Specifically, we adopt their probability table for the joint observational distribution of the covariates: the gender, age and conscious state of a patient.
This table was computed from the dataset to reflect a real-world observational distribution.
They deal with discrete treatments---which we extend to the continuous case where treatments can be interpreted as varying dosages of aspirin and heparin. We model the structural equation on the outcome as:
\begin{equation}
\begin{split}
Y =~ &0.1 S - 0.1 A + 0.25(C - 1) + \alpha_{a} + 0.75 \alpha_{h} - 3 SA - 0.1 S \alpha_{a}\\
-&0.3 A \alpha_{a} + 0.1 S \alpha_{h} + 0.2 A \alpha_{h} + 0.3 C \alpha_{h} - 0.45 \alpha_{a} \alpha_{h}.
\end{split}
\end{equation}
This loosely reflects the same intuitions as laid out in \cite{zhang2021bounding}, where $S$ is the gender, $A$ the age, $C$ the conscious state, $\alpha_{a}$ the aspirin dose and $\alpha_{h}$ the heparin dose.
Note that this does not reflect correct medical knowledge or insights---the goal is merely to have a polynomial with second-order interactions that we can learn to model. As described in the main text, we add zero-mean Gaussian noise to the treatments and the outcome, where we randomly generate positive semi-definite covariance matrices with bounded non-diagonal entries---varying the limit on the size of the covariances in order to assess the effect of varying confounding on our method. We repeat this process 5 times, and sample $512$ observational and $512$ joint-interventional samples (both $\alpha_{a}$ and $\alpha_{h}$) to learn the SCM from. We evaluate our learning methods on $5\,000$ evaluation samples to predict the outcome under a single-variable intervention on the aspirin dose $\alpha_{a}$.
All experiments ran in a notebook on a laptop.
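For completeness, the noise-free part of this structural equation is reproduced below as a small helper (our own transcription; zero-mean Gaussian noise with the randomly generated covariance matrices is added on top):
\begin{verbatim}
def outcome_mean(S, A, C, a_asp, a_hep):
    # Noise-free structural equation for the semi-synthetic outcome.
    return (0.1 * S - 0.1 * A + 0.25 * (C - 1) + a_asp + 0.75 * a_hep
            - 3 * S * A - 0.1 * S * a_asp - 0.3 * A * a_asp
            + 0.1 * S * a_hep + 0.2 * A * a_hep + 0.3 * C * a_hep
            - 0.45 * a_asp * a_hep)
\end{verbatim}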
\section{Introduction}
The interest in placing branes in backgrounds of oppositely charged flux comes from the now famous idea of uplifting the energy of Anti-de Sitter space-times to de Sitter using anti-branes \cite{Kachru:2003aw}. The idea is to construct a stable AdS solution and then, by placing a small number of warped-down anti-D3-branes, achieve several phenomenologically interesting features -- supersymmetry (susy) breaking and a meta-stable, arbitrarily small cosmological constant, tuned by varying the warping and the number of branes.
Obtaining these desired phenomenological effects from ten-dimensional supergravity (sugra), the low-energy theory of String Theory, is of great importance and has been studied extensively, for example regarding the possibility of other decay channels \cite{Kachru:2002gs}. Not too long ago, however, in an attempt to describe the backreaction of these anti-branes, it was discovered that some modes develop singularities \cite{McGuirk:2009xx}. These singularities were something new because they are not directly sourced by the brane and appear in the flux surrounding the brane, not in the field strength sourced by the brane.
This has spawned several discussions trying to explain and further investigate these flux singularities. Is the singularity really there in the supergravity solution, or is it simply an artefact of partial smearing or of treating the backreaction perturbatively \cite{Bena:2009xk,Bena:2011hz,Bena:2011wh,Giecold:2011gw,Blaback:2011pn,Gautason:2013zw,Blaback:2011nz}? If the singularity is there, what does it mean and how can it be resolved \cite{Bena:2012tx,Bena:2012bk,Bena:2012vz,Blaback:2012nf,Bena:2013hr,Bena:2012ek}? So far, neither partial smearing nor perturbative backreaction seems to be the origin of these singularities, since they develop even in fully localised and fully backreacted solutions \cite{Blaback:2011pn}, with a more general result in \cite{Gautason:2013zw}. To resolve the singularity a few methods have been tested. One such method is resolving the solutions via the Myers effect \cite{Myers:1999ps} \`{a} la Polchinski-Strassler \cite{Polchinski:2000uf}, by letting a D$(p+2)$-brane surround the singularity, created by a D$p$-brane, and cut it off at a finite value. This method has so far been unsuccessful in resolving the singularity \cite{Bena:2012tx,Bena:2012bk,Bena:2012vz}. Another possibility that has been investigated is whether the singularity could signal a new instability of the system, which still remains a possibility \cite{Blaback:2012nf}. Hiding the singularity behind an event horizon, as has been successful for several systems\footnote{See \cite{Bena:2013hr} for an extensive list.}, would signal that the singularity could be resolved in String Theory, i.e. beyond the supergravity approximation. However, it has been shown and argued that hiding these types of singularities behind a horizon is not a possible means of getting rid of them \cite{Bena:2013hr,Bena:2012ek}.
In type II sugra it was also possible to give a plausible interpretation of the physics that gives rise to the singularity. As noticed in \cite{Blaback:2011pn}, and elaborated upon in \cite{Blaback:2012nf}, the singularity arises in the flux charge density term ($H \wedge F_{6-p}$) of the Bianchi identity associated to a D$p$-brane, $\d F_{8-p} = H \wedge F_{6-p} + \textrm{sources}$. The sign of the flux charge density, integrated over compact internal spaces or imposed as a UV condition for non-compact ones, should be opposite to that of the brane to achieve the uplifting and susy-breaking properties sought. The opposite charge makes the flux attracted towards the brane, and it starts clumping around the brane to shield its charge. This led to the interpretation of the singularity as arising because the Ans\"atze used were stationary while the physical system is dynamical. In \cite{Blaback:2012nf} it was also discovered that the singularity could imply a new channel of instability for the KKLT vacua by possibly removing the barrier for meta-stability.
The same type of singularity has also been identified in $11$D supergravity \cite{Cottrell:2013asa,Giecold:2013pza,Massai:2011vi,Bena:2010gs}, but it has not yet been studied as comprehensively. For example, there exist fully localised solutions to linear order in a perturbative backreaction \cite{Cottrell:2013asa,Giecold:2013pza}, but a fully backreacted solution does not yet exist. The effort to resolve the singularity in $11$D has also not yet fully begun -- but perhaps $11$D supergravity has the tools necessary to interpret the singularity.
In type II sugra several tools have, as mentioned, been invented to try and study these singularities.
The purpose of this paper is to bring some of those tools into $11$D supergravity to investigate whether they provide some new information. One of these tools is what has been called a {\it topological no-go} \cite{Blaback:2011pn,Blaback:2011nz}, which constrains the possible configurations of the potential of the field strength $F_{8-p}$ associated with a D$p$-brane and forces a singularity in the flux density $H_3 \wedge F_{6-p}$.
The topological no-go has helped in several ways before. It helped to determine the presence of the singularity in a fully backreacted system of this type \cite{Blaback:2011pn,Blaback:2011nz}. It also made it possible to interpret the flux singularity as flux polarisation \cite{Blaback:2011pn,Blaback:2012nf}, as explained previously.
A similar topological constraint will be derived here, and from it we will argue that an unwanted singularity is forced to develop in the flux. The other tool that will be used is introducing an event horizon and trying to shield off the singularity. This has been argued not to be possible for the type II sugra singularities \cite{Bena:2013hr,Bena:2012ek}, but perhaps M-theory possesses the power to resolve these singularities while String Theory does not.
The study of backreaction in type II sugra has benefited from the availability of several more or less simple solutions. One such example in type II sugra are the anti-D$6$-brane solutions, which helped the study of full backreaction \cite{Blaback:2011pn,Blaback:2011nz} and polarisation \cite{Bena:2012tx} in systems that display this flux singularity. Therefore this paper will also describe the construction of solutions similar to those previously used in type II sugra. These solutions are smeared space-filling (anti-)M$2$-brane solutions with AdS$_{3}$ world volume on a compact internal space.
The paper is organised as follows. Section \ref{sec:sugra} presents the conventions used for $11$D supergravity and goes through the simplest form of the Ansatz used in the paper. Section \ref{sec:sols} then describes the new (anti-)M$2$-brane solution, constructed from space-filling (anti-)M$2$-branes with AdS$_3$ world volume on a compact internal manifold. The solution uses the approximation of smeared branes, so the goal would be to localise the source. Because the branes are positioned in oppositely charged flux this is expected to become problematic, as experience suggests. The purpose of Section \ref{sec:topo} is to derive a topological no-go that restricts the profile of the flux occupying the internal space. This no-go forces a singularity in the flux at the position of the brane. It will also be argued that this no-go is relevant not only for these particular solutions but for any study of branes located in oppositely charged flux. The no-go is extended in Subsection \ref{ssec:noblack}, where a blackening factor representing a horizon is introduced to shield the solution from this singularity. The same section also explains why this is not possible. Finally, Section \ref{sec:con} summarises and concludes the paper.
\section{$11$D Supergravity}
\label{sec:sugra}
The conventions used are as follows. The $4$-form flux equation of motion and Bianchi identity are
\begin{equation}
\begin{split}
\d \star_{11} G_4 &= \frac{1}{2} G_4 \wedge G_4 + Q \delta_8(\textrm{M}2),\\
\d G_4 &= 0.
\end{split}
\end{equation}
Here $Q = \varpm |Q|$ for the charge of the (anti-)M$2$-brane and the delta function is an $8$-form, $\delta_8(\textrm{M}2) = \delta(\textrm{M}2) \star_8 1$. The Einstein equations are
\begin{equation}
\begin{split}
R_{ab} &= \frac{1}{2}\left( |G_4|^2_{ab} - \frac{1}{3} |G_4|^2 g_{ab} \right) + \frac{1}{2}\left( T_{ab}^l - \frac{1}{9} T^l g_{ab} \right),\\
T_{ab}^l &= - T g_{\mu\nu} \delta(\textrm{M}2) \delta_{ab}^{\mu\nu},
\end{split}
\end{equation}
with $T = |T|$ being the tension of the brane and anti-brane.
The Ansatz for the metric and the $4$-form flux is
\begin{equation}
\begin{split}\label{eq:metric}
\d s^2_{11} &= e^{2A} \d \tilde{s}^2_{2,1} + e^{2B} \d \tilde{s}^2_{8},\\
G_4 &= \star_{11} F_7 + F_4 + H_4,\\
F_7 &= e^{X} \star_{8} \d \alpha,\\
H_4 &= \lambda \star_8 F_4.
\end{split}
\end{equation}
The flux denoted $F_7$ is the field strength that the brane sources, and the two four-form fluxes describe the flux surrounding the brane. A parameter $\lambda$ is also introduced here in order to vary the magnitude of the flux present in $H_4$ and $F_4$, which will turn out to be important later. This Ansatz is designed to resemble the type II sugra setups \cite{Blaback:2011pn,Blaback:2011nz,Blaback:2010sj} as closely as possible.
\begin{equation}
\begin{array}{|c|c|}
\hline
11 \textrm{D field} & \textrm{Type II analogue}\\
\hline
\hline
F_7 & F_{8-p}\\
F_4 & F_{6-p}\\
H_4 & H_3\\
\hline
\end{array}
\end{equation}
To obtain the ``Fractional M$2$-brane'' solution \cite{Herzog:2000rz} from this Ansatz, simply put
\begin{equation}
\lambda = \varpm1,\ Q = \varpm|Q|,\ X = -3A,\ \alpha = \lambda e^{3A},\ B = -\frac{1}{2} A,
\end{equation}
where $\lambda = \varpm1$ implies that the combination $H_4 + F_4$ is (anti-)self dual ((A)SD), for (anti-)M$2$-branes.\\
\section{New non-SUSY AdS$_3$ solutions}
\label{sec:sols}
Presented here are new smeared solutions for M$2$-branes. These solutions are constructed in the same way as the anti-D$p$-brane solutions of \cite{Blaback:2010sj}, and share several features with that construction.
The smeared approximation means that the M$2$-brane is spread out in the internal manifold. For the delta function this means that
\begin{equation}
Q \delta_8(\textrm{M}2) = Q \delta(\textrm{M}2) \star_8 1 = Q \tilde{\delta}(\textrm{M}2) \tilde{\star}_8 1\ \longrightarrow\ \frac{Q}{V} \tilde{\star}_8 1,
\end{equation}
where $V$ is the unwarped volume and tilde refers to the metric without warping or conformal factor (\ref{eq:metric}). From now on $V=1$, such that it is absorbed into the charge and tension. Furthermore, from the Ansatz used in the previous section, warping, conformal factor and the field $\alpha$ will also be removed
\begin{equation}\label{eq:off}
A \to 0,\ B\to 0,\ \alpha \to 0.
\end{equation}
The metric's internal part will then be split in two four dimensional parts,
\begin{equation}
\d s^2_{11} = \d s^2_{2,1} + g^{(H)}_{ij}\d y^i \d y^j + g^{(F)}_{ij} \d z^i \d z^j,
\end{equation}
where $\d s^2_{2,1}$ is the world volume part of the metric, and ${}^{(H/F)}$ denotes that one is occupied by $H_4$ and one with $F_4$.
Using the smearing Ansatz described above the equation of motion and Einstein equations now reduce to algebraic equations. The equation of motion gives a relation between the charge and the flux
\begin{equation}
0 = \lambda|F_4|^2 + Q.
\end{equation}
This is the tadpole cancellation condition and to solve this the signs of the charge and the flux need to be opposite $Q=\varpm |Q|$ and $\lambda = \varmp |\lambda|$, which would correspond to (anti-)M$2$-branes in oppositely charged flux.
\begin{figure}[h!]
\begin{center}
\includegraphics{curvature_plot.pdf}
\end{center}
\caption{The curvature of the ${}^{(H)}$ section is the dashed red line and the ${}^{(F)}$ section is the dotted blue line. The full purple line is the total curvature of the internal space. The left (right) side of the vertical axis is where $Q$ is positive (negative) and $\lambda$ is negative (positive).\label{fig:curv}}
\end{figure}
The Einstein equations will now give the following expressions. Externally the curvature is necessarily negative
\begin{equation}
R_{\mu\nu} = -\frac{1}{3}\left( \frac{1+\lambda^2}{2} |F_4|^2 + T_{\textrm{M}2}\right) g_{\mu\nu}.
\end{equation}
The curvature of the internal spaces can vary and even switch sign
\begin{equation}
\begin{split}
R^{(H)}_{ij} &= \frac{1}{6}\left( (2\lambda^2 - 1)|F_4|^2 + T_{M2}\right) g^{(H)}_{ij} = \frac{1}{6} (-1+|\lambda| +2 \lambda ^2)|F_4|^2 g^{(H)}_{ij}, \\
R^{(F)}_{ij} &= \frac{1}{6}\left( (2 - \lambda^2)|F_4|^2 + T_{M2}\right) g^{(F)}_{ij} = \frac{1}{6} (2+|\lambda| - \lambda ^2)|F_4|^2 g^{(F)}_{ij},
\end{split}
\end{equation}
having used the tadpole condition and that the tension is necessarily positive for (anti-)M$2$-branes. Notice here that $\lambda$ is a continuous parameter. Choosing different values for $\lambda$, the curvatures vary according to Figure \ref{fig:curv}, where it should also be noted that the total curvature of the internal space remains positive for all values of $\lambda$. In the case of the AdS$_{p+1}$ solutions for the D$p$-branes of \cite{Blaback:2010sj} this is not the case, since there the dilaton equation of motion restricts $\lambda$ to a fixed value.\footnote{In \cite{Blaback:2010sj} $\kappa = 1/\lambda$.} It should also be noted that when a circle can be identified in the ${}^{(H)}$ section of the internal space, this construction corresponds to a direct uplift from type IIA sugra to $11$D sugra. One would expect these solutions to be stable to some degree, since the analogous solution for anti-D$6$-branes with AdS$_7$ world volume was found to be stable with respect to the left-invariant closed string moduli \cite{Blaback:2011nz}.
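The curvature profiles of Figure \ref{fig:curv} can be reproduced directly from the expressions above; a short numerical sketch (our own illustration, setting $|F_4|^2=1$) also confirms that the total internal curvature stays positive for all $\lambda$:
\begin{verbatim}
import numpy as np

lam = np.linspace(-2.0, 2.0, 401)
F2 = 1.0                                          # |F_4|^2, set to 1 for illustration
c_H = (-1 + np.abs(lam) + 2 * lam**2) * F2 / 6    # R^(H)_ij = c_H g^(H)_ij
c_F = (2 + np.abs(lam) - lam**2) * F2 / 6         # R^(F)_ij = c_F g^(F)_ij
total = 4 * (c_H + c_F)                           # internal scalar curvature
assert np.all(total > 0)                          # positive for all lambda
\end{verbatim}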
The above construction is, as mentioned, fully smeared. To continue with localisation one has to turn warping, the conformal factor and $\alpha$ back on. The profile of the flux distribution in the internal manifold should also vary. For the fractional brane solutions \cite{Herzog:2000rz} the (anti-)M$2$-branes are positioned in (A)SD flux, which is the ``natural background'', or \emph{BPS} in the same sense as declared in \cite{Blaback:2010sj}, for the brane. When the brane is forced into a compact background as considered here, where the charge dissolved in fluxes around the brane and the brane itself are not mutually BPS, the flux will arrange itself into new configurations. This means that $\lambda$ will be promoted to a function that describes the flux distribution in the localisation direction.
\section{The topological no-go}
\label{sec:topo}
Consider a division of the internal manifold similar to the earlier one,
\begin{equation}
M_8 = (\mathbb{R} \times M_3) \times M_4,
\end{equation}
where, in analogy with the type II sugra constructions, $H_4$ occupies the $\mathbb{R} \times M_3$ section, and $F_4$ occupies $M_4$. Here $z$ is introduced to parametrise $\mathbb{R}$; localisation will only be considered along this direction, while the solution may remain smeared along the other directions. The metric is of the form
\begin{equation}
\d s^2_{11} = e^{2A}\d s^2_{2,1} + e^{2B}(\d z^2 + h(z)^2 \d \tilde{s}_3^2 + \d \tilde{s}_4^2),
\end{equation}
where the factor $h(z)^2$ is present to show that this will work for compact (e.g. $h(z)=\sin(z)$), such as the smeared solution presented previously, and non-compact (e.g. $h(z)=z$) spaces.
Considering the purely internal legs of the equation of motion for $G_4$, one obtains the following expression
\begin{equation}\label{eq:bianchi}
e^{-3A+6B}\partial_z^2 \alpha + \partial_z \alpha \partial_z (e^{-3A+6B}h^{3}) h^{-3}= \lambda |\tilde{F}_4|^2 + Q \tilde{\delta}(M2) h^{-3}.
\end{equation}
There is also a relation between $\alpha$ and $\lambda$, derived from the external legs of the equations of motion for $G_4$
\begin{equation}\label{eq:alrel}
\alpha = \lambda e^{3A}.
\end{equation}
Note that this forces $\alpha$ and $\lambda$ to have the same sign. Furthermore, $e^{3A}$ tends to zero at the origin (as is usually the case for a brane), hence a finite $\alpha$ at the origin implies a singularity in $\lambda$ of the same sign. This is an important point that we will return to. Considering (\ref{eq:bianchi}) at an extremal point of the function $\alpha$ and eliminating $\lambda$ in favour of $\alpha$ using (\ref{eq:alrel}), the resulting expression is
\begin{equation}\label{eq:nogo}
\left. \textrm{sgn}\, \alpha = \textrm{sgn}\, \alpha''\quad \right|_{\alpha'=0}.
\end{equation}
These two conditions suffice to show that a singularity in $\lambda$ must develop.
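Explicitly (a small intermediate step not spelled out above): away from the source and at a point where $\alpha'=0$, the second term of (\ref{eq:bianchi}) drops out and, using (\ref{eq:alrel}),
\begin{equation}
e^{-3A+6B}\,\alpha'' \;=\; \lambda\, |\tilde{F}_4|^2 \;=\; \alpha\, e^{-3A} |\tilde{F}_4|^2\,,
\end{equation}
so that for non-vanishing flux the sign of $\alpha''$ at an extremum is fixed to be that of $\alpha$.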
The problem at hand is to place (anti-)M$2$-branes, $\textrm{sgn}\, Q = \varpm 1$, in a background that is not BPS in relation to the (anti-)brane, i.e. $\textrm{sgn}\, \lambda = \varmp 1$. Even though the branes are placed in oppositely charged flux, the flux does not need to have the same charge throughout the whole internal space. In fact, the only thing that will be imposed is $\textrm{sgn}\, \lambda = \varmp 1$ as a UV condition, i.e. to be true at $z\to \infty$ (or the corresponding point on a compact space). The UV condition is important for both compact and non-compact spaces. Consider the smeared solution presented in the previous section, which is on a compact internal space. Smearing can be interpreted as using the integrated equations, which for the source terms means that the delta function is replaced by a constant, and similarly for the flux, i.e. the function $\lambda(z)$ is replaced by its integrated constant value. If the smeared solution is supposed to correspond to the localised one, which is commonly the assumption when working with the smeared approximation, this integral has to equal that constant. This is what the UV condition ensures. For non-compact spaces this UV condition is the same as expected from uplifting procedures as in KKLT, that is, a brane is placed in a background that is not mutually BPS, and the boundary conditions are assumed to be the same in order to be able to ``glue'' the non-compact space onto a compact one.
The first construction that one would like to consider is to have the flux being mutually BPS with the brane at the position of the brane, $z=0$. This will give the following conditions
\begin{equation}
\textrm{{\bf 1.}}\qquad \textrm{IR: } \alpha \to \varpm 0,\ \quad\textrm{UV: } \alpha \to \varmp |\alpha_\infty|,
\end{equation}
where $\alpha_\infty$ is simply the value that $\alpha$ approaches at infinity. A sketch of such a profile is the leftmost picture in Figure \ref{fig:3pics} (only drawn for anti-M$2$-branes). The extremal points one obtains from connecting the IR and UV boundary conditions are exactly of the type excluded by (\ref{eq:nogo}).
To change the boundary conditions so as to evade restriction (\ref{eq:nogo}), the IR boundary conditions have to be modified. This can be done as follows
\begin{equation}
\textrm{{\bf 2.}}\qquad \textrm{IR: } \alpha \to \varpm |\alpha_0|\ \ \&\ \ \alpha' \to \varmp |\alpha_0'|,\ \quad\textrm{UV: } \alpha \to \varmp |\alpha_\infty|.
\end{equation}
These boundary conditions would imply that the (anti-)brane is surrounded by a singular flux of the same sign of charge. The corresponding sketch is the middle picture of Figure \ref{fig:3pics} (again only drawn for anti-M$2$-branes). This sketch is, however, not allowed. Integrating (\ref{eq:bianchi}), whose left hand side is a total derivative, gives the following expression
\begin{equation}\label{eq:bc}
\left.e^{-3A + 6B} h^3 \alpha' \right|_{z\textrm{-const.}} = Q,
\end{equation}
which is valid for the $z$-constant term in an expansion, and hence dominant at $z=0$, assuming $\alpha$ is finite there. This means that the only allowed IR condition here is $\textrm{sgn}\, \alpha' = \textrm{sgn}\, Q$, which is not obeyed.
This leaves only one remaining option
\begin{equation}
\textrm{{\bf 3.}}\qquad \textrm{IR: } \alpha \to \varpm |\alpha_0|\ \ \&\ \ \alpha' \to \varpm |\alpha_0'|,\ \quad\textrm{UV: } \alpha \to \varmp |\alpha_\infty|.
\end{equation}
The above boundary condition is the only one that evades all restrictions and is thus the only possible candidate. The sketch is the rightmost picture in Figure \ref{fig:3pics} (anti-M$2$-branes). These boundary conditions do imply that there is a singular flux surrounding the brane, of opposite charge.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\begin{tikzpicture}[scale=.5]
\coordinate (origin) at (0,2);
\coordinate (yend) at (0,6);
\coordinate (ystart) at (0,0);
\coordinate (xend) at (6,2);
\draw [->] (ystart) -- (yend);
\draw [->] (origin) -- (xend);
\node (yendn) at (yend) {$\qquad \alpha$};
\node (xendn) at (xend) {$\qquad z$};
\coordinate (infinity) at (6,4);
\coordinate (midpoint) at (2,2);
\draw (origin) to[out=-45,in=-135,thick] (midpoint);
\draw (midpoint) to[out=45,in=180,thick] (infinity);
\end{tikzpicture}
$\quad$
&
$\quad$
\begin{tikzpicture}[scale=.5]
\coordinate (origin) at (0,2);
\coordinate (yend) at (0,6);
\coordinate (ystart) at (0,0);
\coordinate (xend) at (6,2);
\draw [->] (ystart) -- (yend);
\draw [->] (origin) -- (xend);
\node (yendn) at (yend) {$\qquad \alpha$};
\node (xendn) at (xend) {$\qquad z$};
\coordinate (infinity) at (6,4);
\draw (0,1) to[out=60,in=180,thick] (infinity);
\end{tikzpicture}
$\quad$
&
$\quad$
\begin{tikzpicture}[scale=.5]
\coordinate (origin) at (0,2);
\coordinate (yend) at (0,6);
\coordinate (ystart) at (0,0);
\coordinate (xend) at (6,2);
\draw [->] (ystart) -- (yend);
\draw [->] (origin) -- (xend);
\node (yendn) at (yend) {$\qquad \alpha$};
\node (xendn) at (xend) {$\qquad z$};
\coordinate (infinity) at (6,4);
\coordinate (start) at (0,5);
\draw (start) to[out=-30,in=180,thick] (infinity);
\end{tikzpicture}
\end{tabular}
\end{center}
\caption{The left sketch is excluded by (\ref{eq:nogo}), the middle is excluded by (\ref{eq:bc}), the right remains allowed and implies flux that is singular and not mutually BPS with the brane.\label{fig:3pics}}
\end{figure}
\subsection{No blackening of M$2$ branes}
\label{ssec:noblack}
The metric can be extended to include a {\it blackening factor}, $e^{2f(z)} = 1 - \sfrac{|k|}{z}$, which can potentially hide the singularity forced above behind a horizon. This would provide a possibility to resolve the singularity in M-theory. The new metric is
\begin{equation}
\d s^2_{11} = e^{2A} \left( - e^{2f} \d t^2 + \d x^2_2 \right) + e^{2B} \left( e^{-2f} \d z^2 + h(z)^2 \d \tilde{s}^2_3 + e^{2C} \d \tilde{s}^2_4\right),
\end{equation}
where the function $C=C(z)$ has been added for generality. This metric gives an expression similar to (\ref{eq:bianchi}) from the $G_4$ equation of motion
\begin{equation}
e^{-3A+6B+f} \partial_z^2 \alpha + \partial_z \alpha \partial_z \left( e^{-3A+6B + 4C} h^3 \right) e^{f-4C} h^{-3} = \lambda e^{-8C} |\tilde{F}_4|^2 + Q_{M2} \tilde{\delta}(M2)e^{f-4C}h^{-3}.
\end{equation}
This expression still provides the starting point for the above no-go
\begin{equation}
\left.\textrm{sgn}\, \alpha'' = \textrm{sgn}\, \alpha\right|_{\textrm{extrema}},\quad \alpha = \lambda e^{3A+f} , \quad e^{-3A + 6B +4C} h^3 \partial_z \alpha|_{z-\textrm{const.}} = Q.
\end{equation}
The flux Ansatz is still the same, with $X= -3A -f$. It is still the case that one can only have $\textrm{sgn}\, \alpha'' = \textrm{sgn}\, \alpha$ extremal points; the blackening factor has now taken over the role that the warp factor $A$ had before, since now $e^{2f}$ tends to zero at the horizon and gives a singularity in $\lambda$, now simply pushed to sit in front of the horizon.
\section{Conclusions}
\label{sec:con}
In this paper new AdS$_3$ solutions were presented. These solutions are (anti-)M$2$-brane solutions on a compact manifold, and the total curvature of the internal manifold is positive; however, depending on the flux Ansatz used, one section of the internal manifold can have negative curvature. To solve the so-called tadpole condition, the charge dissolved in the fluxes around the brane must be opposite to that of the brane itself, so as to create zero net charge internally. In type II sugra similar setups have been studied and present a singularity in the surrounding flux that is not directly sourced by the brane. Hence it should be expected that the smeared (anti-)M$2$-brane solutions presented here suffer from the same problem. This was also demonstrated in the paper. By localising the brane in one direction, a topological restriction on the profile of the flux distribution was derived. It was furthermore shown how this restriction forces a singularity in the flux density to develop. The topological argument was also extended to include a blackening factor that creates a horizon around the brane, in an attempt to hide the singularity for a possible M-theory resolution of it. In exactly the same manner as a singularity is forced to develop in the case without a horizon, the horizon simply shifts the singularity to appear at its surface.
The localisation problem described in this paper is mainly studied from the point of view of the solutions also presented within. However, the argument for why a singularity develops is more general and has, in the type II sugra systems, been shown to give a good general picture and understanding of these types of singularities. This result does indicate that the singularity should be present even after full backreaction.
\section*{Acknowledgments}
The author would like to thank Thomas Van Riet for discussions, motivation and suggesting this project, Ulf Danielsson for comments and suggestions on draft and layout and Lena Heijkenskj\"old for comments on the draft. The author is supported by the G{\"o}ran Gustafsson Foundation.
\section{Introduction}
In Nature most processes are not in thermodynamic equilibrium. For example, whenever a system is exposed to a flux of matter or energy in the stationary state, then it is generally not possible to describe it with standard methods of equilibrium statistical mechanics. For systems out of equilibrium the probability distribution of configurations is normally not known and a theory which allows one to calculate macroscopic quantities is not available. Therefore, general results valid for nonequilibrium processes are of great theoretical interest. This applies in particular to fluctuation relations \cite{evans93,evans94,gallavotti95,kurchan98,lebowitz99,maes99,Jia1,andrieux07,harris07,kurchan07}, which are fairly general statements valid for any system out of equilibrium.
A large variety of nonequilibrium systems can be modelled as continuous-time Markov jump processes, meaning that the system jumps spontaneously from one classical configuration to another with certain rates. If these rates obey detailed balance, the system relaxes into an equilibrium state. However, if detailed balance is broken, the system will relax into a nonequilibrium stationary state where the probability current between microstates is non-zero \cite{zia07}. Maintaining these non-vanishing probability currents requires an external drive which continuously produces entropy in the environment. Remarkably, this entropy production can be quantified without knowing the explicit structure of the environment~\cite{schnakenberg76}. It turns out that the average entropy production is zero if and only if detailed balance is fulfilled, meaning that entropy production can be used as an indicator of nonequilibrium.
Since the entropy production depends on the specific sequence of microscopic transitions (the so-called stochastic path), it is a fluctuating quantity with a certain probability distribution~\cite{seifert05}. A fluctuation relation is an equation that restricts the functional form of this distribution. There are two different kinds of fluctuation relations, namely, finite time fluctuation relations and infinite time fluctuation relations (see \cite{harris07}). Finite time fluctuation relations describe the distribution of the total entropy (system + environment) and hold exactly for any time interval~\cite{seifert05}. Relations of this kind include the Jarzynski equality \cite{jarzynski97} and Crooks relation \cite{croocks99}. On the other hand, infinite time fluctuation relations are asymptotically valid for the entropy produced in the environment (which is also known as action functional) \cite{lebowitz99}. Here we will deal with this infinite time relation, and we refer to it as the fluctuation theorem or the Gallavotti-Cohen-Evans-Morriss (GCEM) symmetry.
Large deviation theory \cite{ellis85,touchette09,hollander,oono} is the appropriate mathematical framework to investigate the fluctuation theorem. Using these methods the GCEM symmetry can be recast as a symmetry of the large deviation function for the probability distribution of the entropy. This symmetry does not yet allow us to calculate macroscopic observables, but at present it is the most general result for systems out of equilibrium.
Even though the fluctuation theorem is very general, one might argue that it is also very specific in the sense that it is valid only for one particular functional of the stochastic path, namely, the entropy produced in the environment. It is therefore interesting to find out if there are other physically relevant functionals with a similar symmetry. As a first step in this direction, we recently demonstrated that the height of an interface in a certain growth model defines a physically relevant time-integrated current different from the entropy with a symmetric large deviation function~ \cite{barato10}. Interestingly, this symmetry can only be observed for a particular system size of the model because only then the network of microscopic transitions acquires a particular form.
The objective of the present paper is to generalize the new symmetry found in \cite{barato10}. We prove that for a class of jump processes with a particular network of states we can find time-integrated currents different from entropy displaying a symmetric large deviation function. This symmetry is similar to the GCEM symmetry because it restricts the form of a large deviation function related to a time-integrated current. However, it is important to note that our symmetry is different from the GCEM symmetry because it refers to a different time-integrated current and has a slightly different physical origin.
It is known that the GCEM symmetry is a direct consequence of the fact that the entropy is given by the weight of a stochastic path divided by the weight of the time-reversed path. In this sense, the origin of the GCEM symmetry is related to time-reversal. An interesting question, which we address here for a particular case, would be to explain the origin of the symmetry. We show that it is also associated with time-reversal, but in a more hidden way: it comes to light only when we perform an appropriate grouping of stochastic trajectories.
The organization of the paper is as follows. In Sec.~2 we define time-integrated currents and briefly review the fluctuation theorem. In Sec.~3 we prove the new symmetry relation. Some physical examples where our symmetry appears in physically meaningful time-integrated currents are presented in Sec.~4. Before concluding we discuss the origin of the symmetry for a simple four-state system in Sec.~5.
\section{Time-integrated currents and the fluctuation theorem}
Continuous-time Markov jump processes are defined by a space of microscopic configurations $\conf\in\Omega$ in which the system evolves by spontaneous transitions $\conf \to \conf'$ at rate $\rate_{\conf \to \conf'}$. The probability $P(\conf,t)$ of finding such a system at time $t$ in the configuration $\conf$ evolves according to the master equation
\begin{equation}
\frac{d}{dt}P(\conf,t)= \sum_{\conf'\neq \conf}\Big(P(\conf',t)\rate_{\conf'\to \conf}-P(\conf,t)\rate_{\conf\to \conf'}\Big).
\label{master}
\end{equation}
For simplicity we assume that a stationary probability distribution $P(\conf)=\lim_{t\to\infty}P(\conf,t)$ exists. If this distribution obeys the detailed balance condition $P(\conf)\rate_{\conf\to \conf'}= P(\conf')\rate_{\conf'\to \conf}$ the stationary state is an equilibrium state, otherwise the system is out of equilibrium.
A stochastic trajectory during the time interval $\left[t_0,t_f\right]$ is a sequence of $M$ jumps
\begin{equation}
\overrightarrow{C}_{M,t}: \, \conf(t_0) \to \conf(t_1) \to \conf(t_2) \to \ldots \to \conf(t_M)
\end{equation}
taking place at times $t_1,t_2,\ldots,t_M \in \left[t_0,t_f\right]$. Note that the length of the time interval $T= t_f-t_0$ is fixed, while the number of jumps $M$ is a random variable that may assume different values for different trajectories. In what follows we assume that all microscopic transitions are reversible, i.e. $\rate_{\conf\to \conf'}\neq 0 \, \Leftrightarrow \, \rate_{\conf'\to \conf}\neq 0$, meaning that any stochastic path can be reversed.
\headline{Time-integrated currents}
A time-integrated current is a functional of the stochastic trajectory
\begin{equation}
J[\overrightarrow{C}_{M,t}]=\sum_{i=0}^{M-1}\theta _{\conf(t_i)\to \conf(t_{i+1})}\,,
\label{defcurrent}
\end{equation}
which changes its value by $\theta_{\conf\to\conf'}$ whenever a jump from $\conf\to\conf'$ occurs. The increments are assumed to be antisymmetric, i.e. $\theta _{\conf\to \conf^{\prime }}=-\theta _{\conf^{\prime }\to \conf}$. Using the master equation the expectation value is
\begin{equation}
\left\langle J\right\rangle =\int_{t_0}^{t_f}dt
\sum_{\conf,\conf^{\prime }}\theta_{\conf\rightarrow \conf^{\prime }}P(\conf,t)w _{\conf\rightarrow \conf^{\prime }}\,.
\end{equation}
In the stationary state this expression reduces to $\left\langle J\right\rangle = T
\sum_{\conf,\conf^{\prime }}\theta_{\conf\rightarrow \conf^{\prime }}P(\conf)w_{\conf\rightarrow \conf^{\prime }}$, i.e. the current increases on average linearly with $T$. Since the system relaxes towards a stationary state, in the limit $T\to\infty$ the quotient $J/T$ tends to a constant, which is
\begin{equation}
\lim_{T\to \infty}\frac{ J }{T}\,\rightarrow \sum_{\conf,\conf^{\prime }}\theta _{\conf\rightarrow
\conf^{\prime }}P(\conf)w_{\conf\rightarrow \conf^{\prime }}.
\end{equation}
More specifically, one expects that the corresponding probability distribution $P\left(\frac{J}{T}=x\right)$ becomes more and more peaked around this value as $T\to \infty$. Assuming that the large deviation principle holds, the large deviation function of this probability distribution is defined by \cite{ellis85,touchette09,hollander},
\begin{equation}
\lim_{T\to\infty}P\left(\frac{J}{T}=x\right) = \exp [-TI(x)].
\label{rate}
\end{equation}
The function $I(x)$ measures the exponential rate at which deviations of $J/T$ from its average value are suppressed.
\headline{Entropy production and fluctuation theorem}
A very prominent time-integrated current is the entropy. If we have a jump process describing a physical system in contact with external reservoirs, this quantity describes the amount of entropy which is generated by the external driving in a fictitious external environment~\cite{schnakenberg76,seifert05,hinrichsen11}. More specifically, each transition $\conf\to\conf'$ changes the entropy in the environment by $\ln\frac{\rate_{\conf\to\conf'}}{\rate_{\conf'\to\conf}}$. Therefore, the accumulated entropy production along a stochastic path $\overrightarrow{C}_{M,t}$ is a time-integrated current of the form
\begin{equation}
\label{entropy}
J_s[\overrightarrow{C}_{M,t}] =\sum_{i=0}^{M-1} \ln \frac{\rate_{\conf(t_i)\to\conf(t_{i+1})}}{\rate_{\conf(t_{i+1})\to\conf(t_i)}}\,.
\end{equation}
The fluctuation theorem reads ~\cite{lebowitz99}
\begin{equation}
I_s(x)-I_s(-x)=-x,
\label{GCEMforlarge}
\end{equation}
where $I_s(x)$ is the large deviation function associated with $\lim_{T\to\infty}P(\frac{J_s}{T}=x)$. This property of the entropic current is also referred to as the GCEM symmetry.
\headline{Determining the large deviation function}
The master equation (\ref{master}) can be rewritten in the form
\begin{equation}
\frac{d}{dt}P(\conf,t)= -\sum_{\conf'}\hat{\mathcal{L}}_{\conf\conf'}P(\conf',t),
\end{equation}
where $\hat{\mathcal{L}}$ is the Markov generator with elements
\begin{equation}
\hat{\mathcal{L}}_{\conf\conf'}=\left\{
\begin{array}{ll}
- w_{\conf'\to \conf} & \quad \textrm{ if } \conf\neq \conf'\\
\lambda(\conf) & \quad \textrm{ if } \conf=\conf'
\end{array}\right.\,.
\end{equation}
Here
\begin{equation}
\lambda(\conf)=\sum_{\conf'\neq \conf} \rate_{\conf\to \conf'}
\label{escaperate}
\end{equation}
denotes the \textit{escape rate} from configuration $\conf$. For each time-integrated current of the form (\ref{defcurrent}), one can now define a modified generator $\hat{\mathcal{L}}(z)$ by
\begin{equation}
\hat{\mathcal{L}}(z)_{\conf\conf'}=\left\{\begin{array}{ll}
-w_{\conf'\to \conf}\exp(-z \,\theta_{\conf'\to \conf}) & \quad \textrm{if } \conf\neq \conf'\\
\lambda(\conf) & \quad \textrm{if } \conf=\conf'
\end{array}\right.\,.
\label{modgenerator}
\end{equation}
The scaled cumulant generating function $\hat{I}(z)$, of the respective time-integrated current $J$, is defined by
\begin{equation}
\lim_{T\to\infty}\left\langle \exp (-zJ)\right\rangle = \exp (- T\hat{I}(z)).
\end{equation}
It can be shown that $\hat{I}(z)$ is given by the minimum eigenvalue of the modified generator (\ref{modgenerator}) \cite{lebowitz99}. Moreover, the G\"artner-Ellis theorem~\cite{ellis85,touchette09,hollander} states that $I(x)$ is given by the Legendre-Fenchel transform of $\hat{I}(z)$, i.e.,
\begin{equation}
I(x)=\textrm{max}_z\left(\hat{I}(z)-xz\right)\,,
\end{equation}
with $z$ real. Note that $\hat{I}(0)=0$ because $\hat{\mathcal{L}}(0)$ reduces to the Markov generator $\hat{\mathcal{L}}$, whose minimum eigenvalue is $0$. The GCEM symmetry (\ref{GCEMforlarge}) in terms of the scaled cumulant generating function of the entropy $\hat{I}_s(z)$ is
\begin{equation}
\hat{I}_s(z)=\hat{I}_s(1-z).
\label{GCEMnonoscaled}
\end{equation}
The advantage of dealing with the scaled cumulant generating function is that it is easier to calculate in several situations. In the present case this amounts to determining the minimum eigenvalue of the Perron-Frobenius-type matrix (\ref{modgenerator}). Note that the curve $\hat{I}_s(z)$ is concave, vanishes at $z=0$ and $z=1$, and by symmetry reaches its maximum at $z=1/2$.
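As an illustration of this procedure, the following minimal Python sketch (not part of the original derivation; the three-state rates below are arbitrary) builds the modified generator (\ref{modgenerator}) for the entropy current and checks the symmetry (\ref{GCEMnonoscaled}) numerically:
\begin{verbatim}
import numpy as np

# Hypothetical rates w[c, c'] of a three-state cycle (any irreducible choice works).
w = np.array([[0.0, 2.0, 0.5],
              [0.3, 0.0, 1.5],
              [1.0, 0.4, 0.0]])
N = len(w)

def modified_generator(z):
    # Off-diagonal: -w_{c->c'} exp(-z * theta_{c->c'}), with theta the entropy
    # increment ln(w_{c->c'} / w_{c'->c}); diagonal: escape rate lambda(c).
    L = np.zeros((N, N))
    for c in range(N):
        for cp in range(N):
            if c != cp:
                theta = np.log(w[c, cp] / w[cp, c])
                L[cp, c] = -w[c, cp] * np.exp(-z * theta)
        L[c, c] = w[c].sum()
    return L

def scgf(z):
    # Scaled cumulant generating function = minimum eigenvalue of the modified generator.
    return np.linalg.eigvals(modified_generator(z)).real.min()

for z in (0.1, 0.25, 0.4):
    print(scgf(z), scgf(1.0 - z))   # the two values coincide (GCEM symmetry)
\end{verbatim}
The symmetry is exact here because the modified generator for the entropy satisfies $\hat{\mathcal{L}}_s(1-z)=\hat{\mathcal{L}}_s(z)^{\mathrm{T}}$, so both operators share the same spectrum.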
\headline{Non-entropic time-integrated currents}
Is it possible to find other time-integrated currents with a GCEM-like symmetry? Previous works (see e.g. \cite{harris07, bodineau07}) have shown that such currents do exist. However, these examples are unsatisfactory in so far as the proposed currents differ from the entropy only initially while they become proportional to the entropy in the long time limit. If we call such a current $J_r$, this means that $\hat{I}_r(z)=\hat{I}_r(E-z)$. Moreover, since the current $J_r$ becomes proportional to entropy in the long time limit, $\hat{I}_r(zE)=\hat{I}_s(z)$, i.e., the rescaled scaled cumulant generating functions of $J_r$ and $J_s$ have the same functional form.
The main objective of the present paper is to show that it is possible to find symmetric currents which differ from the entropy even in the limit $T\to \infty$, so that their rescaled scaled cumulant generating functions differ from $\hat{I}_s(z)$, even though they are both symmetric and touch the horizontal axis at the same points (see below in Figs. \ref{legendremechanical}, \ref{legendreRSOS}, and \ref{legendreDavid}).
\headline{Counting the degrees of freedom}
Before proceeding let us point out that there are certain restrictions that reduce the degrees of freedom in the space of time-integrated currents. As such currents are specified by an antisymmetric matrix $\theta_{\conf\conf'}=-\theta_{\conf'\conf}$ there are in principle $N(N-1)/2$ degrees of freedom, where $N$ is the number of states. However, not all of them are independent. To see this let us consider the elementary time-integrated currents
\begin{equation}
J_{\conf \to \conf'}[\overrightarrow{C}_{M,t}] = \sum_{i=0}^{M-1} \left( \delta_{\conf,\conf(t_i)}\delta_{\conf',\conf(t_{i+1})}-\delta_{\conf',\conf(t_i)}\delta_{\conf,\conf(t_{i+1})} \right)
\label{defelementary}
\end{equation}
from which all other currents can be constructed by linear combination. The current $J_{\conf \to \conf'}[\overrightarrow{C}_{M,t}]$ is simply the number of transitions $\conf\to\conf'$ minus the number of reverse transitions $\conf'\to\conf$ along the stochastic path $\overrightarrow{C}_{M,t}$. The sum over all destinations
\begin{equation}
N_\conf = \sum_{\conf'} J_{\conf \to \conf'}
\end{equation}
is just the number of times the system reaches the configuration $\conf$ minus the number of times this configuration is left, hence $N_\conf$ can only take the values $0$ and $\pm 1$. This implies that
\begin{equation}
\sum_{\conf'}\frac{J_{\conf \to\conf'}}{T}\to 0
\label{restcurr}
\end{equation}
in the limit $T \to \infty$, meaning that these particular linear combinations of elementary currents do not contribute to the large-deviation function. This reduces the degrees of freedom by the number of independent relations (\ref{restcurr}), which is maximally $N$.
\section{A symmetric time-integrated current different from entropy}
\label{sec3}
\begin{figure}
\begin{center}
\includegraphics[width=133mm]{./fig1}
\caption{Artificial network of configurations $\conf_k^i$ with periodic boundary conditions. The process can jump between configurations that are connected by a line. }
\label{networkgeneral}
\end{center}
\end{figure}
In this section we prove that for jump processes with a particular network of states and suitably chosen transition rates one can define a current with a symmetric large deviation function which differs from the one for the entropy. The structure of this network is shown in Fig.~\ref{networkgeneral}. It consists of configurations $\conf_k^i$ organized in columns labeled by a lower index $k=0,\dots,Q-1$, each of them containing $n_k$ different configurations labeled by an upper index $i=1,\ldots,n_k$. Spontaneous jumps are allowed only between configurations in neighboring columns with periodic boundary conditions, as indicated by straight lines in the figure. Moreover, we assume that the number of columns $Q$ is even and that even columns carry only a single configuration, i.e. $n_0=n_2=n_4=\ldots=1$. This forces the system to go through periodically arranged bottlenecks of single configurations.
On this network of configurations we consider the current
\begin{equation}
\label{ourcurrent}
J_r \;=\; \sum_{k=0}^{Q-1} \theta_k \sum_{i_k=1}^{n_k} \sum_{i_{k+1}=1}^{n_{k+1}} J_{\conf_k^{i_k} \to \conf_{k+1}^{i_{k+1}}}\,,
\end{equation}
where $\theta_k$ are numbers and $J_{\conf\to \conf'}$ are the elementary currents defined in Eq.~(\ref{defelementary}). This current increases by $\theta_k$ if the process jumps from column $k$ to $k+1$ and decreases by the same amount if the system jumps in the opposite direction, no matter which of the configurations within a column is selected.
\headline{Counting cycles}
Assume that the process starts at a particular configuration, say $\conf_0$. Depending on the yet unspecified transition rates, the process will perform a random walk from column to column. Whenever it returns to its starting point $\conf_0$, it is easy to see that the current defined above will take the value $J_r=m\Theta$, where $\Theta=\sum_{k=0}^{Q-1}\theta_k$ and $m\in\mathbb{Z}$. The number $m$ tells us how often the system completed a cycle through all columns $0\to 1\to 2 \to\ldots\to Q-1\to 0$. Therefore, if the rates are chosen in such a way that the random walk through the columns is biased to the right, $m$ will be on average positive. With this picture in mind it is intuitively clear that the expectation value of $m$ in the long-time limit is related to the average current through each of the bottlenecks.
To prove this intuitive argument, we apply the restriction~(\ref{restcurr}) to each of the configurations in the network. For single but multiply connected configurations at even columns this restriction tells us that the incoming current is equal to the outgoing current in the long time limit, i.e. for even $k$ we have
\begin{equation}
\sum_{i=1}^{n_{k-1}}\frac{J_{\conf_{k-1}^{i}\to \conf_k}}{T}=\sum_{i=1}^{n_{k+1}}\frac{J_{\conf_{k}\to \conf_{k+1}^{i}}}{T}\,.
\label{rest2}
\end{equation}
For the simply connected configurations in columns with odd $k$, we have instead
\begin{equation}
\frac{J_{\conf_k^{i}\to \conf_{k+1}}}{T}= \frac{J_{\conf_{k-1}\to \conf_k^{i}}}{T}\,,
\label{rest1}
\end{equation}
where $i=1,\ldots,n_k$. From the above relations it is easy to show that the current in the large deviation regime is given by
\begin{equation}
\frac{J_r}{T}= \Theta\sum_{i=1}^{n_1}\frac{J_{\conf_0\to \conf_1^{i}}}{T}\,.
\label{1cld}
\end{equation}
This result proves that the large deviation properties of the current do not depend individually on the contributions $\theta_k$ but only on their sum $\Theta=\sum_k\theta_k$. This means that all currents of the form~(\ref{ourcurrent}) are proportional in the long-time limit and therefore characterized (up to rescaling) by the same large deviation function.
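For completeness, the intermediate step is the following: combining (\ref{rest2}) for even $k$ with (\ref{rest1}) for odd $k$ shows that every column-to-column current equals the current leaving the first bottleneck,
\begin{equation}
\sum_{i_k=1}^{n_k}\sum_{i_{k+1}=1}^{n_{k+1}}\frac{J_{\conf_k^{i_k}\to \conf_{k+1}^{i_{k+1}}}}{T} \;=\; \sum_{i=1}^{n_1}\frac{J_{\conf_0\to \conf_1^{i}}}{T}\qquad \textrm{for all } k,
\end{equation}
so that weighting each term by $\theta_k$ and summing over $k$ reproduces (\ref{1cld}).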
\headline{Structure of the characteristic polynomial}
We now prove that for suitably chosen transition rates the scaled cumulant generating function of $J_r$ exhibits the symmetry (\ref{GCEMnonoscaled}). Following Ref.~\cite{andrieux07} we first consider the characteristic polynomial
\begin{equation}
\label{charpol1}
P(z,x)\;=\;\det\bigl(x I-\hat{\mathcal{L}}(z)\bigr) \;=\; \sum_\pi \mbox{sgn}(\pi) \prod_c \Bigl(x\delta_{c,\pi_c}-\hat{\mathcal{L}}(z)_{c,\pi_c} \Bigr)\,
\end{equation}
where the sum runs over all permutations $\pi$ of the available configurations. Here $\hat{\mathcal{L}}(z)$ denotes the modified generator (\ref{modgenerator}) with the matrix elements
\begin{equation}
\hat\mathcal{L}(z)_{\conf',\conf} \;=\; -\rate_{\conf\to\conf'}\exp(-z\,\theta_{\conf\to\conf'})+\delta_{\conf,\conf'}\lambda(\conf)
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=130mm]{./fig2}
\caption{Local transition cycle (left) and complete transition cycle through all columns (right).}
\label{cycles}
\end{center}
\end{figure}
where $\lambda(\conf)$ is again the escape rate~(\ref{escaperate}). Since the off-diagonal entries represent the possible microscopic transitions, only those permutations which correspond to a set of non-intersecting transition cycles contribute to the determinant. In the network of configurations shown in Fig.~\ref{networkgeneral} there are three types of closed transition cycles, namely local cycles which do not change the current $J_r$, and cycles extending over the whole system in positive or negative direction, changing the current by $\pm \Theta$ (see Fig.~\ref{cycles}). In the determinant this means that permutations corresponding to local cycles are $z$-independent, since the exponential factors drop out. Conversely, complete cycles extending over the whole system contribute to the sum with terms proportional to $e^{\pm z\Theta}$, respectively. We can therefore split the characteristic polynomial (\ref{charpol1}) into three parts
\begin{equation}
\label{pol}
P(z,x)\;=\;f(x) e^{-z \Theta} + \overline f(x) e^{z \Theta} + g(x)\,.
\end{equation}
Labelling each complete cycle $\conf_0\to\conf_1^{i_1}\to\conf_2\to\conf_3^{i_3}\ldots\to\conf_{Q-1}^{i_{Q-1}}\to\conf_0$ by a multiindex $\multiindex{i}:=(i_1,i_3,\ldots,i_{Q-1})$
the first two functions can be expressed as
\begin{equation}
\label{fandfbar}
f(x) \;=\; \sum_{\multiindex i} T_{\multiindex i} R_{\multiindex i}(x)\,,\qquad
\overline f(x) \;=\; \sum_{\multiindex i} \overline T_{\multiindex i} R_{\multiindex i}(x)
\end{equation}
where $T_{\multiindex i}$ and $\overline T_{\multiindex i}$ are the products of all rates along the cycle in forward and backward direction, respectively. Explicitly,
\begin{equation}
T_{\multiindex i}= \rate_{\conf_0\to \conf_{1}^{i_{1}}}\rate_{\conf_{1}^{i_{1}}\to \conf_2}\ldots\rate_{\conf_{Q-2}\to \conf_{Q-1}^{i_{Q-1}}}\rate_{\conf_{Q-1}^{i_{Q-1}}\to \conf_0}\,, \qquad
\overline T_{\multiindex i}=\rate_{\conf_0\to\conf_{Q-1}^{i_{Q-1}}}\rate_{\conf_{Q-1}^{i_{Q-1}}\to\conf_{Q-2}}\ldots\rate_{\conf_2\to\conf_{1}^{i_{1}}}\rate_{\conf_{1}^{i_{1}}\to \conf_0}.
\end{equation}
Moreover, $R$ comprises all diagonal entries coming from the configurations that are not involved in the cycle:
\begin{equation}
\label{Req}
R_{\multiindex i}(x) = \prod_{k=1}^{Q/2} \,\prod_{j_{2k-1}=1, j_{2k-1}\neq i_{2k-1}}^{n_{2k-1}} \Bigl( -\lambda(\conf_{2k-1}^{j_{2k-1}})+x \Bigr)
\end{equation}
There is also a complicated expression for $g(x)$ but, as we will see below, this function is not needed for finding the symmetry.
\headline{Symmetry condition and derivation of the constraints}
Clearly, a symmetry of the characteristic polynomial $P(z,x)=P(E-z,x)$ is a sufficient condition for the symmetry $\hat{I}_r(z)=\hat{I}_r(E-z)$ of the minimal eigenvalue. Because of Eq.~(\ref{pol}) this means that the condition
\begin{equation}
f(x)=\exp(E\Theta)\overline{f}(x)
\label{constraints}
\end{equation}
implies a GCEM symmetry of the current defined in (\ref{ourcurrent}). Since $f(x)$ and $\overline f(x)$ are polynomials in $x$ of order $N-Q$, where $N=\sum_k n_k$ is the total number of configurations, we can compare the coefficients on both sides. Equating the leading order $x^{N-Q}$ we obtain
\begin{equation}
\exp(E \Theta)=\frac{\sum_{\multiindex i}T_{\multiindex i}}{\sum_{\multiindex i}\overline T_{\multiindex i}}\,.
\label{eqsymmetricfactor}
\end{equation}
The above formula says that $\exp(E \Theta)$ is given by the sum of the products of the transition rates of all possible forward cycles that go through all the columns (increasing the current by $\Theta$), divided by the sum of the products of the transition rates of all possible backward cycles.
The comparison of the $N-Q$ remaining orders yields a set of constraints for the transition rates. To this end we rewrite Eq.~(\ref{Req}) as a power series
\begin{equation}
\label{ReqPowerSeries}
R_{\multiindex i}(x) = \, \sum_{n=0}^{N-Q}(-1)^{n} x^{N-Q-n} \, \sigma_n(\Lambda_{\multiindex i})\,,
\label{defR}
\end{equation}
where $\sigma_n(Y_1,Y_2,\ldots,Y_{N-Q})=\sum_{1\le l_1<l_2<\ldots<l_n\le N-Q}Y_{l_1}Y_{l_2}\ldots Y_{l_{n}}$ is the elementary symmetric polynomial and where
\begin{equation}
\Lambda_{\multiindex i}\;=\;\Bigl\{\,\lambda(c_k^{j_k})\quad \Bigl|\quad k=1,3\ldots,Q-1; \quad j_k=1,\ldots,n_k; \quad j_k\neq i_k\Bigr\}
\label{lambdabig}
\end{equation}
is a set of $N-Q$ arguments, consisting of all escape rates which are not part of the cycle labeled by $\multiindex i$. Inserting this power series into Eq.~(\ref{fandfbar}) and comparing the coefficients in Eq.~(\ref{constraints}) one is led to $N-Q$ constraints for the transition rates of the form
\begin{equation}
\label{consym}
\frac{\sum_{\multiindex i} T_{\multiindex i}}{\sum_{\multiindex i} \overline T_{\multiindex i}} \;=\;
\frac{\sum_{\multiindex i} T_{\multiindex i} \, \sigma_n(\Lambda_{\multiindex i})}{\sum_{\multiindex i} \overline T_{\multiindex i} \, \sigma_n(\Lambda_{\multiindex i})}
\qquad n=1,2,\ldots,N-Q
\end{equation}
\headline{Simple solutions of the constraints}
Even though these constraints appear to be complicated, they are fulfilled trivially by setting
\begin{equation}
\lambda(\conf_k^i):=\lambda_k
\label{constressimple}
\end{equation}
for all $k=0,\ldots,Q-1$ and $i=1,\ldots,n_k$, i.e. all configurations in the same column have the same escape rate. In this case the function $R_{\multiindex i}(x)=R(x)$ no longer depends on the specific choice of the cycle labeled by the multiindex $\multiindex i$, so that the polynomials $f(x)$ and $\overline{f}(x)$ reduce to
\begin{equation}
f(x)=R(x)\sum_{\multiindex i}T_{\multiindex i}\,,\qquad
\overline f(x)=R(x)\sum_{\multiindex i}\overline T_{\multiindex i}\,.
\end{equation}
Clearly these functions satisfy equation (\ref{constraints}), which implies a symmetric characteristic polynomial and therewith a GCEM-like symmetry of the current~(\ref{ourcurrent}).
Another trivial solution that fulfills the constraints is the following. Let us take a column $k$ that has more than one state and define the quantity
\begin{equation}
F_{k}^{i_k}= \frac{\rate_{\conf_{k-1}\to \conf_k^{i_k}}\rate_{\conf_k^{i_k}\to \conf_{k+1}}}{\rate_{\conf_{k+1}\to \conf_k^{i_k}}\rate_{\conf_k^{i_k}\to \conf_{k-1}}}\,,
\label{contri2}
\end{equation}
where $i_k=1,2,\ldots,n_k$. If this quantity is constant in all columns with more than one state, i.e.
\begin{equation}
F_{{k}}=F_{{k}}^{i_{{k}}}\qquad\textrm{for }i_{{k}}=1,\ldots,n_{{k}},
\label{Fkconst}
\end{equation}
then the ratio
\begin{equation}
\frac{T_{\multiindex i}}{\overline T_{\multiindex i}} \;=\; \prod_{k=1}^{Q/2} F_{2k-1}
\end{equation}
is independent of the cycle labeled by the multiindex $\multiindex i$, because reversing any complete cycle changes the product of the rates by the same factor; hence we arrive at the symmetry condition
\begin{equation}
f(x)= \left(\prod_{k=1}^{Q/2}F_{2k-1}\right)\overline{f}(x)\,.
\end{equation}
Finally, we get a larger class of solutions by mixing the conditions (\ref{constressimple}) and (\ref{Fkconst}), i.e., some of the $Q$ columns with more than one state have constant escape rates while the remaining columns have constant $F_{{k}}$. Using similar arguments one can again show that the ratio $f/\overline f$ is constant and, therefore, the constraints (\ref{consym}) are fulfilled. Nevertheless, we note that not all solutions of the constraint equations are of this type.
\headline{The relation between $J_r$ and $J_s$}
Depending on the transition rates the current $J_r$ may be proportional to the entropy. If this is the case, then $J_r$ displays the GCEM symmetry. However, when $J_r$ is not proportional to the entropy in the large deviation regime and still has a symmetric large deviation function, we have a symmetry different from the GCEM symmetry. In the following we derive the condition on the transition rates such that $J_s$ is proportional to $J_r$.
For the network of states shown in Fig. \ref{networkgeneral} the entropy current is given by
\begin{equation}
\frac{J_s}T\;=\; \frac1T
\sum_{k=1,3,5,\ldots}^{Q-1}\,\left(
\sum_{i=1}^{n_k}J_{\conf_{k-1}\to \conf_k^{i}}\ln\frac{\rate_{\conf_{k-1}\to \conf_k^{i}}}{\rate_{\conf_k^{i}\to \conf_{k-1}}}+\sum_{i=1}^{n_k}J_{\conf_k^{i}\to \conf_{k+1}}\ln\frac{\rate_{\conf_k^{i}\to \conf_{k+1}}}{\rate_{\conf_{k+1}\to \conf_k^{i}}}
\right)\,.
\end{equation}
Using relation (\ref{rest1}), in the long time limit the above expression becomes
\begin{equation}
\frac{J_s}T\;=\; \frac1T
\sum_{k=1,3,5,\ldots}^{Q-1}\,\sum_{i=1}^{n_k}J_{\conf_{k-1}\to \conf_k^{i}}\ln\frac{\rate_{\conf_{k-1}\to \conf_k^{i}}\rate_{\conf_k^{i}\to \conf_{k+1}}}{\rate_{\conf_{k+1}\to \conf_{k}^{i}}\rate_{\conf_k^{i}\to \conf_{k-1}}}\,.
\label{becentropy}
\end{equation}
Let us first consider the second solution of the constraints (\ref{Fkconst}). In this case the quotient of rates appearing in the logarithm does not depend on $i$, therefore, using relation (\ref{rest2}) we obtain
\begin{equation}
\frac{J_s}{ T} \;=\; \frac1T \left(\sum_{k=1,3,5,\ldots}^{Q-1}\ln F_k \right) \left(\sum_{i_1=1}^{n_1}J_{\conf_0\to \conf_1^{i_1}} \right)\,.
\end{equation}
In this case $J_r$ is proportional to the entropy and thus exhibits a GCEM symmetry. However, we can also conclude in the opposite direction: for systems obeying the constraint equations (\ref{consym}) in such a way that (\ref{Fkconst}) is not satisfied for all $k$, the current $J_r$ is not proportional to $J_s$. In this case we expect a new type of symmetry, different from the GCEM symmetry.
We point out that there is also a multi-dimensional version of the fluctuation theorem \cite{lebowitz99,andrieux07,faggionato11}, where the joint probability distribution of a set of currents (which when summed give the entropy) displays a symmetric large deviation function. It is
important to note that our symmetry is also different from this multi-dimensional case. This becomes clear if the fluctuation theorem obtained in \cite{andrieux07} using the cycle decomposition approach \cite{schnakenberg76} is considered. Following \cite{andrieux07}, the fact that the product of the
transition rates associated with a fundamental cycle in a given direction divided by the product of the transition rates in the opposite direction is
independent of the states within the cycle, is analogous to our condition (\ref{Fkconst}) holding for all columns.
\headline{Transitions network with odd $Q$}
So far we have considered transition networks shown in Fig. \ref{networkgeneral} with an even number of columns $Q$. The above results can be easily generalized to the case where $Q$ is odd. In this case there are at least two adjacent columns with only one configuration.
In order to do this generalization we can consider an index $\tilde{k}$ that runs only over columns with more than one state. Note that we might also have odd columns with one state. Moreover we take $\tilde{Q}$ as the number of columns with more than one state, such that $\tilde{k}=1,\ldots,\tilde{Q}$. For the case where we have an even number of columns and all odd columns have more than one state we have $\tilde{Q}=Q/2$ and $k=2\tilde{k}-1$.
Note that the number of possible complete forward (and backward) cycles is given by $\prod_{\tilde{k}=1}^{\tilde{Q}}n_{\tilde{k}}$ and only the columns with more than one state are relevant in determining a complete cycle. Now if we consider the multiindex ${\multiindex i}=\{i_{\tilde{k}}\}$ and change definition (\ref{lambdabig}) to
\begin{equation}
\Lambda_{\multiindex i}\;=\;\Bigl\{\,\lambda(c_{\tilde{k}}^{j_{\tilde{k}}})\quad \Bigl|\quad \tilde{k}=1,2\ldots,\tilde{Q}; \quad j_{\tilde{k}}=1,\ldots,n_{\tilde{k}}; \quad j_{\tilde{k}}\neq i_{\tilde{k}}\Bigr\}
\end{equation}
the formulas (\ref{eqsymmetricfactor}), (\ref{defR}) and (\ref{consym}) take the same form if $Q$ is odd. Therefore, the proof also works for the $Q$ odd case.
\section{Examples}
In order to illustrate how this new symmetry can be established we consider some examples of physical systems described by jump processes with a network of states of the type shown in Fig. \ref{networkgeneral}, where the current $J_r$ has a clear physical meaning. The examples include a molecular motor in a particular network of states, the so-called restricted solid-on-solid (RSOS) model with four sites, and a growth model where the nucleation of the first particle and the completion of the layer take place on time-scales of the same order.
\headline{Molecular motor with four different states}
A molecular motor is a biological protein that converts chemical energy into mechanical work by hydrolysis of adenosine triphosphate (ATP) to adenosine diphosphate (ADP) and phosphate (P)~\cite{howard01} (see also \cite{lacoste} for a review of fluctuation relations for molecular motors). Unlike macroscopic motors, which move unidirectionally in a well-defined cycle, a molecular motor performs a random walk driven by the chemical potential difference $\Delta \mu = \mu_{\rm ATP}-\mu_{\rm ADP}-\mu_{\rm P}$, moving preferentially forward if ATP is in excess. This allows one to model molecular motors by stochastic jump processes~\cite{juelicher97}. Assuming that the motor is thermally equilibrated with its local environment, the jump rate is proportional to $\exp(\beta(\Delta \mu-W))$, where $W=Fl$ is the mechanical work needed to transport the cargo with force $F$ over the distance $l$.
\begin{figure}
\begin{center}
\includegraphics[width=90mm]{./fig3}
\caption{Configuration network of a molecular motor with two possible transition cycles $A$ and $B$ (see text).}
\label{network4state}
\end{center}
\end{figure}
The simplest model of a molecular motor would require a cycle of two states. In such a model the consumption of ATP would be proportional to the mechanical work. However, realistic molecular motors are often characterized by several possible transition cycles, i.e. the motor protein can advance by one step through different paths of intermediate configurations with different energy consumption. In what follows we consider a hypothetical network of four configurations with two transition paths denoted by $A$ and $B$, as shown in Fig. \ref{network4state}. One interesting question is whether such a nano-machine leads to a better efficiency at maximum power than a simple linear chain \cite{seifert08}. Here we are interested in providing a simple example where our theory can be applied. Clearly, this network belongs to the class of networks studied in the previous section with $Q=3$ columns.
If both paths are characterized by different chemical potential differences $\Delta \mu_A$ and $\Delta \mu_B$, the jump rates in the positive direction will be proportional to $X_{A}=\exp\left[\beta\left(\Delta\mu_{A}-Fl\right)\right]$ and $X_{B}=\exp\left[\beta\left(\Delta\mu_{B}-Fl\right)\right]$, respectively, while the rates for jumps in the opposite direction do not depend on the chemical potential. Moreover, we included a constant factor $Y$ in path $B$, accounting for a possibly different attempt frequency. Finally we assume that the transition rates between $\conf_0$ and $\conf_2$ are symmetric and equal to 1, as shown in Fig.~\ref{network4state}. The corresponding time evolution operator in the canonical basis $\{\conf_0,\conf_A,\conf_B,\conf_2\}$ reads
\begin{eqnarray}
\mathcal{L}=
\left(
\begin{array}{cccc}
(1+X_{A}+YX_{B}) & -1 & -Y & -1 \\
-X_{A} &(1+X_{A}) &0 & -1 \\
-YX_{B} &0 &Y(1+X_{B}) &-Y \\
-1 &-X_{A} &-YX_{B} &2+Y \\
\end{array}\right)\,.
\end{eqnarray}
We are now going to show that the propagation of the molecular motor defines a time-integrated current with a clear physical meaning that exhibits a symmetry in the large deviation regime that differs from the one for the entropy. To this end, we note that there are two possible complete cycles, one going through $\conf_A$ and the other through $\conf_B$. The product of the rates along
these cycles in forward and backward direction are given by
\begin{eqnarray}
T_A &=&\rate_{\conf_0\to \conf_A}\rate_{\conf_A\to \conf_2}\rate_{\conf_2\to \conf_0} \;=\; X_A^2\nonumber\\
\overline T_A &=& \rate_{\conf_0\to \conf_2}\rate_{\conf_2\to \conf_A}\rate_{\conf_A\to \conf_0} \;=\; 1\nonumber\\
T_B &=& \rate_{\conf_0\to \conf_B}\rate_{\conf_B\to \conf_2}\rate_{\conf_2\to \conf_0} \;=\; Y^2X_B^2\nonumber\\
\overline T_B &=& \rate_{\conf_0\to \conf_2}\rate_{\conf_2\to \conf_B}\rate_{\conf_B\to \conf_0} \;=\;Y^2
\end{eqnarray}
Following the procedure described in the previous section we consider the time-integrated current (\ref{ourcurrent}). Note that the polynomials (\ref{fandfbar}) are of order one for the present model. Comparing the leading order terms in Eq.~(\ref{constraints}) we obtain the condition
\begin{equation}
\exp(E \Theta)=\frac{T_A+T_B}{\overline{T}_A+\overline{T}_B}\,.
\end{equation}
Comparing the terms of order $x^0$ we obtain a constraint on the transition rates of the form
\begin{equation}
\frac{T_A+T_B}{\overline{T}_A+\overline{T}_B}=\frac{T_A\lambda_B+T_B\lambda_A}{\overline{T}_A\lambda_B+\overline{T}_B\lambda_A}\,,
\label{con4}
\end{equation}
where $\lambda_A=X_A+1$ and $\lambda_B=Y(X_B+1)$ are the escape rates of the configurations $\conf_A$ and $\conf_B$, respectively. This equation has two solutions, namely $T_A/\overline{T}_A=T_B/\overline{T}_B$, for which (\ref{Fkconst}) holds so that the current is proportional to the entropy in the large deviation regime, and $\lambda_A=\lambda_B$, which is the one that gives a symmetry different from GCEM. Therefore, we need $\lambda_A=\lambda_B$, which implies
\begin{equation}
Y=(1+X_A)/(1+X_B).
\end{equation}
Without loss of generality we choose $\Delta\mu_A>\Delta\mu_B$, which gives $Y>1$. In this case, the above restriction on $Y$ means that the cycles that go through configurations with higher chemical potential evolve at a slower time-scale.
Assuming this condition to hold, we consider a time-integrated current of the form (\ref{ourcurrent}), namely the mechanical work
\begin{equation}
J_m \;=\; Fl\big(J_{\conf_A\to \conf_2}+J_{\conf_{0}\to \conf_A}+J_{\conf_B\to \conf_2}+J_{\conf_{0}\to \conf_B}\big)
\end{equation}
for which the factor~(\ref{eqsymmetricfactor}) is given by
\begin{equation}
E=\frac{1}{2Fl}\ln\frac{T_A+T_B}{\overline{T}_A+\overline{T}_B}=\frac{1}{2Fl}\ln\left[\frac{X_{A}^{2}\left(1+X_{B}\right)^{2}+X_{B}^{2}\left(1+X_{A}\right)^{2}}{\left(1+X_{A}\right)^{2}+\left(1+X_{B}\right)^{2}}\right]\,.
\label{eqE}
\end{equation}
In order to demonstrate that this quantity exhibits a non-GCEM symmetry, we compute the smallest eigenvalues $\hat{I}_s(z)$ and $\hat{I}_m(z)$ of the modified time evolution operators (\ref{modgenerator}) for both the entropy
\begin{figure}
\begin{center}
\includegraphics[width=85mm]{./fig4}
\caption{Scaled cumulant generating functions for $F=\beta=l=1$, $\Delta\mu_A=3$, and $\Delta\mu_B=2$. The black line corresponds to the entropy and the red line corresponds to mechanical work (with $z\to z/E$). They are obtained from the minimum eigenvalue of (\ref{entropymolecular}) and (\ref{mechmolecular}), respectively. They are both symmetric and not proportional.}
\label{legendremechanical}
\end{center}
\end{figure}
\begin{eqnarray}
\hat{\mathcal{L}}_s(z)=
\left(
\begin{array}{cccc}
(1+X_{A}+YX_{B}) & -X_A^{z} & -YX_B^{z} & -1 \\
-X_{A}^{(1-z)} &(1+X_{A}) & 0 & -X_A^{z} \\
-YX_{B}^{(1-z)} &0 &Y(1+X_{B}) &-YX_B^{z} \\
-1 &-X_{A}^{(1-z)} &-YX_{B}^{(1-z)} &2+Y \\
\end{array}\right)
\label{entropymolecular}
\end{eqnarray}
and for the mechanical work
\begin{eqnarray}
\hat{\mathcal{L}}_m(z)=
\left(
\begin{array}{cccc}
(1+X_{A}+YX_{B}) & -1\exp(Flz) & -Y\exp(Flz) & -1 \\
- X_{A}\exp(-Flz) &(1+X_{A}) & 0 & -1\exp(Flz) \\
-YX_{B}\exp(-Flz) &0 &Y(1+X_{B}) &-Y\exp(Flz) \\
-1 &-X_{A}\exp(-Flz) &-YX_{B}\exp(-Flz) &2+Y \\
\end{array}\right)\,.
\label{mechmolecular}
\end{eqnarray}
As expected, the large deviation function for the entropy obeys the GCEM symmetry $\hat{I}_s(z)=\hat{I}_s(1-z)$, while the mechanical work fulfills the symmetry $\hat{I}_m(z)=\hat{I}_m(E-z)$ with $E$ given in (\ref{eqE}). Moreover, plotting $\hat{I}_s(z)$ together with $\hat{I}_m(Ez)$ in Fig.~\ref{legendremechanical}, we see that the rescaled scaled cumulant generating functions are different. Therefore, the mechanical work is a time-integrated current with a clear physical meaning that exhibits a new type of symmetry, different from the GCEM symmetry of the entropy.
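For concreteness, the following minimal Python sketch (ours, not part of the original text) reproduces this check numerically: it builds $\hat{\mathcal{L}}_m(z)$ from (\ref{mechmolecular}) with the parameters of Fig.~\ref{legendremechanical}, computes $E$ directly from the cycle products, and verifies $\hat{I}_m(z)=\hat{I}_m(E-z)$:
\begin{verbatim}
import numpy as np

# Parameters of Fig. 4: F = beta = l = 1, dmu_A = 3, dmu_B = 2,
# with Y fixed by the condition lambda_A = lambda_B.
F = l = beta = 1.0
XA = np.exp(beta * (3.0 - F * l))
XB = np.exp(beta * (2.0 - F * l))
Y = (1.0 + XA) / (1.0 + XB)

# Forward/backward cycle products and the symmetry factor E (Theta = 2 F l).
TA, TAbar = XA**2, 1.0
TB, TBbar = (Y * XB)**2, Y**2
E = np.log((TA + TB) / (TAbar + TBbar)) / (2.0 * F * l)

def L_mech(z):
    # Modified generator for the mechanical work, basis {c0, cA, cB, c2}.
    p, m = np.exp(F * l * z), np.exp(-F * l * z)
    return np.array([
        [1 + XA + Y * XB, -p,          -Y * p,        -1.0   ],
        [-XA * m,          1 + XA,      0.0,          -p     ],
        [-Y * XB * m,      0.0,         Y * (1 + XB), -Y * p ],
        [-1.0,            -XA * m,     -Y * XB * m,    2 + Y ]])

def scgf(z):
    # Minimum eigenvalue of the modified generator.
    return np.linalg.eigvals(L_mech(z)).real.min()

for z in (0.1 * E, 0.3 * E, 0.45 * E):
    print(scgf(z), scgf(E - z))   # the pairs coincide
\end{verbatim}
The entropy curve of Fig.~\ref{legendremechanical} is obtained in the same way from the modified generator (\ref{entropymolecular}).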
\headline{Restricted solid on solid growth model in a four sites lattice}
The second example is a restricted solid-on-solid (RSOS) model for interface growth~\cite{barato10}. In this model the interface configuration is described by height variables $h_i\in\mathbb{Z}$ residing on the sites $i$ of a one-dimensional lattice. Particles are deposited everywhere with rate $q$ while they evaporate with rate~$1$ from the edges and with rate $p$ from the interior of plateaus, provided that the restriction $|h_i-h_{i \pm 1}|\leq 1$ is not violated (see left panel of Fig.~\ref{rsos}). On an infinite lattice one observes that the interface roughens according to the predictions of the Kardar-Parisi-Zhang universality class \cite{KPZ}.
\begin{figure}[t]
\centering\includegraphics[width=155mm]{./fig5}
\caption{Restricted solid on solid model. Left: Dynamic rules for (a) deposition and (b) evaporation. Right: Transition network of the model with four sites and periodic boundary conditions (see text).}
\label{rsos}
\end{figure}
Let us now consider a small system with $L=4$ sites and periodic boundary conditions. Moreover, let us identify all configurations which differ only by a translation either in space or in height direction, so that the interface can be in six possible configurations. By doing that, it can be demonstrated that the network of states takes the form shown in the right panel of Fig.~\ref{rsos} (see \cite{barato10} for details). This network belongs to the class of networks studied in the previous section. Moreover, one can easily show that the escape rates $\lambda(\conf_B^3)=\lambda(\conf_A^3)=2q+2$ in column $3$ are constant and that the rates across column $1$ obey the condition (\ref{Fkconst}),
\begin{equation}
\frac{\rate_{\conf_0\to \conf_1^A}\rate_{\conf_1^A\to \conf_2}}{\rate_{\conf_2\to \conf_1^A}\rate_{\conf_1^A\to \conf_0}}=\frac{\rate_{\conf_0\to \conf_1^B}\rate_{\conf_1^B\to \conf_2}}{\rate_{\conf_2\to \conf_1^B}\rate_{\conf_1^B\to \conf_0}}=\frac{q^2}{p}\,.
\end{equation}
Hence, as explained in the previous section, the process fulfills the constraint (\ref{consym}).
The most important time-integrated current of the form (\ref{ourcurrent}) with a clear physical meaning is the total interface height which increases (decreases) by one whenever a deposition (evaporation) happens. The corresponding modified generator is given by
\begin{equation}
\hat{\mathcal{L}}_h(z)=
\left(\begin{array}{cccccc}
2q+p+2 & -4pe^{z} & -e^{z} & 0 & -2qe^{-z} & -2qe^{-z} \\
-qe^{-z} & 4p+4q & 0 & -e^{z} & 0 & 0 \\
-qe^{-z} & 0 & q+1 & -pe^{z} & 0 & 0 \\
0 & -4qe^{-z} & -qe^{-z} & 1+p+3q & -2e^{z} & -2e^{z}\\
-2e^{z} & 0 & 0 & -2qe^{-z} & +2+2q & 0 \\
-pe^{z} & 0 & 0 & -qe^{-z} & 0 & 2+2q
\end{array}\right)
\label{matrixheightRSOS}
\end{equation}
with the factor $E=\frac{1}{4}\ln\frac{3q^4}{2p+p^2}$, given by relation (\ref{eqsymmetricfactor}). Again its lowest eigenvalue $\hat I_h(z)$ has to be compared with the lowest eigenvalue $\hat I_s(z)$ of the corresponding modified generator for the entropy current, which reads
\begin{equation}
\hat{\mathcal{L}}_s(z)=
\left(\begin{array}{cccccc}
2q+p+2 \,&\, -q^z(4p)^{1-z} \,&\, -q^z \,&\, 0 \,&\, -2q^{1-z} \,&\, -(2q)^{1-z}p^z \\
-q^{1-z}(4p)^{z} \,&\, 4p+4q \,&\, 0 \,&\, -(4q)^z \,&\, 0 \,&\, 0 \\
-q^{1-z} \,&\, 0 \,&\, q+1 \,&\, -q^zp^{1-z} \,&\, 0 \,&\, 0 \\
0 \,&\, -(4q)^{1-z} \,&\, -q^{1-z}p^z \,&\, 1+p+3q \,&\, -2q^{z} \,&\, -2^{1-z}q^z\\
-2q^{z} \,&\, 0 \,&\, 0 \,&\, -2q^{1-z} \,&\, 2+2q \,&\, 0 \\
-(2q)^zp^{1-z} \,&\, 0 \,&\, 0 \,&\, -2^zq^{1-z} \,&\, 0 \,&\, 2+2q
\end{array}\right)
\label{matrixentropyRSOS}\,.
\end{equation}
Plotting the scaled cumulant generating functions in Fig. \ref{legendreRSOS} we can see that both are symmetric and not proportional.
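As for the previous example, this can be checked with a few lines of code; the sketch below (ours, with the parameters of Fig.~\ref{legendreRSOS}) evaluates the minimum eigenvalue of (\ref{matrixheightRSOS}) and verifies the symmetry $\hat{I}_h(z)=\hat{I}_h(E-z)$:
\begin{verbatim}
import numpy as np

p, q = 0.02, 10.0                      # rates of Fig. 6
E = 0.25 * np.log(3 * q**4 / (2 * p + p**2))

def L_height(z):
    # Modified generator for the height current of the L = 4 RSOS model
    # (same basis and entries as the matrix quoted above).
    u, d = np.exp(z), np.exp(-z)       # evaporation (+1) / deposition (-1) tilts
    return np.array([
        [2*q + p + 2, -4*p*u,    -u,      0,           -2*q*d,   -2*q*d  ],
        [-q*d,         4*p + 4*q, 0,     -u,            0,        0      ],
        [-q*d,         0,         q + 1, -p*u,          0,        0      ],
        [0,           -4*q*d,    -q*d,    1 + p + 3*q, -2*u,     -2*u    ],
        [-2*u,         0,         0,     -2*q*d,        2 + 2*q,  0      ],
        [-p*u,         0,         0,     -q*d,          0,        2 + 2*q]])

def scgf(z):
    return np.linalg.eigvals(L_height(z)).real.min()

for z in (0.1 * E, 0.3 * E, 0.45 * E):
    print(scgf(z), scgf(E - z))        # the pairs coincide
\end{verbatim}
Only the minimum eigenvalue is needed, so a dense eigenvalue solver on the $6\times 6$ matrix suffices.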
\begin{figure}[t]
\centering\includegraphics[width=80mm]{./fig6}
\caption{Scaled cumulant generating functions for the RSOS model for $p=0.02$ and $q=10$. The red line corresponds to the height (with $z\to z/E$) and the black line to the entropy; they are obtained from the minimum eigenvalue of (\ref{matrixheightRSOS}) and (\ref{matrixentropyRSOS}), respectively.}
\label{legendreRSOS}
\end{figure}
Unfortunately, for $L>4$ sites the height current is no longer symmetric because in this case the network of transitions is not of the form shown in Fig.~\ref{networkgeneral}. In the next example, we introduce a model where the space of states is of the form displayed in Fig. \ref{networkgeneral} for any system size.
\headline{Growth process with instantaneous monolayer completion}
As a third example, let us define a growth process with deposition and evaporation which has the special property that after a nucleation the actual monolayer is completed on a very short time scale. The model is defined on a one-dimensional lattice with particles of species $\alpha$ and $\beta$, assuming that only $\alpha$ particles can be deposited on top of $\beta$ particles and vice versa (see Fig. \ref{growthmodelrules}). Once a particle is deposited with rate $d_i^{\alpha,\beta}$ at site $i$ on a flat surface, the subsequent event is either the completion of the layer or the evaporation of the particle with rate $e_i^{\alpha,\beta}$. This describes a limit in which a monolayer is completed almost instantaneously after the first deposition. The transition rate for the completion of a layer is $1$, and the transition rate for the reversed event, that is, the evaporation of the $L-1$ particles of a monolayer, is $\epsilon$.
\begin{figure}[t]
\centering\includegraphics[width=90mm]{./fig7}
\caption{Possible cycle of transitions for the case $L=4$. The $\alpha$ particles are blue and the $\beta$ particles red. The left-most configuration is a flat interface with $\beta$ particles.
}
\label{growthmodelrules}
\end{figure}
The network of configurations of this model is of the form shown in Fig. \ref{networkgeneral} with $Q=4$, where $\conf_0$ and $\conf_2$ correspond to flat interface configurations while the columns $\conf_1$ and $\conf_3$ have $L$ states, each corresponding to a single particle nucleated at one of the $L$ sites. In this model the physically relevant time-integrated current of the form (\ref{ourcurrent}) is the interface height, which increases by $1/L$ when a particle is deposited on a flat interface, and by $(L-1)/L$ whenever a monolayer is completed.
\begin{figure}[b]
\centering\includegraphics[width=80mm]{./fig8}
\caption{Scaled cumulant generating functions for the growth model with fast monolayer completion. We used $L=3$, $d^\alpha_1=1$, $d^\alpha_2=2$, $d^\alpha_3=3$, $d^\beta_1=2$, $d^\beta_2=2$, $d^\beta_3=5$, $e^\alpha_1=e^\alpha_2=e^\alpha_3=2$, $e^\beta_1=e^\beta_2=e^\beta_3=3$, and $\epsilon=0.05$. The black line is related to entropy and the red line to height.
}
\label{legendreDavid}
\end{figure}
The products of rates on a forward and a backward cycle through $\conf_1^i$ and $\conf_3^j$ are $T_{ij}= d^\alpha_id^\beta_j$ and $\overline{T}_{ij}= \epsilon^{2}e^\alpha_ie^\beta_j$, while the escape rates are $\lambda(\conf_1^i)=\lambda_i^\alpha= 1+e_i^\alpha$ and $\lambda(\conf_3^i)=\lambda_i^\beta= 1+e_i^\beta$. Hence, the polynomials (\ref{fandfbar}) with degree $2L-2$ are given by
\begin{eqnarray}
f(x)&=&\sum_{i=1}^{L}\sum_{j=1}^{L}d_i^{\alpha}d_j^{\beta}\prod_{l\neq i}(x-e_l^{\alpha}-1)\prod_{m\neq j}(x-e_m^{\beta}-1)\\
\overline{f}(x)&=&\epsilon^2\sum_{i=1}^{L}\sum_{j=1}^{L}e_i^{\alpha}e_j^{\beta}\prod_{l\neq i}(x-e_l^{\alpha}-1)\prod_{m\neq j}(x-e_m^{\beta}-1)
\end{eqnarray}
and for the factor (\ref{eqsymmetricfactor}) we get
\begin{equation}
E= \ln\frac{\sum_{i,j=1}^{L}d_i^{\alpha}d_j^{\beta}}{\epsilon^{2}\sum_{i,j=1}^{L}e_i^{\alpha}e_j^{\beta}}.
\label{Eheight}
\end{equation}
The constraints (\ref{consym}) take the form
\begin{equation}
\frac{\sum_{i,j=1}^{L}d_i^{\alpha}d_j^{\beta}}{\epsilon^{2}\sum_{i,j=1}^{L}e_i^{\alpha}e_j^{\beta}}=
\frac{\sum_{i,j=1}^{L}d_i^{\alpha}d_j^{\beta}\sigma_k\left[\lambda_1^\alpha,\ldots,\lambda_{i-1}^\alpha,\lambda_{i+1}^\alpha,\ldots,\lambda_L^\alpha,\lambda_1^\beta,\ldots,\lambda_{j-1}^\beta,\lambda_{j+1}^\beta,\ldots,\lambda_L^\beta\right]}{\epsilon^{2}\sum_{i,j=1}^{L}e_i^{\alpha}e_j^{\beta}\sigma_k\left[\lambda_1^\alpha,\ldots,\lambda_{i-1}^\alpha,\lambda_{i+1}^\alpha,\ldots,\lambda_L^\alpha,\lambda_1^\beta,\ldots,\lambda_{j-1}^\beta,\lambda_{j+1}^\beta,\ldots,\lambda_L^\beta\right]}
\label{constraintDavid},
\end{equation}
where $k=1,2,\ldots,2L-2$. Whenever the deposition and evaporation rates fulfill the above equation, the probability distribution of the velocity will be symmetric with respect to (\ref{Eheight}). Again, this symmetry is generally different from the GCEM symmetry. As an example, in Fig.~\ref{legendreDavid} we plot the scaled cumulant generating functions for the entropy and the height for a system with $L=3$ sites, where the transition rates are chosen in such a way that the escape rate is constant in columns $\conf_1$ and $\conf_3$, so that relation (\ref{constraintDavid}) is satisfied.
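The constraints (\ref{constraintDavid}) are easy to check numerically for the rates used in Fig.~\ref{legendreDavid}. The sketch below (a plain numerical illustration; the elementary symmetric polynomials are evaluated by brute force) confirms that, because the escape rates are constant within each column, the factors $\sigma_k$ cancel and both sides coincide for every $k$.
\begin{verbatim}
import itertools, math

def sigma(k, values):
    # elementary symmetric polynomial sigma_k of the given values
    return sum(math.prod(c) for c in itertools.combinations(values, k))

L, eps = 3, 0.05                        # parameters of Fig. (legendreDavid)
d_a, d_b = [1, 2, 3], [2, 2, 5]
e_a, e_b = [2, 2, 2], [3, 3, 3]
lam_a = [1 + e for e in e_a]            # escape rates lambda_i^alpha = 1 + e_i^alpha
lam_b = [1 + e for e in e_b]            # escape rates lambda_i^beta  = 1 + e_i^beta

lhs = sum(d_a[i]*d_b[j] for i in range(L) for j in range(L)) \
      / (eps**2 * sum(e_a[i]*e_b[j] for i in range(L) for j in range(L)))

for k in range(1, 2*L - 1):             # k = 1, ..., 2L-2
    num = den = 0.0
    for i in range(L):
        for j in range(L):
            rest = [lam_a[l] for l in range(L) if l != i] \
                 + [lam_b[m] for m in range(L) if m != j]
            num += d_a[i]*d_b[j] * sigma(k, rest)
            den += eps**2 * e_a[i]*e_b[j] * sigma(k, rest)
    print(k, lhs, num/den)              # the two values agree for every k
\end{verbatim}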
\section{On the origin of the symmetry}
In the following we demonstrate that the new symmetry we proved here also comes from time-reversal. However, in order to see the symmetry we have to consider a functional of a group of trajectories instead of a single trajectory. We will restrict our analysis to the four-state system shown in Fig.~\ref{network4state}, but we expect the same kind of proof to be valid for general networks of the form shown in Fig.~\ref{networkgeneral}. Before going into detail, we provide a simple demonstration of the GCEM symmetry, where it becomes clear that its physical origin is related to time-reversal.
\headline{A simple demonstration of the fluctuation theorem}
As in Sect.~2, we consider a stochastic trajectory with $M$ jumps in the time interval $[t_0,t_f]$, where the jump from $\conf(t_{i})$ to $\conf(t_{i+1})$ takes place at time $t_{i+1}$. We denote this trajectory by $\overrightarrow{C}_{M,t}$ and its weight is given by
\begin{equation}
W[\overrightarrow{C}_{M,t}]= \exp[-\lambda(\conf(t_M))(t_f-t_M)]\prod_{i=0}^{M-1}\rate_{\conf(t_{i})\to \conf(t_{i+1})}\exp[-\lambda(\conf(t_i))(t_{i+1}-t_i)],
\end{equation}
where $\lambda(\conf)$ is the escape rate from state $\conf$. We note that we should also multiply the weight by the probability distribution of the initial state, but we are considering a uniform initial distribution of states. The reversed trajectory, where the system starts at state $\conf(t_M)$ at time $t_f$ (again with the uniform distribution) and jumps from state $\conf(t_{i+1})$ to state $\conf(t_{i})$ at time $t_f+t_0-t_{i+1}$, is denoted by $\overleftarrow{C}_{M,t_f+t_0-t}$. The weight of this trajectory is written as
\begin{equation}
W[\overleftarrow{C}_{M,t_f+t_0-t}]= \exp[-\lambda(\conf(t_M))(t_f-t_M)]\prod_{i=0}^{M-1}\rate_{\conf(t_{i+1})\to \conf(t_{i})}\exp[-\lambda(\conf(t_i))(t_{i+1}-t_i)].
\end{equation}
The entropy current (\ref{entropy}) is related to the weight of a trajectory divided by the weight of the time-reversed trajectory by
\begin{equation}
\exp(-J_s[\overrightarrow{C}_{M,t}])= \frac{W[\overleftarrow{C}_{M,t_f+t_0-t}]}{W[\overrightarrow{C}_{M,t}]}.
\label{entropypath}
\end{equation}
From (\ref{entropypath}) it follows that
\begin{equation}
W[\overrightarrow{C}_{M,t}]\exp(-X)\delta(J_s[\overrightarrow{C}_{M,t}]-X)= W[\overleftarrow{C}_{M,t_f+t_0-t}]\delta(J_s[\overrightarrow{C}_{M,t}]-X)\,.
\end{equation}
Summing over all possible trajectories and using $J_s[\overrightarrow{C}_{M,t}]=-J_s[\overleftarrow{C}_{M,t_f+t_0-t}]$ we obtain the fluctuation theorem (\ref{GCEMforlarge}), i.e.,
\begin{equation}
\frac{P(J_s=-X)}{P(J_s=X)}=\exp(-X).
\end{equation}
This demonstrates that the GCEM symmetry is a direct consequence of the fact that the entropy is the logarithm of the weight of a trajectory divided by the weight of the time-reversed trajectory.
\headline{Time-reversal of a group of trajectories}
We now consider a Markov jump process with the network of states given in Fig.~\ref{network4state}. In the following we will denote the states $\conf_A$ and $\conf_B$ by $\conf_1^A$ and $\conf_1^B$, respectively. Here, instead of considering one stochastic trajectory, we consider a certain group of trajectories (or class of trajectories), which can be defined as follows. Two trajectories belong to the same class if they have the same number of jumps taking place at the same times $t_i$ and follow the same sequence of columns. For example, the trajectories $\conf_0\to \conf_1^A\to \conf_2\to \conf_0$ and $\conf_0\to \conf_1^B\to \conf_2\to \conf_0$ belong to the same class. More generally, a trajectory that goes through column $1$ at $K$ different times belongs to a class with $2^K$ trajectories.
We denote this group of trajectories by $\{\overrightarrow{C}_{M,t}\}$ and its weight, which is the sum of the weights of each trajectory in the group, by $R[\{\overrightarrow{C}_{M,t}\}]$. We define a quantity $\tilde{J}$ such that
\begin{equation}
\exp(-{\tilde{J}}[\{\overrightarrow{C}_{M,t}\}])= \frac{R[\{\overleftarrow{C}_{M,t_f+t_0-t}\}]}{R[\{\overrightarrow{C}_{M,t}\}]},
\label{current2}
\end{equation}
where $R[\{\overleftarrow{C}_{M,t_f+t_0-t}\}]$ is the sum of the weights of the reversed trajectories in the group, and we are using a tilde to denote functionals of the group of trajectories. Note that, unlike the entropy, the current $\tilde{J}$ is a functional of a group of trajectories. What we show next is that in the case $\lambda_1=\lambda(\conf_1^A)=\lambda(\conf_1^B)$ (which is the sufficient condition (\ref{constressimple}) for the symmetry), the functional $\tilde{J}_r$, which is generated by the current $J_r$ in the way explained below, is equal to $\tilde{J}$, provided the increments $\theta_k$ defined in (\ref{ourcurrent}) are chosen appropriately.
The difference between a current that is a functional of the stochastic path $\overrightarrow{C}_{M,t}$ and a current that is a functional of the group of paths $\{\overrightarrow{C}_{M,t}\}$ lies in the increment when there is a jump to a state in column $1$. For the second kind of current the increment is as follows. Let us consider a trajectory which at time $t_{i-1}$ hops into a state of column $1$ and stays there during the time interval $t_i - t_{i-1}$. The contribution to the current $\tilde{J}[\{\overrightarrow{C}_{M,t}\}]$ is given by
\begin{equation}
\ln\frac{\rate_{\conf(t_{i-1})\to \conf_1^A}\rate_{\conf_1^A\to \conf(t_{i+1})}\exp[-\lambda(\conf_1^A)(t_{i}-t_{i-1})]+\rate_{\conf(t_{i-1})\to \conf_1^B}\rate_{\conf_1^B\to \conf(t_{i+1})}\exp[-\lambda(\conf_1^B)(t_{i}-t_{i-1})]}{\rate_{\conf(t_{i+1})\to \conf_1^A}\rate_{\conf_1^A\to \conf(t_{i-1})}\exp[-\lambda(\conf_1^A)(t_{i}-t_{i-1})]+\rate_{\conf(t_{i+1})\to \conf_1^B}\rate_{\conf_1^B\to \conf(t_{i-1})}\exp[-\lambda(\conf_1^B)(t_i-t_{i-1})]}.
\end{equation}
If the escape rate is constant at column $1$ the above term becomes
\begin{equation}
\ln\frac{\rate_{\conf(t_{i-1})\to \conf_1^A}\rate_{\conf_1^A\to \conf(t_{i+1})}+\rate_{\conf(t_{i-1})\to \conf_1^B}\rate_{\conf_1^B\to \conf(t_{i+1})}}{\rate_{\conf(t_{i+1})\to \conf_1^A}\rate_{\conf_1^A\to \conf(t_{i-1})}+\rate_{\conf(t_{i+1})\to \conf_1^B}\rate_{\conf_1^B\to \conf(t_{i-1})}}.
\end{equation}
Note that the same relation holds if relation (\ref{Fkconst}) is valid at column $1$. The fact that the escape rates are constant simplifies the situation considerably: in this case the dependence on the time interval $t_i-t_{i-1}$ disappears. From now on we assume that the escape rates are constant at column $1$.
The current $J_r$ is a functional that is invariant within the class of trajectories, i.e., all the trajectories in the same group have the same value of $J_r$. This important property comes from the fact that the current (\ref{ourcurrent}) is defined in a way such that it does not discriminate between different states in the same column. Hence, $J_r$ induces a current $\tilde{J}_r$ defined on a class of trajectories by the relation
\begin{equation}
\tilde{J}_r[\{\overrightarrow{C}_{M,t}\}]=J_r[ \overrightarrow{C}_{M,t}] \,\,\mbox{for}\,\, \overrightarrow{C}_{M,t}\in\{\overrightarrow{C}_{M,t}\}.
\label{defJt}
\end{equation}
If we consider the current $J_r$ with the increments
\begin{equation}
\theta_0=\theta_1=\frac{1}{2}\ln\frac{\rate_{\conf_0\to \conf_1^A}\rate_{\conf_1^A\to \conf_2}+\rate_{\conf_0\to \conf_1^B}\rate_{\conf_1^B\to \conf_2}}{\rate_{\conf_2\to \conf_1^A}\rate_{\conf_1^A\to \conf_0}+\rate_{\conf_2\to \conf_1^B}\rate_{\conf_1^B\to \conf_0}}\,,\qquad
\theta_2= \ln\frac{\rate_{\conf_2\to \conf_0}}{\rate_{\conf_0\to \conf_2}},
\label{incre}
\end{equation}
then it is clear that $\tilde{J}_r[\{\overrightarrow{C}_{M,t}\}]$ is equal to the current given in (\ref{current2}). It then follows that
\begin{equation}
R[\{\overrightarrow{C}_{M,t}\}]\exp(-X)\delta(\tilde{J}_r[\{\overrightarrow{C}_{M,t}\}]-X)= R[\{\overleftarrow{C}_{M,t_f+t_0-t}\}]\delta(\tilde{J}_r[\{\overrightarrow{C}_{M,t}\}]-X)\,.
\end{equation}
Now summing over all possible groups of trajectories we get $\frac{P(\tilde{J}_r=-X)}{P(\tilde{J}_r=X)}=\exp(-X)$. Since relation (\ref{defJt}) gives
$P(\tilde{J}_r=X)=P(J_r=X)$ we finally obtain
\begin{equation}
\frac{P(J_r=-X)}{P(J_r=X)}=\exp(-X).
\end{equation}
This last relation implies the symmetry for the current $J_r$ with the choice of increments (\ref{incre}). However, as we proved in (\ref{1cld}), at large times all the currents $J_r$ are proportional for any choice of increments $\theta_k$. Therefore, this argument shows that the symmetry of the current $J_r$ also comes from time-reversal, but at a more coarse-grained level. We point out that functionals of a group of paths have also been considered in \cite{rahav}, although in a rather different context.
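As a concrete sanity check of the identification of $\tilde{J}_r$ with (\ref{current2}), the following sketch enumerates one class of trajectories on a four-state network (our assumed reading of Fig.~\ref{network4state}: states $\conf_0,\conf_1^A,\conf_1^B,\conf_2$ with transitions $\conf_0\leftrightarrow\conf_1^{A,B}\leftrightarrow\conf_2$ and $\conf_2\leftrightarrow\conf_0$, as suggested by the increments (\ref{incre})), with equal escape rates at column $1$. Because the waiting-time factors are then common to all members of a class and to its time reverse, they cancel in the ratio and only the rate products matter. We also assume that $J_r$ gains $+\theta_k$ on a forward jump from column $k$ to column $k+1$ and $-\theta_k$ on the reverse jump.
\begin{verbatim}
import itertools, math

# transition rates; the two column-1 states have equal escape rates (1.0+2.0 = 2.5+0.5)
w = {('0','1A'): 0.7, ('0','1B'): 1.3,
     ('1A','0'): 1.0, ('1A','2'): 2.0,
     ('1B','0'): 2.5, ('1B','2'): 0.5,
     ('2','1A'): 0.4, ('2','1B'): 1.1,
     ('2','0'): 0.6,  ('0','2'): 0.9}

# increments of eq. (incre)
theta01 = 0.5*math.log((w['0','1A']*w['1A','2'] + w['0','1B']*w['1B','2'])
                       / (w['2','1A']*w['1A','0'] + w['2','1B']*w['1B','0']))
theta2 = math.log(w['2','0'] / w['0','2'])
incr = {('0','1'): theta01, ('1','2'): theta01, ('2','0'): theta2}

columns = ['0', '1', '2', '0', '1', '2', '0']        # the column sequence of the class

# J_r is the same for every member of the class
J_r = sum(incr[a, b] if (a, b) in incr else -incr[b, a]
          for a, b in zip(columns, columns[1:]))

# sum of rate products over the 2^K members (waiting-time factors cancel)
K = columns.count('1')
R_fwd = R_rev = 0.0
for choice in itertools.product('AB', repeat=K):
    it = iter(choice)
    states = [c if c != '1' else '1' + next(it) for c in columns]
    R_fwd += math.prod(w[a, b] for a, b in zip(states, states[1:]))
    R_rev += math.prod(w[b, a] for a, b in zip(states, states[1:]))

print(math.exp(-J_r), R_rev / R_fwd)                 # the two numbers coincide
\end{verbatim}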
We believe that the same demonstration can be extended to the more general network of Fig. \ref{networkgeneral}. However, formalizing the proof for this more general case leads to much more cumbersome formulas. We intend to address this problem in future work.
\section{Conclusion}
In this paper we have investigated a new symmetry of time-integrated currents which is generally different from the GCEM-symmetry of the entropy current in the long time limit. It is valid for a very restricted class of Markov processes in the sense that they have the very peculiar network of configurations shown in Fig. \ref{networkgeneral}. Moreover, the symmetry appears only when the transition rates fulfill a certain set of constraints. Nevertheless, to our knowledge, this is the only case where a current not proportional to the entropy in the large deviation regime displays a symmetric large deviation function.
We showed three physical examples where this current is a relevant physical observable. We considered a toy molecular model, where the mechanical work has a symmetric large deviation function different from the one for the entropy. Moreover, we analyzed two growth models where the height displays such a symmetry.
As in the case of the GCEM symmetry, the origin of our symmetry seems to be related to time-reversal. However, as we showed in Sect.~5 for a $4$-state system, the symmetry becomes clear only when we consider a functional of a group of stochastic trajectories. That is, the symmetric current, as a functional of the group of trajectories, is given by the logarithm of the weight of the group of trajectories divided by the weight of the time-reversed group. We demonstrated the symmetry with this grouping-of-trajectories argument only for the specific $4$-state system, but we expect the same kind of proof to also be valid for the whole class of networks considered here.
A natural extension of this work is to look for other currents, in a more general space of states, with a symmetric large deviation function. This can be done by looking for the conditions that the transition rates have to fulfill in order for the characteristic polynomial of the modified generator to be symmetric. We expect that whenever this characteristic polynomial is symmetric, the origin of the symmetry should be related to time-reversal (of a trajectory or of a group of trajectories). One might speculate about a situation where the characteristic polynomial of the modified generator is not symmetric, but its minimum eigenvalue is still symmetric. In this case the origin of the symmetry might also be related to time-reversal, but of some most probable stochastic trajectory dominating a sum over different trajectories. Ultimately, it would be of great theoretical interest for nonequilibrium statistical physics to find out which time-integrated currents have a symmetric large deviation function and what the origin of these symmetries might be.
\begin{acknowledgements}
The support of the Israel Science Foundation (ISF) is gratefully acknowledged. ACB and RC thank the Weizmann Institute of Science for hospitality.
\end{acknowledgements}
\section{Binary hypothesis testing}
\paragraph{Matching algorithm}
We consider an algorithm that gives us a `matching' \(\hat{m}\subseteq\calU\times\calV\) that is not necessarily bijective, i.e.\ any entry can have multiple matches in the other dataset.
Recall that we denote the \(j\)-th row of a matrix \(\dul{z}\) by \(\ul{z}_{j*}\).
Given some \(\dul{a}\in\mathbb{R}^{\calU\times[d_1]}\) and \(\dul{b}\in\mathbb{R}^{\calV\times[d_2]}\) and \(f=\bpth{\dul{a},\dul{b}}\) the estimated `matching' is given by
\begin{align*}
\hat{m}(f)= \acc{(u,v)\in\calU\times\calV\Big|(\ul{a}_{u*}^\top,\ul{b}_{v*}^\top)\in H_\tau }.
\end{align*}
Here \(H_\tau\) is the acceptance region of the log-likelihood ratio test, given by
\begin{align*}
H_\tau = \acc{(\ul{x},\ul{y})\in\mathbb{R}^d\times\mathbb{R}^d\Big|\log\frac{p_{\ul{X}\ul{Y}}(\ul{x},\ul{y})}{p_{\ul{X}}(\ul{x})p_{\ul{Y}}(\ul{y})}\geq \tau}
\end{align*}
where \(p_{\ul{X}}\) and \(p_{\ul{Y}}\) denote the probability density functions of feature vectors associated with identifiers in \(\calU\) and \(\calV\) respectively, and \(\tau\in\mathbb{R}\) is some constant to be determined.
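Under the canonical setting of \hyperref[sec:featureTransformation]{Appendix \ref*{sec:featureTransformation}} (\(\dul{\Sigma}_{\textrm{a}}=\dul{\Sigma}_{\textrm{b}}=\dul{I}\), \(\dul{\Sigma}_{\textrm{ab}}=\operatorname{diag}(\ul{\rho})\)), the log ratio has an explicit per-coordinate form (see Lemma \ref{lemma:variance}). The following sketch is only meant to illustrate how the test is evaluated, with an arbitrary threshold and arbitrary correlations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
rho = np.array([0.9, 0.7, 0.5])          # canonical correlations (illustrative)
d, tau = rho.size, 1.0

def log_ratio(x, y):
    # log p_XY(x,y) - log p_X(x) - log p_Y(y) in the canonical Gaussian model
    return np.sum(-0.5*np.log(1 - rho**2)
                  - (rho**2*(x**2 + y**2) - 2*rho*x*y) / (2*(1 - rho**2)))

x = rng.standard_normal(d)
y = rho*x + np.sqrt(1 - rho**2)*rng.standard_normal(d)   # a matched pair
x2, y2 = rng.standard_normal(d), rng.standard_normal(d)  # an unmatched pair

print(log_ratio(x, y) >= tau, log_ratio(x2, y2) >= tau)  # typically True, False
\end{verbatim}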
\subsection{Achievability analysis}
\bcomment{
We assume the special problem setting with \(\dul{\Sigma}_{\textrm{a}}=\dul{\Sigma}_{\textrm{b}}=\dul{I}^d\) and \(\dul{\Sigma}_{\textrm{ab}}=\operatorname{diag}(\ul{\rho})\) where \(-1<\rho_i<1\) for all \(i\in[d]\) and establish sufficient conditions on the mutual information \(I_{\ul{X}\ul{Y}}\) between feature pairs. Under these assumptions the probability density functions of feature vectors from both sides are identical, therefore we denote both simply by \(q(\ul{z}) = p_{\ul{X}}(\ul{z})=p_{\ul{Y}}(\ul{z})\).
}
In our analysis we establish upper and lower bounds on the threshold \(\tau\) that guarantee the desired probability bounds on false negatives and false positives.
The mean and variance of the log ratio random variable were computed in Section~\ref{section:corr}.
Using these values we get an upper bound on the probability of false negatives in Lemma \ref{lemma:FN} by the Chebyshev inequality. Lemma \ref{lemma:FP} gives an upper bound on the number of false positives. Finally, taking the intersection of the conditions on \(\tau\) allows us to derive the achievability result given in Theorem \ref{thm:achLRT}.
\begin{lemma}
\label{lemma:FN}
If \(\tau \leq I_{\ul{X}\ul{Y}} - \sigma_{\ul{X}\ul{Y}}/\sqrt{\varepsilon}\) then
\begin{align*}
\Pr\croc{(\dul{A}_{u*}^\top,\dul{B}_{v*}^\top)\notin H_\tau|(u,v)\in M} \leq \varepsilon.
\end{align*}
\end{lemma}
\begin{proof}
Let \((u,v)\in M\) and \((\ul{X},\ul{Y})=(\dul{A}_{u*}^\top,\dul{B}_{v*}^\top)\). With \(\mu = \E{\log \frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}} = I_{\ul{X}\ul{Y}}\) and \(\sigma^2 = \operatorname{Var}\pth{\log \frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}} = \sigma_{\ul{X}\ul{Y}}^2\), by Chebyshev's inequality we get
\begin{align*}
\Pr\croc{\norm{\mu-\log \frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}}\geq \frac{\sigma}{\sqrt{\varepsilon}}} \leq \varepsilon
\end{align*}
This probability is lower bounded by \(\Pr\left[\mu-\log \frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}\geq \frac{\sigma}{\sqrt{\varepsilon}}\right]\) which is equal to the probability \(\Pr\left[(\dul{A}_{u*}^\top,\dul{B}_{v*}^\top)\notin H_\tau|(u,v)\in M\right]\) for \(\tau = \mu-\sigma/\sqrt{\varepsilon}\). Then this choice of \(\tau\), or any smaller value, is a sufficient condition to bound the error probability by \(\varepsilon\).
\end{proof}
\begin{lemma}
\label{lemma:FP}
Given any \(\tau\in\mathbb{R}\),
\begin{align*}
\Pr\croc{(\dul{A}_{u*}^\top,\dul{B}_{v*}^\top)\in H_\tau|(u,v)\notin M} \leq e^{-\tau}
\end{align*}
\end{lemma}
\begin{proof}
Let \((u,v)\notin M\) and \((\ul{X},\ul{Y})=(\dul{A}_{u*}^\top,\dul{B}_{v*}^\top)\). By Markov's inequality we get
\begin{align*}
\Pr\croc{\log \frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}\geq \tau}
&\leq e^{-\tau}\cdot\E{\frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}}
\end{align*}
We calculate the mean:
\begin{align*}
\E{\frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}}
&= \int_{\ul{X},\ul{Y}} p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})\cdot \frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})} d(\ul{X},\ul{Y})\\
&= \int_{\ul{X},\ul{Y}} p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y}) d(\ul{X},\ul{Y})
\end{align*}
which equals $1$ since \(p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})\) is a probability density function.
\end{proof}
\paragraph{Proof of Theorem \ref{thm:achLRT}}
By Lemma \ref{lemma:FN}, if \(\tau \leq I_{\ul{X}\ul{Y}} - \sigma_{\ul{X}\ul{Y}}/\sqrt{\varepsilon}\), then the probability that any correct match is not included in \(H_\tau\) is upper bounded by \(\varepsilon_{FN}/n\). There are \(n\) correct matches in \(\calU\times\calV\). Then the expected number of correct matches not included in \(H_\tau\), i.e. the expected number of false negatives, is upper bounded by \(\varepsilon_{FN}\).
By Lemma \ref{lemma:FP}, if \(\tau \geq \log\pth{n^2/\varepsilon_{FP}}\), then the probability that any incorrect match is included in \(H_\tau\) is upper bounded by \(\varepsilon_{FP}/n^2\). There are \(n^2-n<n^2\) incorrect matches in \(\calU\times\calV\). Then the expected number of incorrect matches included in \(H_\tau\), i.e. the expected number of false positives, is upper bounded by \(\varepsilon_{FP}\).
A choice for \(\tau\in\mathbb{R}\) satisfying both conditions exists if and only if the condition in the theorem statement holds. \hfill \qedsymbol
\bcomment{
\newtheorem*{tAchLRT}{Theorem \ref*{thm:achLRT}}
\begin{tAchLRT}
\thmAchLRT
\end{tAchLRT}
\begin{proof}
By Lemma \ref{lemma:FN}, if \(\tau \leq I_{\ul{X}\ul{Y}} - \sigma/\sqrt{\varepsilon}\), then the probability that any correct match is not included in \(H_\tau\) is upper bounded by \(\varepsilon_{FN}/n\). There are \(n\) correct matches in \(\calU\times\calV\). Then the expected number of correct matches not included in \(H_\tau\), i.e. the expected number of false negatives, is upper bounded by \(\varepsilon_{FN}\).
By Lemma \ref{lemma:FP}, if \(\tau \geq \log\pth{n^2/\varepsilon_{FP}}\), then the probability that any incorrect match is included in \(H_\tau\) is upper bounded by \(\varepsilon_{FP}/n^2\). There are \(\binom{n}{2}<n^2\) incorrect matches in \(\calU\times\calV\). Then the expected number of incorrect matches included in \(H_\tau\), i.e. the expected number of false positives, is upper bounded by \(\varepsilon_{FP}\).
A choice for \(\tau\in\mathbb{R}\) satisfying both conditions exists if and only if the condition in the theorem statement holds.
\end{proof}
}
\section{MAP estimation}
\paragraph{Matching algorithm}
The maximum a posteriori estimator is the optimal estimator for the exact matching \(M\) given \(F\). Given some realization \(\dul{f}=(\dul{a},\dul{b})\),
\begin{align*}
\hat{m}(\dul{f})
&= \argmax_{m}\,\,\, \Pr\croc{M=m|\dul{F}=\dul{f}}\\
&= \argmax_{m}\,\,\, \frac{p_{\dul{F}|M}(\dul{f}|m)P_{M}(m)}{p_{\dul{F}}(\dul{f})}\\
&\eql{a} \argmax_{m}\,\,\, p_{\dul{F}|M}(\dul{f}|m)
\end{align*}
where $(a)$ follows from the fact that $M$ has a uniform distribution.
\subsection{Achievability analysis}
We establish a sufficient condition on the mutual information \(I_{XY}\) between feature pairs to achieve a perfect alignment. The rest of this section assumes the canonical setting. However, by the equivalence between the general setting and the canonical setting (as shown in \hyperref[sec:featureTransformation]{Appendix \ref*{sec:featureTransformation}}), the result directly applies to the general setting.
Our analysis goes as follows: Lemma \ref{lemma:ChernoffBound} sets an upper bound on the error probability that a given matching is more likely than the actual one. This bound is in the form of a function \(R\) whose explicit value remains to be determined. Lemma \ref{lemmma:bound2determinant} gives an expression of \(R\) that has a decomposition with terms corresponding to each cycle of `mismatchings'. Finally Lemma \ref{lemma:determinant} gives the explicit expression for each of these cycle-terms and Lemma \ref{lemma:bound4determinant} bounds their product by a function whose value only depends on the number of mismatchings. Joining these results gives us the achievability condition in Theorem \ref{thm:achMAP}.
\begin{definition}
Given any pair of bijective matchings \(m_1,m_2\subseteq\calU\times\calV\), define the event
\begin{align*}
\mathcal{E}(m_1,m_2) = \acc{\dul{f}
: p_{\dul{F}|M}(\dul{f}|m_1) \leq p_{\dul{F}|M}(\dul{f}|m_2)}.
\end{align*}
\end{definition}
Notice that given matching \(m=M\), the MAP estimator fails if and only if there exists some matching \(m'\neq m\) such that \(\dul{F}\in \mathcal{E}(m,m')\).
\begin{definition}
Given any pair of bijective matchings \(m_1,m_2\subseteq\calU\times\calV\), define the function
\begin{align*}
R(m_1,m_2) &\triangleq \int \sqrt{p_{\dul{F}|M}(\dul{f}|m_1) p_{\dul{F}|M}(\dul{f}|m_2)} \mathop{d\dul{f}}
\end{align*}
where the integral is over the whole space $\mathbb{R}^{(\calU \sqcup \calV) \times [d]}$.
\end{definition}
\begin{lemma}
\label{lemma:ChernoffBound}
For any pair of bijective matchings \(m_1,m_2\subseteq\calU\times\calV\)
\begin{align*}
\Pr\croc{\dul{F}\in\mathcal{E}(m_1,m_2)|M=m_1} \leq R(m_1,m_2)
\end{align*}
\end{lemma}
\begin{proof}
For any \(\theta\geq0\)
\begin{align*}
\Pr[\dul{F}\in\,\,\mathcal{E}(m_1,m_2)|M=m_1]
&= \E{\ind{\frac{p_{\dul{F}|M}(\dul{f}|m_2)}{p_{\dul{F}|M}(\dul{f}|m_1)}\geq1}\Big|M=m_1}\\
&\leq \int \pth{\frac{p_{\dul{F}|M}(\dul{f}|m_2)}{p_{\dul{F}|M}(\dul{f}|m_1)}}^{\theta} p_{\dul{F}|M}(\dul{f}|m_1) \mathop{d\dul{f}}\\
&= \int (p_{\dul{F}|M}(\dul{f}|m_2))^{\theta} (p_{\dul{F}|M}(\dul{f}|m_1))^{1-\theta}\mathop{d\dul{f}}
\end{align*}
Selecting \(\theta=1/2\) gives the claim.
\end{proof}
\begin{definition}
\label{def:shiftedIdentity}
Define \textit{shifted identity matrices} \(\dul{I}^{(k,+)}\) and \(\dul{I}^{(k,-)}\) of size \(k\) as
\begin{align*}
\dul{I}^{(k,+)}_{i,j} &= \ind{j-i=1 \mod k}\\
\dul{I}^{(k,-)}_{i,j} &= \ind{j-i=-1 \mod k}.
\end{align*}
We simply write \(\dul{I}^{(+)}\) and \(\dul{I}^{(-)}\) when there is no need to specify the size of the matrix.
\label{def:canonicalLaplacian}
For any \(\ell\in\mathbb{N}^+\),
\begin{align*}
\dul{L}^\ell(s,t) \triangleq s\dul{I}^\ell - \frac{t}{2}\pth{\dul{I}^{(\ell,+)}+\dul{I}^{(\ell,-)}},
\end{align*}
where \(s,t\in\mathbb{R}\).
\end{definition}
\begin{lemma}
\label{lemmma:bound2determinant}
Suppose $d_a = d_b =1$ and $\dul{\Sigma} = \crocMat{1}{\rho}{\rho}{1}$.
For bijective matchings \(m_1,m_2\subseteq \calU\times\calV\),
\begin{align*}
R(m_1,m_2) = \pth{1-\rho^2}^{\frac{n}{2}}\prod_{\ell} \croc{\det \dul{L}^\ell\pth{1-\frac{\rho^2}{2},\frac{\rho^2}{2}}}^{-\frac{k_\ell}{2}}
\end{align*}
where \(k_\ell\) is the number of cycles of length \(\ell\) of permutation \(m_1 \circ m_2^\top \subseteq \calU \times \calU\).
\end{lemma}
\vspace{-0.5cm}
\begin{proof}
For a matching \(m \subseteq \calU\times\calV\), let \(\dul{m}\in\{0,1\}^{\calU\times\calV}\) be the indicator matrix for \(m\).
Because $d_a = d_b = 1$, we will treat the databases as vectors $\ul{A} \in \mathbb{R}^{\calU}$ and $\ul{B} \in \mathbb{R}^{\calV}$.
Let $\ul{F} \in \mathbb{R}^{\calU \sqcup \calV}$ be the concatenation of $\ul{A}$ and $\ul{B}$.
Observe that $\dul{\Sigma}^{-1} = \frac{1}{1-\rho^2}\crocMat{1}{-\rho}{-\rho}{1}$.
Then we can write
\begin{align}
p_{\ul{F}|M}((\ul{a},\ul{b})|m) =
\frac{1}{\bpth{2\pi\sqrt{1-\rho^2}}^n}\cdot
\exp\pth{-\frac{1}{2(1-\rho^2)}\crocVec{\ul{a}}{\ul{b}}^\top \crocMat{\dul{I}^{\mathcal{U}}}{-\rho\dul{m}}{-\rho\dul{m}^\top}{\dul{I^{\mathcal{V}}}} \crocVec{\ul{a}}{\ul{b}}}.
\label{f-density}
\end{align}
For compactness, call the matrix that appears in \eqref{f-density} \(\dul{\Sigma}(m)\).
This gives us
\begin{align*}
&\mathrel{\phantom{=}}\pth{p_{\ul{F}|M}\bpth{\ul{f};m_1}p_{\ul{F}|M}\bpth{\ul{f};m_2}}^{\frac{1}{2}}
=\frac{1}{\bpth{2\pi\sqrt{1-\rho^2}}^n}\cdot
\exp\pth{-\frac{\ul{f}^\top \croc{\dul{\Sigma}(m_1)+\dul{\Sigma}(m_2)} \ul{f}}{4(1-\rho^2)}}.
\end{align*}
We obtain $R(m_1,m_2)$ by integrating this expression over the whole space:
\begin{align}
R(m_1,m_2)
&= \int \sqrt{p_{\ul{F}|M}\bpth{\ul{f};m_1}p_{\ul{F}|M}\bpth{\ul{f};m_2}}df \nonumber\\
&=\croc{\frac{\pth{1-\rho^2}^n}{\det\pth{\frac{1}{2}\dul{\Sigma}(m_1)+\frac{1}{2}\dul{\Sigma}(m_2)}}}^{1/2}. \label{eqn:achMap1}
\end{align}
Observe that \(\crocMat{\dul{I}}{\dul{z}}{\dul{z}^\top}{\dul{I}} = \crocMat{\dul{I}}{\dul{0}}{\dul{z}^\top}{\dul{I}}\crocMat{\dul{I}}{\dul{z}}{\dul{0}}{\dul{I}-\dul{z}^\top\dul{z}}\) for any matrix \(\dul{z}\). Then \(\det\crocMat{\dul{I}}{\dul{z}}{\dul{z}^\top}{\dul{I}} = \det\bpth{\dul{I}-\dul{z}^\top\dul{z}}\). Using this relation we have
\begin{align}
\det\pth{\frac{1}{2}\dul{\Sigma}(m_1)+\frac{1}{2}\dul{\Sigma}(m_2)}
&= \det\crocMat{\dul{I}}{-\frac{\rho}{2}\bpth{\dul{m}_1+\dul{m}_2}}{-\frac{\rho}{2}\bpth{\dul{m}_1+\dul{m}_2}^\top}{\dul{I}} \nonumber\\
&= \det\pth{\dul{I}-\frac{\rho^2}{4}\bpth{\dul{m}_1+\dul{m}_2}^\top\bpth{\dul{m}_1+\dul{m}_2}} \nonumber\\
&= \det\pth{\pth{1-\frac{\rho^2}{2}}\dul{I}-\frac{\rho^2}{4}\pth{\dul{m}_1^\top\dul{m}_2+\dul{m}_2^\top\dul{m}_1}}. \label{eqn:achMap2}
\end{align}
Notice that \(\dul{m}_1^\top\dul{m}_2\in\{0,1\}^{\calU\times\calU}\) is the permutation matrix corresponding to permutation \(\pi = m_1\circ m_2^\top\) described in the statement of the lemma. Let \(\mathcal{C}\) be the set of cycles of \(\pi\) and \(\{\ell_c\}_{c\in\mathcal{C}}\) denote their lengths. Consider the cycle notation of this permutation, i.e. \((u_1,u_2,\cdots,u_{\ell_c})(u_1',\cdots,u_{\ell_{c'}}')\cdots\), and specify an ordering of \(\calU\) based on this expression: \(u_1,u_2,\cdots,u_{\ell_c},u_1',\cdots,u_{\ell_{c'}}',\cdots\). Given this ordering of rows and columns, the permutation matrix \(\dul{m}_1^\top\dul{m}_2\) has block diagonal matrix form, with one block for each cycle \(c\in\mathcal{C}\) and every block having the form of a shifted identity matrix \(\dul{I}^{(\ell_c,+)}\). Then \(\dul{m}_2^\top\dul{m}_1 = \bpth{\dul{m}_1^\top\dul{m}_2}^\top\) has the same block diagonal form with the shifted identity matrices \(\dul{I}^{(\ell_c,-)}\), since \(\dul{I}^{(\ell_c,-)} = \bpth{\dul{I}^{(\ell_c,+)}}^\top\).
The determinant of a block diagonal matrix is equal to the product of the determinants of each block. Then we have
\begin{align*}
\det\pth{\pth{1-\frac{\rho^2}{2}}\dul{I}-\frac{\rho^2}{4}\pth{\dul{m}_1^\top\dul{m}_2+\dul{m}_2^\top\dul{m}_1}}
&= \prod_{c\in\mathcal{C}} \det\pth{\pth{1-\frac{\rho^2}{2}}\dul{I} - \frac{\rho^2}{4}\pth{\dul{I}^{(\ell_c,+)}+\dul{I}^{(\ell_c,-)}}}\\
&= \prod_{c\in\mathcal{C}} \det\pth{\dul{L}^{\ell_c}\pth{1-\frac{\rho^2}{2},\frac{\rho^2}{2}}}\\
&= \prod_{\ell\in[n]} \croc{\det\pth{\dul{L}^{\ell}\pth{1-\frac{\rho^2}{2},\frac{\rho^2}{2}}}}^{k_\ell},
\end{align*}
where \(k_\ell\) denotes the number of cycles of length \(\ell\) in the permutation \(\pi\). Combining this with (\ref{eqn:achMap1}) and (\ref{eqn:achMap2}) gives us the claimed result.
\end{proof}
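The cycle decomposition used above is easy to verify numerically. The sketch below (illustration only) draws two random matchings, evaluates the determinant in (\ref{eqn:achMap2}) directly, and compares it with the product over the cycles of \(m_1\circ m_2^\top\).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, rho = 8, 0.6
s, t = 1 - rho**2/2, rho**2/2

def perm_matrix(p):
    m = np.zeros((len(p), len(p)))
    m[np.arange(len(p)), p] = 1.0
    return m

def L_mat(ell):
    # L^ell(s,t) of Definition (def:canonicalLaplacian)
    shift = np.roll(np.eye(ell), 1, axis=1)          # shifted identity I^{(ell,+)}
    return s*np.eye(ell) - (t/2)*(shift + shift.T)

m1, m2 = perm_matrix(rng.permutation(n)), perm_matrix(rng.permutation(n))

direct = np.linalg.det((1 - rho**2/2)*np.eye(n)
                       - (rho**2/4)*(m1.T @ m2 + m2.T @ m1))

pi = (m1.T @ m2).argmax(axis=1)                      # the permutation m1 o m2^T
seen, prod = np.zeros(n, dtype=bool), 1.0
for start in range(n):
    ell, j = 0, start
    while not seen[j]:
        seen[j] = True
        j = pi[j]
        ell += 1
    if ell > 0:
        prod *= np.linalg.det(L_mat(ell))

print(direct, prod)                                  # the two values agree
\end{verbatim}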
\begin{lemma}
\label{lemma:determinant}
For any \(\ell\in\mathbb{N}^+\),
\begin{align*}
\det\pth{\dul{L}^\ell(s,t)} &= \prod_{j\in[\ell]} \croc{s-t\cdot\cos\pth{j\frac{2\pi}{\ell}}}.
\end{align*}
In particular
\begin{align*}
\det\pth{\dul{L}^{1}(s,t)} = s-t \qquad \textrm{ and } \qquad \det\pth{\dul{L}^{2}(s,t)} = s^2-t^2
\end{align*}
\end{lemma}
\begin{proof}
Let \(\ul{z}^{\,k}\in\mathbb{C}^\ell\) denote a family of vectors such that for any \(k\in [\ell]\), \(\ul{z}^{\,k}_j = e^{2\pi i\frac{jk}{\ell}}\), where \(i^2=-1\). Observe that
\begin{align*}
\dul{I}^{(+)}\ul{z}^{\,k} &= e^{2\pi i \frac{k}{\ell}} \ul{z}^{\,k} \qquad\textrm{ and }\qquad
\dul{I}^{(-)}\ul{z}^{\,k} = e^{-2\pi i \frac{k}{\ell}} \ul{z}^{\,k}
\end{align*}
Vectors \(\ul{z}^{\,k}\) are the eigenvectors of \(\dul{L}^\ell(s,t)\):
\begin{align*}
\dul{L}^\ell(s,t)\,\,\ul{z}^{\,k}
&= \croc{s\cdot \dul{I} - \frac{t}{2}\pth{\dul{I}^{(+)}+\dul{I}^{(-)}}}\ul{z}^{\,k}\\
&= \croc{s -\frac{t}{2}\pth{e^{2\pi i \frac{k}{\ell}}+e^{-2\pi i \frac{k}{\ell}}}}\ul{z}^{\,k}\\
&= \croc{s-t\cdot\cos\pth{2\pi\frac{k}{\ell}}}\ul{z}^{\,k}
\end{align*}
We compute the determinant by taking the product of the \(\ell\) eigenvalues (one for each \(k\in[\ell]\)).
\end{proof}
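A quick numerical confirmation of this eigenvalue computation (a sketch with arbitrary \(\ell\), \(s\), \(t\)):
\begin{verbatim}
import numpy as np

ell, s, t = 7, 1.3, 0.4
shift = np.roll(np.eye(ell), 1, axis=1)              # I^{(ell,+)}
L = s*np.eye(ell) - (t/2)*(shift + shift.T)          # L^ell(s,t)
product = np.prod([s - t*np.cos(2*np.pi*j/ell) for j in range(1, ell + 1)])
print(np.linalg.det(L), product)                     # both values coincide
\end{verbatim}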
\begin{lemma}
\label{lemma:bound4determinant}
For any \(\ell\in\mathbb{N}\setminus\{0,1\}\) and \(s,t\in\mathbb{R}\) such that \(s>|t|\),
\begin{align*}
\det\croc{\dul{L}^\ell(s,t)} \geq \pth{\det\croc{\dul{L}^2(s,t)}}^{\ell/2}
\end{align*}
\bcomment{where \(\dul{L}^\ell\) and \(\dul{L}^2\) refer to the matrix valued functions as defined in Definition \ref{def:canonicalLaplacian}}
\end{lemma}
\begin{proof}
First note that, by Lemma \ref{lemma:determinant},
\begin{align*}
\det\croc{\dul{L}^2(s,t)} = s^2-t^2.
\end{align*}
We want to bound the determinant of the matrix \(\dul{L}^\ell(s,t)\), which is equal to the product of its eigenvalues \(\pth{\lambda_j}_{j\in[\ell]}\). The sum of eigenvalues is equal to the trace of the matrix, which is known, since all diagonal elements of \(\dul{L}^{\ell}(s,t)\) equal \(s\) for any \(\ell\geq 2\). So \(\sum \lambda_k = \operatorname{tr}\bpth{\dul{L}^\ell(s,t)} = s\ell\). Furthermore, observe that all eigenvalues are in the range \([s-t,s+t]\). Consider a sequence formed of two copies of each eigenvalue \(\lambda_i\). This sequence has mean \(s\) and has all entries within the range \([s-t,s+t]\). Then, as it is proven in \hyperref[lemma:elementary]{Lemma \ref*{lemma:elementary}},
\begin{align*}
\prod_{j\in[\ell]} \lambda_j^2 \geq \pth{s-t}^\ell \pth{s+t}^\ell
\end{align*}
Taking the square root of both sides results in the claim.
\end{proof}
\newtheorem*{tAchMAP}{Theorem \ref*{thm:achMAP}}
\begin{tAchMAP}
\thmAchMAP
\end{tAchMAP}
\begin{proof}
Recall the canonical setting where \(\dul{\Sigma}_{\textrm{a}}=\dul{\Sigma}_{\textrm{b}}=\dul{I}\) and \(\dul{\Sigma}_{\textrm{ab}}=\operatorname{diag}(\ul{\rho})\).
Let $R_i(m,m')$ be the value of $R(m,m')$ when $\dul{\Sigma} = \crocMat{1}{\ul{\rho}_i}{\ul{\rho}_i}{1}$.
By the union bound
\begin{align*}
\mathrel{\phantom{=}}\Pr\Big[F\in\bigcup_{m'\neq m}\mathcal{E}(m,m')|M=m\Big]
&\leq \sum_{m'\neq m}\Pr\croc{F\in\mathcal{E}(m,m')|M=m}\\
& \stackrel{(a)}{\leq} \sum_{m'\neq m} R(m,m') = \sum_{m'\neq m}\prod_{i\in[d]} R_i(m,m')\\
& \stackrel{(b)}{=} \sum_{m'\neq m}\prod_{i\in[d]} \croc{\frac{\pth{1-\rho_i^2}^n}{\prod_\ell \croc{\det\pth{\dul{L}^\ell\pth{1-\frac{\rho_i^2}{2},\frac{\rho_i^2}{2}}}}^{k_\ell}}}^{\frac{1}{2}}\\
&\stackrel{(c)}{\leq} \sum_{m'\neq m}\prod_{i\in[d]}\croc{\frac{\pth{1-\rho_i^2}^n}{\pth{s-t}^{|m\cap m'|}\pth{s^2-t^2}^{\frac{1}{2}\pth{n-|m\cap m'|}}}}^{\frac{1}{2}}\\
&= \sum_{m'\neq m}\prod_{i\in[d]} \pth{1-\rho_i^2}^{\frac{n-|m\cap m'|}{4}},
\end{align*}
where (a) follows from Lemma \ref{lemma:ChernoffBound}, (b) follows from Lemma \ref{lemmma:bound2determinant}, with \(k_\ell\) denoting the number of cycles of length \(\ell\) in the permutation \(m'\circ m^\top\), and (c) follows from Lemmas \ref{lemma:determinant} and \ref{lemma:bound4determinant}, with \(s=1-\frac{\rho_i^2}{2}\) and \(t=\frac{\rho_i^2}{2}\), which gives us \(s-t = s^2-t^2 = 1-\rho_i^2\).
Given any \(k\in\mathbb{N}\) there are exactly \((!k)\times\binom{n}{k}\) different matchings \(m'\) such that \(k=n-|m\cap m'|\), where \((!k)\) represents the number of derangements over a set of size \(k\); since \(m'\neq m\), only terms with \(k\geq 2\) contribute. We bound \((!k)\times\binom{n}{k}\leq n^k\). Thus
\begin{align*}
\Pr\Big[F\in\bigcup_{m'\neq m}\mathcal{E}(m,m')|M=m\Big]
\leq \sum_{k\geq 2} \,n^k \cdot \prod_{i\in[d]} \pth{1-\rho_i^2}^{k/4}
\end{align*}
If \(n \prod_{i \in [d]} \pth{1-\rho_i^2}^{\frac{1}{4}} = o(1)\), then by summing the geometric series (which starts at \(k=2\)), we see that the above expression is \(o(1)\). Therefore
\begin{align*}
\exp\pth{-I_{XY}} = \prod_{i\in[d]}\pth{1-\rho_i^2}^{\frac{1}{2}} \leq o(1/n^2)
\end{align*}
is a sufficient condition for exact recovery under the canonical setting. Taking the logarithm of both sides gives us the claimed result.
\end{proof}
\section{Transformation of feature vectors}
\label{sec:featureTransformation}
By the invertibility of \(\dul{\Sigma}_{\textrm{a}}\) and \(\dul{\Sigma}_{\textrm{b}}\), there exist Cholesky decompositions with invertible matrices \(\dul{L}_{\textrm{a}}\) and \(\dul{L}_{\textrm{b}}\) such that \(\dul{\Sigma}_{\textrm{a}}=\dul{L}_{\textrm{a}}\dul{L}_{\textrm{a}}^\top\) and \(\dul{\Sigma}_{\textrm{b}} = \dul{L}_{\textrm{b}}\dul{L}_{\textrm{b}}^\top\).
Furthermore let \(\dul{U}\dul{\Sigma}_{\textrm{ab}}'\dul{V}^\top = \dul{L}_{\textrm{a}}^{-1}\dul{\Sigma}_{\textrm{ab}}\bpth{\dul{L}_{\textrm{b}}^\top}^{-1}\) be a singular-value decomposition with \(\dul{U}\) and \(\dul{V}\) unitary matrices and \(\dul{\Sigma}_{\textrm{ab}}'\) a diagonal matrix of size \(d_1\times d_2\).
Finally let \(\ul{\mu}_a\) and \(\ul{\mu}_b\) denote the means of the features from the two datasets.
Let the feature transformations be defined as
\begin{align*}
\ul{X}' = T_a(\ul{X}) &= \bpth{\dul{U}^\top\dul{L}_{\textrm{a}}^{-1}}\bpth{\ul{X}-\ul{\mu}_a}\\
\ul{Y}' = T_b(\ul{Y}) &= \bpth{\dul{V}^\top\dul{L}_{\textrm{b}}^{-1}}\bpth{\ul{Y}-\ul{\mu}_b}.
\end{align*}
It can be verified that
\begin{align*}
\mathbb{E}\crocVec{\ul{X}'}{\ul{Y}'} = \ul{0} \textrm{ and }\dul{\Sigma}'\triangleq \E{\crocVec{\ul{X}'}{\ul{Y}'}\crocVec{\ul{X}'}{\ul{Y}'}^\top} = \crocMat{\dul{I}^{d_1}}{\dul{\Sigma}_{\textrm{ab}}'}{\dul{\Sigma}_{\textrm{ab}}'^\top}{\dul{I}^{d_2}}.
\end{align*}
Both \(\dul{U}^\top\dul{L}_{\textrm{a}}^{-1}\) and \(\dul{V}^\top\dul{L}_{\textrm{b}}^{-1}\) are invertible matrices, therefore \(T_a\) and \(T_b\) are bijective functions with no loss of information.
Next we perform a non-reversible operation. Consider the zero rows and columns of \(\dul{\Sigma}_{\textrm{ab}}'\). These correspond to transformed feature entries that have no correlation with features from the other dataset, hence they provide no information in identifying matching features. Simply dropping these results in no loss of information. \(\dul{\Sigma}_{\textrm{ab}}'\) is a diagonal matrix, therefore the number of features we keep from either dataset is simply equal to the number of non-zero entries in \(\dul{\Sigma}_{\textrm{ab}}'\). Thus the final feature vectors are of equal size \(d\). Let \(\ul{X}''\) and \(\ul{Y}''\) be the final features obtained after throwing away entries in \(\ul{X}'\) and \(\ul{Y}'\), and \(\dul{\Sigma}_{\textrm{ab}}''\in\mathbb{R}^{d\times d}\) be the matrix obtained by removing zero rows and columns from \(\dul{\Sigma}_{\textrm{ab}}'\). Then,
\begin{align*}
\dul{\Sigma}'' \triangleq \E{\crocVec{\ul{X}''}{\ul{Y}''}\crocVec{\ul{X}''}{\ul{Y}''}^\top} = \crocMat{\dul{I}^d}{\dul{\Sigma}_{\textrm{ab}}''}{\dul{\Sigma}_{\textrm{ab}}''^\top}{\dul{I}^d}.
\end{align*}
Let \(\ul{\rho}\) denote the diagonal entries in \(\dul{\Sigma}_{\textrm{ab}}''\).
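The whole transformation takes only a few lines of NumPy; the sketch below (with a randomly generated positive-definite \(\dul{\Sigma}\), purely for illustration) verifies that the transformed pair has identity covariance blocks with \(\operatorname{diag}(\ul{\rho})\) off-diagonal.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d1, d2 = 4, 3

G = rng.standard_normal((d1 + d2, d1 + d2))
Sigma = G @ G.T + (d1 + d2)*np.eye(d1 + d2)          # a valid joint covariance
Sa, Sb, Sab = Sigma[:d1, :d1], Sigma[d1:, d1:], Sigma[:d1, d1:]

La, Lb = np.linalg.cholesky(Sa), np.linalg.cholesky(Sb)
U, svals, Vh = np.linalg.svd(np.linalg.solve(La, Sab) @ np.linalg.inv(Lb).T)

Ta = U.T @ np.linalg.inv(La)                         # X' = Ta (X - mu_a)
Tb = Vh @ np.linalg.inv(Lb)                          # Y' = Tb (Y - mu_b)

Sigma_p = np.block([[Ta @ Sa @ Ta.T,    Ta @ Sab @ Tb.T],
                    [Tb @ Sab.T @ Ta.T, Tb @ Sb @ Tb.T]])
print(np.round(Sigma_p, 6))                          # identity blocks + diag(rho)
print("rho:", svals)                                 # the canonical correlations
\end{verbatim}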
\begin{lemma}
\label{lemma:mutualInformation}
Let
\begin{align*}
\dul{\Sigma} = \crocMat{\dul{\Sigma}_{\textrm{a}}}{\dul{\Sigma}_{\textrm{ab}}}{\dul{\Sigma}_{\textrm{ab}}^\top}{\dul{\Sigma}_{\textrm{b}}} \textrm{ and } \dul{\Sigma}'' = \crocMat{\dul{I}}{\operatorname{diag}(\ul{\rho})}{\operatorname{diag}(\ul{\rho})}{\dul{I}}
\end{align*}
be the covariance matrices of the original features and of the transformed features respectively. The mutual information between matched pairs of original features \((\ul{X},\ul{Y})\) and matched pairs of transformed features \((\ul{X}'',\ul{Y}'')\) is the same and given by
\begin{align*}
I_{XY} &= -\frac{1}{2}\sum_{i\in[d]}\log(1-\rho_i^2)\\
&= -\frac{1}{2}\log \frac{\det\bpth{\dul{\Sigma}}}{\det\bpth{\dul{\Sigma}_{\textrm{a}}}\cdot\det\bpth{\dul{\Sigma}_{\textrm{b}}}}.
\end{align*}
\end{lemma}
\begin{proof}
Our transformation keeps all information from features that exhibit correlation with the other feature list. Therefore the mutual information between feature pairs is the same for the original features and the transformed features.
The final feature vector pairs have mutual information \(-\frac{1}{2}\log(1-\rho_i^2)\) per feature \(i\). Thus we have \(I_{XY} = -\frac{1}{2}\sum_{i\in[d]} \log(1-\rho_i^2) = -\frac{1}{2}\log\det\bpth{\dul{\Sigma}''}\). It can be shown that removing the zero rows/columns of \(\dul{\Sigma}_{\textrm{ab}}'\), which corresponds to removing rows/columns of \(\dul{\Sigma}'\) that are all zeros except for a $1$ on the diagonal, does not change the determinant, so \(\det\bpth{\dul{\Sigma}''} = \det\bpth{\dul{\Sigma}'}\) and \(I_{XY} = -\frac{1}{2}\log\det\bpth{\dul{\Sigma}'}\). Finally we show that \(\det\bpth{\dul{\Sigma}'} = \frac{\det\bpth{\dul{\Sigma}}}{\det\bpth{\dul{\Sigma}_{\textrm{a}}}\cdot\det\bpth{\dul{\Sigma}_{\textrm{b}}}}\).
\begin{align*}
\textrm{We have } \quad \det\bpth{\dul{\Sigma}'} = \det\bpth{\dul{I}-\dul{\Sigma}_{\textrm{ab}}'\dul{\Sigma}_{\textrm{ab}}'^\top} \quad \textrm{ and }
\end{align*}
\vspace{-0.5cm}
\begin{align*}
\bpth{\dul{L}_{\textrm{a}}\dul{U}}\bpth{\dul{I}-\dul{\Sigma}_{\textrm{ab}}'\dul{\Sigma}_{\textrm{ab}}'^\top}\bpth{\dul{L}_{\textrm{a}}\dul{U}}^\top &= \dul{\Sigma}_{\textrm{a}} - \dul{\Sigma}_{\textrm{ab}}\dul{\Sigma}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top.\\
\implies \enspace \textrm{det}^2\bpth{\dul{L}_{\textrm{a}}\dul{U}}\det\bpth{\dul{\Sigma}'} &= \det\bpth{\dul{\Sigma}_{\textrm{a}} - \dul{\Sigma}_{\textrm{ab}}\dul{\Sigma}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top}
\end{align*}
Furthermore \(\det^2\bpth{\dul{L}_{\textrm{a}}\dul{U}} = \det\bpth{\dul{L}_{\textrm{a}}\dul{U}\dul{U}^\top\dul{L}_{\textrm{a}}^\top} = \det\bpth{\dul{L}_{\textrm{a}}\dul{L}_{\textrm{a}}^\top} = \det\bpth{\dul{\Sigma}_{\textrm{a}}}\). So
\begin{align*}
\det\bpth{\dul{\Sigma}_{\textrm{a}}}\det\bpth{\dul{\Sigma}_{\textrm{b}}}\det\bpth{\dul{\Sigma}'} &= \det\bpth{\dul{\Sigma}_{\textrm{b}}} \det\bpth{\dul{\Sigma}_{\textrm{a}} - \dul{\Sigma}_{\textrm{ab}}\dul{\Sigma}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top}.
\end{align*}
Notice that the right-hand side is precisely the block-matrix expression for the determinant of \(\dul{\Sigma}\). So \begin{align*}
\det\bpth{\dul{\Sigma}'} &=\frac{\det\bpth{\dul{\Sigma}}}{\det\bpth{\dul{\Sigma}_{\textrm{a}}}\cdot\det\bpth{\dul{\Sigma}_{\textrm{b}}}},\\
\textrm{and hence}\quad I_{XY} &= -\frac{1}{2}\log\det\bpth{\dul{\Sigma}'} = -\frac{1}{2}\sum_{i\in[d]}\log(1-\rho_i^2).
\end{align*}
\end{proof}
\begin{lemma}
\label{lemma:variance}
Let \(p_X\), \(p_Y\) and \(p_{XY}\) denote the probability density functions of \(\ul{X}\), \(\ul{Y}\) and \((\ul{X},\ul{Y})\) respectively, where \((\ul{X},\ul{Y})\) are a pair of correlated feature vectors. Let
\begin{align*}
\dul{\Sigma} = \crocMat{\dul{\Sigma}_{\textrm{a}}}{\dul{\Sigma}_{\textrm{ab}}}{\dul{\Sigma}_{\textrm{ab}}^\top}{\dul{\Sigma}_{\textrm{b}}} \textrm{ and } \dul{\Sigma}'' = \crocMat{\dul{I}}{\operatorname{diag}(\ul{\rho})}{\operatorname{diag}(\ul{\rho})}{\dul{I}}
\end{align*}
be the covariance matrices of these features and of their transformed variants, as described above. Then
\begin{align*}
\operatorname{Var}\pth{\log\frac{p_{XY}(\ul{X},\ul{Y})}{p_X(\ul{X})p_Y(\ul{Y})}} &= \sum_{i\in[d]}\rho_i^2\\
&= \operatorname{tr}\bpth{\dul{\Sigma}_{\textrm{a}}^{-1}\dul{\Sigma}_{\textrm{ab}}\dul{\Sigma}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top}
\end{align*}
\end{lemma}
\begin{proof}
Notice that the log likelihood ratio of the transformed features is the same as the log likelihood ratio of the original features. Thus we simply assume that the original feature vectors already have a covariance matrix of the specified form. We already know the mean of the log likelihood ratio, which is equal to the mutual information \(I_{XY}\). Then
\begin{align*}
\log\frac{p_{XY}(\ul{X},\ul{Y})}{p_X(\ul{X})p_Y(\ul{Y})} - I_{XY} &= \sum_{i\in[d]}-\frac{\rho_i^2(X_i^2+Y_i^2)-2\rho_iX_iY_i}{2(1-\rho_i^2)}
\end{align*}
Let \(Z_i \triangleq \rho_i^2\pth{X_i^2+Y_i^2}-2\rho_iX_iY_i\) for any \(i\in[d]\).
Then
\begin{align*}
\operatorname{Var}&\pth{\log\frac{p_{XY}(\ul{X},\ul{Y})}{p_X(\ul{X})p_Y(\ul{Y})}} = \E{\pth{\sum_{i\in[d]}\frac{Z_i}{2(1-\rho_i^2)}}^2}\\
&= \sum_{i\in[d]} \frac{\E{Z_i^2}}{4\pth{1-\rho_i^2}^2} + \sum_{\{i,j\}\in\binom{[d]}{2}} \frac{2\E{Z_i}\E{Z_j}}{4\pth{1-\rho_i^2}\pth{1-\rho_j^2}}
\end{align*}
Observe that, for any \(i\in[d]\)
\begin{align*}
\E{Z_i} = \rho_i^2\E{X_i^2}+\rho_i^2\E{Y_i^2} - 2\rho_i\E{X_iY_i} = 0.
\end{align*}
Furthermore
\begin{align*}
\E{Z_i^2} =& \rho_i^4\pth{\E{X_i^4}+\E{Y_i^4}}\\
&- 4\rho_i^3\pth{\E{X_i^3Y_i}+\E{X_iY_i^3}}\\
&+ \pth{2\rho_i^4+4\rho_i^2}\E{X_i^2Y_i^2}
\end{align*}
Note that \((X_i,Y_i)\) has the same distribution as \((Y_i,X_i)\). Then the expression for \(\E{Z_i^2}\) simplifies and we get
\begin{align*}
\E{Z_i^2} =&\,\, 2\rho_i^4\E{X_i^4} - 8\rho_i^3\E{X_i^3Y_i}\\
&+ \pth{2\rho_i^4+4\rho_i^2}\E{X_i^2Y_i^2}.\\
\intertext{By Isserlis' theorem this gives us}
\E{Z_i^2} =&\,\, 6\rho_i^4\mathbb{E}^2\croc{X_i^2}-24\rho_i^3\E{X_i^2}\E{X_iY_i}\\
&+\pth{2\rho_i^4+4\rho_i^2}\cdot\pth{\E{X_i^2}\E{Y_i^2}+2\mathbb{E}^2\croc{X_iY_i}}\\
&= 4\rho_i^2 - 8\rho_i^4 + 4\rho_i^6 = 4\rho_i^2\pth{1-\rho_i^2}^2.
\end{align*}
Then \(\frac{\E{Z_i^2}}{4\pth{1-\rho_i^2}^2} = \rho_i^2\) and \(\operatorname{Var}\pth{\log \frac{p_{XY}(\ul{X},\ul{Y})}{p_X(\ul{X})p_Y(\ul{Y})}} = \sum_{i\in[d]} \rho_i^2\).
Finally we derive the expression for this variance based on the original covariance matrix. We refer to the matrices \(\dul{\Sigma}_{\textrm{ab}}''\), \(\dul{\Sigma}_{\textrm{ab}}'\), \(\dul{L}_{\textrm{a}}\), \(\dul{U}\) as given in the description of the feature transformation.
\begin{align*}
\sum_{i\in[d]} \rho_i^2 &= \operatorname{tr}\bpth{\dul{\Sigma}_{\textrm{ab}}''\dul{\Sigma}_{\textrm{ab}}''^\top}
= \operatorname{tr}\bpth{\dul{\Sigma}_{\textrm{ab}}'\dul{\Sigma}_{\textrm{ab}}'^\top}\\
&= \operatorname{tr}\Bpth{\bpth{\dul{U}^\top\dul{L}_{\textrm{a}}^{-1}\dul{\Sigma}_{\textrm{ab}}\bpth{\dul{L}_{\textrm{b}}^{-1}}^\top \dul{V}}\bpth{\dul{V}^\top\dul{L}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top\bpth{\dul{L}_{\textrm{a}}^{-1}}^\top \dul{U}}}\\
&= \operatorname{tr}\Bpth{\bpth{\dul{L}_{\textrm{a}}^{-1}}^\top\dul{L}_{\textrm{a}}^{-1}\dul{\Sigma}_{\textrm{ab}}\bpth{\dul{L}_{\textrm{b}}^{-1}}^\top\dul{L}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top}\\
&= \operatorname{tr}\bpth{\dul{\Sigma}_{\textrm{a}}^{-1}\dul{\Sigma}_{\textrm{ab}}\dul{\Sigma}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top}
\end{align*}
\end{proof}
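A short Monte Carlo check of Lemma \ref{lemma:variance} in the canonical setting (illustration only; the correlations are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
rho = np.array([0.8, 0.5, 0.3])
d, N = rho.size, 200_000

x = rng.standard_normal((N, d))
y = rho*x + np.sqrt(1 - rho**2)*rng.standard_normal((N, d))   # matched pairs

# log likelihood ratio per pair, in the canonical setting
llr = np.sum(-0.5*np.log(1 - rho**2)
             - (rho**2*(x**2 + y**2) - 2*rho*x*y) / (2*(1 - rho**2)), axis=1)

print(llr.mean(), -0.5*np.sum(np.log(1 - rho**2)))   # empirical mean vs I_XY
print(llr.var(),  np.sum(rho**2))                    # empirical variance vs sum rho_i^2
\end{verbatim}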
\section{Additional lemmas}
\begin{lemma}
\label{lemma:elementary}
Given any \(k\in\mathbb{N}\), \(\mu\in\mathbb{R}_+\), \(\delta\in[0,\mu)\) and a sequence \(z_1,z_2,\cdots,z_{2k}\in[\mu-\delta,\mu+\delta]\) with mean \(\mu\), we have
\begin{align*}
\prod_{i\in[2k]} z_i \,\, \geq \pth{\mu-\delta}^k\pth{\mu+\delta}^k.
\end{align*}
\end{lemma}
\begin{proof}
We give an algorithmic proof.
Consider a sequence \(z_1,z_2,\cdots,z_{2k}\) with mean \(\mu\) and all entries in the range \([\mu-\delta,\mu+\delta]\). Assume the product \(\pi\) of the sequence is not \(\pth{\mu-\delta}^k\pth{\mu+\delta}^k\). Then there exist two entries \(z_i,z_j\in(\mu-\delta,\mu+\delta)\). Without loss of generality assume \(|z_i-\mu|\geq|z_j-\mu|\) and \(z_i\leq z_j\). Modify the sequence by replacing \(z_i\) with \(\mu-\delta\) and \(z_j\) with \(z_j+z_i-\mu+\delta\). The new sequence still has mean \(\mu\) and entries within the same range. However, observe that its product \(\frac{(\mu-\delta)(z_j+z_i-\mu+\delta)}{z_iz_j}\times \pi\) is strictly smaller.
Iteratively applying this modification to any initial sequence eventually (in at most \(2k-1\) modifications) turns the sequence into one with product \(\pth{\mu-\delta}^k\pth{\mu+\delta}^k\), which cannot be larger than the product of any intermediary sequence or of the initial sequence.
\end{proof}
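The inequality of Lemma \ref{lemma:elementary} can also be spot-checked numerically; the sketch below builds mean-\(\mu\) sequences of a particular paired form (an assumption made only to pin the mean exactly) and compares their product with the claimed lower bound.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
k, mu, delta = 5, 1.1, 0.3

# offsets paired with their negatives keep the mean exactly at mu
half = rng.uniform(-delta, delta, size=k)
z = mu + np.concatenate([half, -half])               # 2k entries in [mu-delta, mu+delta]

print(np.prod(z), (mu - delta)**k * (mu + delta)**k) # left value >= right value
\end{verbatim}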
\subsection{Converse analysis}
We present a converse on the performance of the binary hypothesis testing algorithm based on Fano's inequality.
\bcomment{
Given arbitrary pair of identifiers \((u,v)\in\calU\times\calV\), define random variables \(Z\triangleq\ind{(u,v)\in M}\) and \(\hat{Z}\triangleq\ind{(u,v)\in \hat{m}(F)}\). Notice that \(Z\) only depends on rows \(\ul{A}_{u*}\) and \(\ul{B}_{v*}\) in \(F = \bpth{\dul{A},\dul{B}}\). Let \((\ul{X},\ul{Y}) = (\ul{A}_{u*}^\top,\ul{B}_{v*}^\top)\).
\begin{lemma}
\label{lemma:conditionalEntropy}
\begin{align*}
H(Z|\ul{X},\ul{Y}) \geq \frac{\log n - I_{\ul{X}\ul{Y}}}{n}
\end{align*}
\end{lemma}
\begin{proof}
By Bayes' theorem,
\begin{align}
H(Z|\ul{X},\ul{Y}) &= \E{-\log\Pr[Z|\ul{X},\ul{Y}]} \nonumber\\
&= \E{-\log\frac{p_{XY|Z}(\ul{X},\ul{Y}|Z)\cdot\Pr[Z]}{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}} \nonumber\\
&= h(\ul{X},\ul{Y}|Z) + H(Z) - h(\ul{X},\ul{Y}), \label{eqn:conLRT1}
\end{align}
where \(p_{XY|Z}\) and \(p_{\ul{X}\ul{Y}}\) are probability density functions and \(h\) denotes differential entropy.
By \(|\calU\times\calV|= n|M| = n^2\), given arbitrary \((u,v)\in\calU\times\calV\) we have \(\Pr[(u,v)\in M] = \Pr[Z=1] = 1/n\). Then, given \(H_b\) the binary entropy function, \(H(Z) = H_b(1/n) \geq \frac{\log n}{n}\).
We have
\begin{align*}
h(\ul{X},\ul{Y}|Z) &= \mathbb{E}_Z\croc{h(\ul{X},\ul{Y}|Z=z)}\\
&= \sum_{z\in\{0,1\}} \Pr[Z=z]\cdot h(\ul{X},\ul{Y}|Z=z).
\end{align*}
For \(Z=1\), \(\ul{X}\) and \(\ul{Y}\) are jointly Gaussian and \begin{align*}
h(\ul{X},\ul{Y}|1) &= \frac{1}{2}\log\pth{\det\bpth{2\pi e\dul{\Sigma}}}\\
&= d+d\log(2\pi) + \frac{1}{2}\log\pth{\det\bpth{\dul{\Sigma}}},
\end{align*}
where \(\dul{\Sigma}\) is the covariance matrix of vector \((\ul{X}^\top,\ul{Y}^\top)^\top\), as defined in section \ref{subsec:problem}.
For \(Z=0\), \(\ul{X}\) and \(\ul{Y}\) are independent and therefore
\begin{align*}
h(\ul{X},\ul{Y}|0) &= h(\ul{X}) + h(\ul{Y})\\
&= \frac{1}{2}\log\pth{\det\bpth{2\pi e\dul{\Sigma}_A}} + \frac{1}{2}\log\pth{\det\bpth{2\pi e\dul{\Sigma}_B}}\\
&= d+d\log(2\pi) + \frac{1}{2}\log\pth{\det\bpth{\dul{\Sigma_A}}\cdot\det\bpth{\dul{\Sigma_B}}},
\end{align*}
where \(\dul{\Sigma}_A\) and \(\dul{\Sigma}_B\) are the covariance matrices of vectors \(\ul{X}\) and \(\ul{Y}\) respectively, as defined in section \ref{subsec:problem}.
Then, by \(\Pr[Z=1] = 1/n\),
\begin{align}
h(\ul{X},\ul{Y}|Z) =& d+d\log(2\pi) + \frac{1}{2}\log\pth{\det\bpth{\dul{\Sigma_A}}\cdot\det\bpth{\dul{\Sigma_B}}} \nonumber\\
&+ \frac{1}{2n}\log\frac{\det\bpth{\dul{\Sigma}}}{\det\bpth{\dul{\Sigma}_A}\cdot\det\bpth{\dul{\Sigma}_B}}. \label{eqn:conLRT2}
\end{align}
Notice that this last term is equal to \(-I_{\ul{X}\ul{Y}}/n\).
Finally
\begin{align*}
h(\ul{X},\ul{Y}) &= h(\ul{X}) + h(\ul{Y}) - I(\ul{X};\ul{Y}) \leq h(\ul{X}) + h(\ul{Y})\\
&= d+d\log(2\pi) + \frac{1}{2}\log\pth{\det\bpth{\dul{\Sigma_A}}\cdot\det\bpth{\dul{\Sigma_B}}}
\end{align*}
Combining this with \(H(Z)=\log n/n\), (\ref{eqn:conLRT1}) and (\ref{eqn:conLRT2}) gives us the claimed result.
\end{proof}}
\begin{lemma}
\label{lemma:conditionalEntropy}
For $u \in \calU$ and $v \in \calV$, $H(\dul{M}_{u,v}|\dul{A}_u,\dul{B}_v) \geq \frac{\log n - I_{\ul{X}\ul{Y}}}{n}$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
H(\dul{M}_{u,v}|\dul{A}_u,\dul{B}_v) = H(\dul{M}_{u,v}) + I(\dul{A}_u;\dul{B}_v)
- I(\dul{M}_{u,v};\dul{A}_u) - I(\dul{M}_{u,v};\dul{B}_v) - I(\dul{A}_u;\dul{B}_v|\dul{M}_{u,v}).
\end{align*}
Then $I(\dul{M}_{u,v};\dul{A}_u) = I(\dul{M}_{u,v};\dul{B}_v) = 0$ and
\begin{align*}
I(\dul{A}_u;\dul{B}_v|\dul{M}_{u,v})
&= \frac{n-1}{n}\, I(\dul{A}_u;\dul{B}_v\mid \dul{M}_{u,v} = 0)
+ \frac{1}{n}\, I(\dul{A}_u;\dul{B}_v\mid \dul{M}_{u,v} = 1)\\
&= \frac{n-1}{n}\cdot 0 + \frac{1}{n}I_{\ul{X}\ul{Y}}.
\end{align*}
Finally $I(\dul{A}_u;\dul{B}_v) \geq 0$ and $H(\dul{M}_{u,v}) = \frac{1}{n} \log n + \frac{n-1}{n}\log \frac{n}{n-1} \geq \frac{\log n}{n}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:conLRT}]
Let \(\hat{\dul{M}}_{u,v}\triangleq \ind{\bpth{\dul{A}_u,\dul{B}_v}\in H_\tau}\) denote the estimate of whether identifiers \(u\) and \(v\) are matched. We have a correct estimation if \(\hat{\dul{M}}_{u,v} = \dul{M}_{u,v}\).
Define \(E \triangleq \ind{\hat{\dul{M}}_{u,v} \neq \dul{M}_{u,v}}\). Then by Fano's inequality,
\begin{align*}
H(\dul{M}_{u,v}|\dul{A}_u,\dul{B}_v) \leq H(E) + \Pr[E=1]\log\pth{|\{0,1\}|-1} = H(E),
\end{align*}
since the second term vanishes.
Let \(\epsilon \triangleq \Pr[E=1]\). This value can also be expressed as the expected frequency of matching errors, i.e.\ with \(\varepsilon_{FN}\) and \(\varepsilon_{FP}\) denoting the expected numbers of false negatives and false positives, \(\epsilon = \frac{\varepsilon_{FN}+\varepsilon_{FP}}{|\calU\times\calV|} = \frac{\varepsilon_{FN}+\varepsilon_{FP}}{n^2}\).
Let \(H_b\) denote the binary entropy function. By Fano's inequality, using Lemma \ref{lemma:conditionalEntropy}, we have
\begin{equation}
H_b(\epsilon) \geq H(\dul{M}_{u,v}|\dul{A}_u,\dul{B}_v) \geq \frac{\log n-I_{\ul{X}\ul{Y}}}{n} \label{eqn:conLRT3}
\end{equation}
We have
\begin{align*}
H_b(\epsilon) &\leq -\epsilon\log\epsilon + \epsilon
= \frac{\varepsilon_{FN}+\varepsilon_{FP}}{n^2}\pth{2\log n - \log \pth{\varepsilon_{FN}+\varepsilon_{FP}} + 1}.
\end{align*}
Combining this with (\ref{eqn:conLRT3}) gives us
\begin{equation*}
\varepsilon_{FN}+\varepsilon_{FP} \geq \frac{n}{2} \pth{\frac{\log n -I_{\ul{X}\ul{Y}}}{2 \log n + 1}}
\end{equation*}
and the claim follows.
\end{proof}
\bcomment{
\newtheorem*{tConLRT}{Theorem \ref*{thm:conLRT}}
\begin{tConLRT}
\thmConLRT
\end{tConLRT}
\begin{proof}
Define \(E \triangleq \ind{Z\neq \hat{Z}}\). Then by Fano's inequality
\begin{align*}
H(Z|F) \leq H(E) + \Pr[E=1]\log(|\{0,1\}|-1) = H(E)
\end{align*}
Let \(\epsilon \triangleq \Pr[E=1]\). This value can also be expressed as the expected frequency of false matches, i.e. given \(\varepsilon_{FN}\) and \(\varepsilon_{FP}\) the expected number of false negatives and false positives, \(\epsilon = \frac{\varepsilon_{FN}+\varepsilon_{FP}}{|\calU\times\calV|} = \frac{\varepsilon_{FN}+\varepsilon_{FP}}{n^2}\).
Let \(H_b\) denote the binary entropy function. By Fano's inequality, using Lemma \ref{lemma:conditionalEntropy}, we have
\begin{align}
H_b(\epsilon) &\geq H(Z|F) = H(Z|\ul{X},\ul{Y}) \nonumber\\
&\geq \frac{\log n-I_{\ul{X}\ul{Y}}}{n} + o(1) \label{eqn:conLRT3}
\end{align}
For \(\epsilon = o(1)\) we have
\begin{align*}
H_b(\epsilon) &= -\epsilon\log\epsilon + o(1) \nonumber\\
&= \frac{\varepsilon_{FN}+\varepsilon_{FP}}{n^2}\pth{2\log n - \log \pth{\varepsilon_{FN}+\varepsilon_{FP}}} + o(1) \nonumber\\
&\leq \frac{2\pth{\varepsilon_{FN}+\varepsilon_{FP}}\log n}{n^2}.
\end{align*}
Combining this with (\ref{eqn:conLRT3}) gives us
\begin{align*}
\frac{I_{\ul{X}\ul{Y}}}{\log n} \geq 1-\frac{2\pth{\varepsilon_{FN}+\varepsilon_{FP}}}{n}
\end{align*}
\end{proof}
}
\subsection{Converse analysis}
We establish a necessary condition on the mutual information \(I_{XY}\) between feature pairs to achieve a perfect alignment.
\begin{lemma}
\label{lemma:Cramer}
Let \(d\in\mathbb{N}\) with \(d=\omega(1)\), and let \(\dul{\Sigma}_A=\dul{\Sigma}_B=\dul{I}^d\) and \(\dul{\Sigma}_{AB}=\rho\dul{I}\). Given bijective matchings \(m_1,m_2\subseteq \calU\times\calV\) such that \(|m_1\cap m_2| = n-2\),
\begin{align*}
\Pr[F\in\mathcal{E}(m_1,m_2)|M=m_1] \geq (1-\rho^2)^{\frac{d}{2}(1+o(1))}.
\end{align*}
\end{lemma}
\begin{proof}
Consider the conditional generating function
\begin{align*}
c_i(\theta)
&= \E{\exp\pth{\theta\log\frac{p_{\dul{F}_{*i}|M}(\dul{f}_i|m_2)}{p_{\dul{F}_{*i}|M}(\dul{f}_i|m_1)}}\Big| M=m_1}\\
&= \int \pth{\frac{p_{\dul{F}_{*i}|M}(\dul{f}_i|m_2)}{p_{\dul{F}_{*i}|M}(\dul{f}_i|m_1)}}^{\theta} p_{\dul{F}_{*i}|M}(\dul{f}_i|m_1) d\dul{f}_i\\
&= \int (p_{\dul{F}_{*i}|M}(\dul{f}_i|m_2))^{\theta} (p_{\dul{F}_{*i}|M}(\dul{f}_i|m_1))^{1-\theta} d\dul{f}_i
\end{align*}
The generating function is minimized at \(\theta = 1/2\) in which case we get \(c_i(\theta) = R_i(m_1,m_2)\).
We evaluate the value of this function using Lemmas \ref{lemmma:bound2determinant} and \ref{lemma:determinant} with \(s=1-\rho^2/2\) and \(t=\rho^2/2\). By \(|m_1\cap m_2| = n-2\) we get \(R_i(m_1,m_2) = \sqrt{1-\rho^2}\).
By Cramér's Theorem on the asymptotic tightness of the Chernoff bound (see for example \cite{hajek}), there is some \(\epsilon(d) = o(1)\) such that
\begin{align*}
\mathrel{\phantom{=}} \Pr\croc{\log\frac{p_{\dul{F}|M}(\dul{f}|m_2)}{p_{\dul{F}|M}(\dul{f}|m_1)}\geq 0}
&= \Pr\croc{\sum_{i \in [d]} \log\frac{p_{\dul{F_{*i}}|M}(\dul{f}_{*i}|m_2)}{p_{\dul{F}_{*i}|M}(\dul{f}_{*i}|m_1)}\geq 0}\\
&\geq \exp\pth{-d\croc{\epsilon - \inf_\theta \log c_i(\theta)}}\\
&= \exp\pth{d\pth{\log R_i(m_1,m_2) - \epsilon}}\\
&= \pth{R_i(m_1,m_2)}^{d(1+o(1))}\\
&= \pth{1-\rho^2}^{\frac{d}{2}(1+o(1))}
\end{align*}
\end{proof}
\begin{lemma}
\label{lemma:errorIntersection}
If \(\dul{\Sigma}_A=\dul{\Sigma}_B=\dul{I}^d\) and \(\dul{\Sigma}_{AB}=\rho\dul{I}\), then given any bijective matchings \(m_1,m_2,m_3\subset\calU\times\calV\)
\begin{align*}
\Pr[F\in\mathcal{E}(m_1,m_2)\cap \mathcal{E}&(m_1,m_3)|M=m_1]
\leq \pth{1-\rho^2}^{\frac{d}{4}\pth{n-|m_2\cap m_3|}}.
\end{align*}
\end{lemma}
\begin{proof}
We will abbreviate $p_{\dul{F}|M}(\cdot|\cdot)$ as $p(\cdot|\cdot)$.
For any \(\theta,\theta'>0\) we have
\begin{align*}
\Pr[F\in\mathcal{E}(m_1,m_2)\cap \mathcal{E}(m_1,m_3)|M=m_1]
&= \E{\ind{\frac{p(\dul{f}|m_2)}{p(\dul{f}|m_1)}\geq1,\frac{p(\dul{f}|m_3)}{p(\dul{f}|m_1)}\geq1}\Big|M=m_1}\\
&\leq \int \pth{\frac{p(\dul{f}|m_2)}{p(\dul{f}|m_1)}}^{\theta}\pth{\frac{p(\dul{f}|m_3)}{p(\dul{f}|m_1)}}^{\theta'} p(\dul{f}|m_1) d\dul{f}\\
&= \int (p(\dul{f}|m_2))^{\theta} (p(\dul{f}|m_3))^{\theta'} (p(\dul{f}|m_1))^{1-\theta-\theta'} d\dul{f}.
\end{align*}
The choice of \(\theta=\theta'=1/2\) gives the upper bound \(R(m_2,m_3)\). We evaluate this function using Lemmas \ref{lemmma:bound2determinant} and \ref{lemma:determinant}, which yield the claimed result.
\end{proof}
\newtheorem*{tConMAP}{Theorem \ref*{thm:conMAP}}
\begin{tConMAP}
\thmConMAP
\end{tConMAP}
\begin{proof}
Let \(\mathcal{M}^\mathcal{E}(f,m) \triangleq \acc{m'|f\in\mathcal{E}(m,m'), m'\neq m}\) denote the set of matchings that are at least as likely as \(m\) under the database instance \(f\). The MAP algorithm succeeds if and only if \(\mathcal{M}^\mathcal{E}(F,M)=\emptyset\).
Also define \(\mathcal{M}_2(m) \triangleq \acc{m'\big||m\cap m'| = n-2}\) and \(\mathcal{M}_2^\mathcal{E}(f,m) \triangleq \mathcal{M}^\mathcal{E}(f,m) \cap \mathcal{M}_2(m)\).
For compactness, let \(X \triangleq |\mathcal{M}_2^\mathcal{E}(F,M)|\).
Clearly \(0 \leq X \leq |\mathcal{M}^\mathcal{E}(F,M)|\).
We apply Chebyshev's inequality:
\begin{align*}
\Pr[|\mathcal{M}^\mathcal{E}(F,M)| = 0] \leq \Pr[X = 0]
\leq \Pr\croc{\pth{X-\mathbb{E}X}^2 \geq \mathbb{E}^2X}
\leq \frac{\operatorname{Var} X}{\mathbb{E}^2 X}
\end{align*}
All matchings are equally likely. Therefore, given any bijective matching \(m\subset\calU\times\calV\),
\begin{align*}
\mathbb{E}\norm{\mathcal{M}_2^\mathcal{E}(F,M)} =\sum_{m'\in \mathcal{M}_2(m)} \Pr[F\in\mathcal{E}(m,m')|M=m].
\end{align*}
Let \(\varepsilon_1 \triangleq \Pr[F\in\mathcal{E}(m,m')|M=m]\) given \(|m\cap m'| = n-2\). Notice that this probability does not depend on the choice of \(m'\in\mathcal{M}_2(m)\). Then \(\mathbb{E}\norm{\mathcal{M}_2^\mathcal{E}(F,M)} = |\mathcal{M}_2(m)|\cdot \varepsilon_1 = \binom{n}{2}\cdot\varepsilon_1\).
\begin{align*}
|\mathcal{M}_2^\mathcal{E}(f,m)|^2
&= \croc{\sum_{m'\in \mathcal{M}_2(m)}\ind{f\in \mathcal{E}(m,m')}}^2\\
&= \sum_{m'\in \mathcal{M}_2(m)} \ind{\mathcal{E}(m,m')}
+ 2\times \sum_{\{m',m''\} \subset \mathcal{M}_2(m)} \ind{\mathcal{E}(m,m'),\mathcal{E}(m,m'')}
\end{align*}
There are \(3\binom{n}{4}\) different ways to choose two matchings \(\{m',m''\}\subset \mathcal{M}_2(m)\) such that \(|m'\cap m''| = n-4\), and \(3\binom{n}{3}\) ways to choose them such that \(|m'\cap m''| = n-3\).
Notice that \(3\binom{n}{4}+3\binom{n}{3} = \binom{|\mathcal{M}_2(m)|}{2}\) and these partition all the choices for \(\{m',m''\}\subset \mathcal{M}_2(m)\).
When \(|m'\cap m''| = n-4\), the error events become independent and we get \[\Pr[F\in\mathcal{E}(m,m')\cap \mathcal{E}(m,m'')|M=m] = \varepsilon_1^2.\]
Let \(\varepsilon_2 \triangleq \Pr[F\in\mathcal{E}(m,m')\cap \mathcal{E}(m,m'')|M=m]\) given \(|m'\cap m''| = n-3\).
Using the relation \(z + 2\binom{z}{2} = z^2\) with \(z = |\mathcal{M}_2(m)| = \binom{n}{2}\) and \(\binom{z}{2} = 3\binom{n}{3} + 3\binom{n}{4}\), we can write:
\begin{align*}
\mathbb{E}^2\norm{\mathcal{M}_2^\mathcal{E}(F,M)}
&= |\mathcal{M}_2(m)|^2\varepsilon_1^2\\
&= \binom{n}{2}\varepsilon_1^2 + \croc{6\binom{n}{3}+6\binom{n}{4}}\varepsilon_1^2\\
\E{|\mathcal{M}_2^\mathcal{E}(F,M)|^2} &= \binom{n}{2} \varepsilon_1 + 6\binom{n}{3}\varepsilon_2 + 6\binom{n}{4}\varepsilon_1^2\\
\operatorname{Var}\norm{\mathcal{M}_2^\mathcal{E}(F,M)} &= \binom{n}{2}(\varepsilon_1-\varepsilon_1^2) + 6\binom{n}{3}(\varepsilon_2-\varepsilon_1^2)\\
&\leq \binom{n}{2}\varepsilon_1 + 6\binom{n}{3}\varepsilon_2
\end{align*}
Plugging these values into the Chebyshev bound we get
\begin{align*}
\Pr[|\mathcal{M}^\mathcal{E}(F,M)| = 0]
&\leq \frac{\binom{n}{2}\varepsilon_1 + 6\binom{n}{3}\varepsilon_2}{\binom{n}{2}^2\varepsilon_1^2}
\leq \mathcal{O}\pth{\frac{1}{n^2\varepsilon_1}+\frac{\varepsilon_2}{n\varepsilon_1^2}}
\end{align*}
By Lemmas \ref{lemma:Cramer} and \ref{lemma:errorIntersection} we have \(\varepsilon_1\geq (1-\rho^2)^{\frac{d}{2}(1+o(1))}\) and \(\varepsilon_2 \leq (1-\rho^2)^{3d/4}\).
Thus \(\varepsilon_1^2/\varepsilon_2 \geq (1-\rho^2)^{\frac{d}{4}(1+o(1))}\).
If $(1-\rho^2)^{\frac{d}{2}} \geq n^{-2 + \Omega(1)}$, then
\begin{align*}
n\varepsilon_1^2/\varepsilon_2 \geq n(1-\rho^2)^{\frac{d}{4}(1+o(1))} &\geq n^{1+\frac{1}{2}(1+o(1))(-2+\Omega(1))} \geq n^{\Omega(1)}\\
\textrm{and } n^2\varepsilon_1 = n^2 (1-\rho^2)^{\frac{d}{2}(1+o(1))} &\geq n^{2 + (1+o(1))(-2+\Omega(1))} \geq n^{\Omega(1)}
\end{align*}
and therefore $\Pr[|\mathcal{M}^\mathcal{E}(F,M)| = 0] \leq \mathcal{O}(n^{-\Omega(1)}) \leq o(1)$.
\bcomment{
Let \(\eta\leq \mathcal{O}(1)\) such that \((1-\rho_i^2)^d \geq n^{-2+\eta}\).
Then \(\varepsilon_1 \geq n^{-2+\eta - o(1)}\) and \(\varepsilon_2/\varepsilon_1 \leq n^{-1+
\eta/2+o(1)}\). This gives us
\begin{align*}
\Pr[e(F,M) = 0]
&\leq \mathcal{O}\pth{\frac{n^{-2+\eta/2+o(1)}}{\varepsilon_1}}\\
& \leq \mathcal{O}\pth{n^{-\eta/2 +o(1)}}
\end{align*}
If \(\eta \geq \omega(1/\log n)\) then \(\Pr[e(F,M) = 0] \leq o(1)\).
This corresponds to \(I_{XY} \leq (2-\eta)\log n \leq 2\log n - \omega(1)\).
From Lemma~\ref{lemma:three-matchings} we have $\epsilon_2 \leq (b^{\circ}_2(z,z))^{\frac{3}{2}}$ and from Lemma~\ref{lemma:event-lb} we have $\epsilon_1 \geq (b^{\circ}_2(z,z))^{1 + o(1)}$, so
\begin{equation*}
\Pr[ = 0] \leq \mathcal{O}\left(\frac{1}{n^2 (b^{\circ}_2(z,z))^{1 + o(1)}} + \frac{1}{n(b^{\circ}_2(z,z))^{\frac{1}{2} + o(1)}}\right).
\end{equation*}
If $b^{\circ}_2(z,z) \geq n^{-2 + \Omega(1)}$, then
\[
n^2 b^{\circ}_2(z,z)^{1+o(1)} \geq n^{2 + (1+o(1))(-2+\Omega(1))} \geq n^{\Omega(1)} \geq \omega(1)
\]
and $\Pr[X = 0] \leq o(1)$.
}
\end{proof}
\section{Introduction}
Consider the following setting: there is a large set of entities (e.g.\ users) with some measurable characteristics. Let the measurements of these characteristics be jointly Gaussian, with known statistics. We refer to these measurements as features. Consider two different sources, each providing a database that lists the features of these entities. Furthermore, let one of these sources lack the labeling that would allow feature pairs from the two sources corresponding to the same entity to be identified. This might be due to privacy concerns, if the features provided by that source contain sensitive information that ought to remain anonymous, or it might simply be that a reliable labeling is not available.
If the correlation between feature pairs is sufficiently strong, then it is possible to exploit this correlation to identify correspondences between the two databases and, in fact, to generate a perfect alignment between the feature lists. Such a capability might be a valuable tool for recovering missing information, by labeling unlabeled features or by allowing measurements coming from distinct sources to be joined. However, it also has serious privacy implications, as it makes anonymized data vulnerable to deanonymization attacks \cite{imdb}.
It then becomes critical to understand the limitations of database alignment and to identify the conditions that characterize these limitations. This allows us to assess the feasibility and reliability of alignment procedures as well as the vulnerability of anonymization schemes. In this study we investigate the conditions that guarantee either the achievability of alignment or its infeasibility, and we analyze these conditions both for partial and for complete alignments. Cullina et al.\ have recently analyzed this problem for the case of discrete random variables, introducing a new correlation measure characterizing the feasibility of alignment \cite{DBLP:conf/isit/2018}. Takbiri et al.\ have investigated a related problem where the feature of each user is Gaussian with user-characteristic statistics and is correlated with the features of other users \cite{takbiriGaussian}. In this setting an adversary with perfect knowledge of the system statistics attempts to match features with the characteristic user statistics. This follows the authors' previous studies of the same setting for discrete-valued features and with data obfuscation \cite{takbiri1},\cite{DBLP:journals/corr/abs-1805-01296}.
The database alignment problem is connected to the widely studied graph alignment problem.
In that setting, each feature is associated with a pair of anonymized users.
In the simplest case, the feature is a Bernoulli random variable indicating the presence or absence of an edge between the users.
A recent line of work has characterized the information theoretic limits of the graph alignment problem \cite{pedarsani_privacy_2011,cullina_exact_2017,cullina_improved_2016}.
The problem of aligning correlated Wigner matrices, in which each feature is a Gaussian random variable, has served as a proxy for understanding the effectiveness of graph alignment algorithms \cite{ding_efficient_2018}.
\section{Model}
\paragraph{Notation}
We denote random variables by capital letters and fixed values by lowercase letters.
For a set \(\mathcal{S}\) and finite sets \(\mathcal{T},\mathcal{U}\), we denote by \(\mathcal{S}^{\mathcal{T}\times\mathcal{U}}\) the set of matrices with entries in \(\mathcal{S}\), rows indexed by \(\mathcal{T}\) and columns indexed by \(\mathcal{U}\).
We mark vectors with arrows and write matrices in boldface.
Given some matrix \(\dul{z}\), we denote its \(i\)-th row by \(\dul{z}_{i*}\), \(j\)-th column by \(\dul{z}_{*j}\) and its \((i,j)\)-th entry by \(\dul{z}_{ij}\).
We denote the identity matrix with rows and columns indexed by $\mathcal{T}$ by \(\dul{I}^{\mathcal{T}}\).
When the indexing set is clear from context, we will drop the superscript.
We denote the set of integers from \(1\) to \(n\) by \([n]\).
\subsection{General problem formulation}
\label{subsec:problem}
In this model, a database is just a function from a set of users to some space.
The value of the function for a user $u$ is the database entry for that user.
Cullina, Mittal, and Kiyavash considered database entries in finite alphabets\cite{DBLP:conf/isit/2018}.
In this paper, we consider database entries that are finite dimensional real vectors sampled from a gaussian distribution.
We are given two sets of user identifiers, \(\calU\) and \(\calV\), with \(|\calU|=|\calV|=n\).
We express the content of databases by matrices \(\dul{A}\in\mathbb{R}^{\calU\times[d_a]}\) and \(\dul{B}\in\mathbb{R}^{\calV\times[d_b]}\), so \(d_a\) and \(d_b\) are the lengths of feature vectors.
There exists a natural bijective correspondence between the identifier sets, i.e. each identifier in one set is related to exactly one identifier in the other set. We express this correspondence by the bijective matching \(M \subseteq \calU\times\calV\).
Let $p_{\ul{X}\ul{Y}}$ be the density of jointly gaussian random variables $\ul{X} \in \mathbb{R}^{d_a}$ and $\ul{Y} \in \mathbb{R}^{d_b}$ such that $(\ul{X},\ul{Y}) \sim \mathcal{N}\bpth{\ul{\mu},\dul{\Sigma}}$.
The density $p_{\dul{A}\dul{B}|M}$ is defined as follows.
For each $(u,v) \in M$, $(\dul{A}_{u*},\dul{B}_{v*}) \sim p_{\ul{X}\ul{Y}}$ and these $n$ random variables are independent:
\[
p_{\dul{A}\dul{B}|M}(\dul{a},\dul{b}|m) = \prod_{(u,v) \in m} p_{\ul{X}\ul{Y}}(\dul{a}_{u*},\dul{b}_{v*}).
\]
The matching $M$ is uniformly distributed over the $n!$ bijective matchings between $\calU$ and $\calV$.
The database alignment problem is to recover $M$ from $\dul{A}\dul{B}$, given knowledge of $p_{\ul{X}\ul{Y}}$.
Observe that the rows of $\dul{A}$ are i.i.d. and that $\dul{A}$ is independent of $M$.
The same is true for $\dul{B}$.
In other words, by examining one database, an observer learns nothing about $M$.
A pair of databases are illustrated in Figure \ref{fig}.
\paragraph{Canonical form of covariance}
We write $\ul{\mu} = \crocVec{\ul{\mu}_a}{\ul{\mu}_b}$ and \(\dul{\Sigma} = \crocMat{\dul{\Sigma}_{\textrm{a}}}{\dul{\Sigma}_{\textrm{ab}}}{\dul{\Sigma}_{\textrm{ab}}^\top}{\dul{\Sigma}_{\textrm{b}}}\), so $\dul{A}_{u*} \sim \mathcal{N}(\ul{\mu}_a,\dul{\Sigma}_{\textrm{a}})$ for each $u \in \calU$ and $\dul{B}_{v*} \sim \mathcal{N}(\ul{\mu}_b,\dul{\Sigma}_{\textrm{b}})$ for each $v \in \calV$.
Let $d_a'$ be the dimension of the support of $\ul{X}$, i.e. the rank of $\dul{\Sigma}_{\textrm{a}}$.
Let $\phi : \mathbb{R}^{[d_a]} \to \mathbb{R}^{d_a'}$ be an affine transformation that is injective on the support of $\ul{X}$.
If we apply $\phi$ to each row of $\dul{A}$, which can be done without knowledge of $M$, we obtain an equivalent database alignment problem.
Similarly, the database $\dul{B}$ can be transformed to obtain an equivalent problem.
For any gaussian database alignment problem, there is an equivalent problem with $\ul{\mu} = \ul{0}$ and
\[
\dul{\Sigma} = \crocMat{\dul{I}^{[d]}}{\operatorname{diag}(\ul{\rho})}{\operatorname{diag}(\ul{\rho})}{\dul{I}^{[d]}} = \bigoplus_{i \in [d]} \crocMat{1}{\rho_i}{\rho_i}{1}
\]
where $d = \min(d_a,d_b)$.
Thus the correlation structure of $(\ul{X},\ul{Y})$ is completely summarized by the vector $\ul{\rho} \in \mathbb{R}^d$.
The explicit transformations that put $\dul{\Sigma}$ into this form are described in \hyperref[sec:featureTransformation]{Appendix \ref*{sec:featureTransformation}}.
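To make the canonical model concrete, it is straightforward to simulate: draw a uniformly random matching and then, coordinate by coordinate, a correlated Gaussian pair for each matched row. The following sketch is ours and purely illustrative (the paper contains no code; the function name and the use of \texttt{numpy} are our own choices):
\begin{verbatim}
# Sampling a synthetic instance of the canonical model (illustrative only).
import numpy as np

def sample_databases(n, rho, seed=0):
    rng = np.random.default_rng(seed)
    rho = np.asarray(rho, dtype=float)    # canonical correlations, |rho_i| < 1
    perm = rng.permutation(n)             # the matching M: u in U <-> perm[u] in V
    X = rng.standard_normal((n, rho.size))
    # Y_i = rho_i * X_i + sqrt(1 - rho_i^2) * Z_i has the canonical covariance
    Y = rho * X + np.sqrt(1.0 - rho**2) * rng.standard_normal(X.shape)
    B = np.empty_like(Y)
    B[perm] = Y                           # rows of B are listed in V's order
    return X, B, perm                     # X plays the role of A
\end{verbatim}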
\bcomment{
We say two identifiers are not related only if their feature vectors are independent, i.e. \((u,v)\notin M \implies \ul{A}_{u*}\independent \ul{B}_{v*}\).
Given any pair of related identifiers \((u,v)\in M\), the pair of feature vectors \((\ul{A}_{u*},\ul{B}_{v*})\) is jointly Gaussian and distributed as \(\mathcal{N}\bpth{\ul{\mu},\dul{\Sigma}}\) where \(\ul{\mu}\) and \(\dul{\Sigma}\) do not depend on the pair \((u,v)\in M\).
We write \(\dul{\Sigma} = \pthMat{\dul{\Sigma}_{\textrm{a}}}%{\dul{\Sigma}_{\textrm{I}}}{\dul{\Sigma}_{\textrm{ab}}}{\dul{\Sigma}_{\textrm{ab}}^\top}{\dul{\Sigma}_{\textrm{b}}}\) and assume both \(\dul{\Sigma}_{\textrm{a}}}%{\dul{\Sigma}_{\textrm{I}}\in\mathbb{R}^{d_1\times d_1}\) and \(\dul{\Sigma}_{\textrm{b}}\in\mathbb{R}^{d_2\times d_2}\) are invertible.
As shown in Appendix \ref{sec:featureTransformation}, it is possible to transform the feature vectors to get a special case of the problem setting with some nice properties. Therefore assuming these properties does not result in a loss of generality.
Let feature vectors in the two datasets have equal length, i.e. \(d_1=d_2=d\). Furthermore let features have zero mean and unit variance, i.e. \(\ul{\mu}=\ul{0}\) and \(\dul{\Sigma}_{\textrm{a}}}%{\dul{\Sigma}_{\textrm{I}}=\dul{\Sigma}_{\textrm{b}}=\dul{I}^d\). Finally let features pairs at each index be independent, i.e. let \(\Sigma_{12}=\operatorname{diag}(\ul{\rho})\) be a diagonal matrix with \(-1<\rho_i<1\) for all \(i\in[d]\).
}
\begin{figure}
\centering
\begin{tikzpicture}[
scale = 1/2,
text height = 1.5ex,
text depth =.1ex,
b/.style={very thick}
]
\draw (0,7) node {$M$};
\draw (-6,7) node {$\dul{A}$};
\draw (6,7) node {$\dul{B}$};
\draw[dotted] (-2,6.5) -- (2,6.5) -- (2,1.5) -- (-2,1.5) -- cycle;
\draw[dotted] (-8.2,6.5) -- (-3,6.5) -- (-3,1.5) -- (-8.2,1.5) -- cycle;
\draw[dotted] ( 8.2,6.5) -- ( 3,6.5) -- ( 3,1.5) -- ( 8.2,1.5) -- cycle;
\draw (-2.5,7) node {$\calU$};
\draw ( 2.5,7) node {$\calV$};
\draw (-2.5,6) node {$u_1$};
\draw (-2.5,5) node {$u_2$};
\draw (-2.5,4) node {$u_3$};
\draw (-2.5,3) node {$\vdots$};
\draw (-2.5,2) node {$u_n$};
\draw (2.5,6) node {$v_1$};
\draw (2.5,5) node {$v_2$};
\draw (2.5,4) node {$v_3$};
\draw (2.5,3) node {$\vdots$};
\draw (2.5,2) node {$v_n$};
\draw[b] (-2,6) -- (2,4);
\draw[b] (-2,5) -- (2,6);
\draw[b] (-2,4) -- (2,3.1);
\draw[b] (-2,3.1) -- (2,5);
\draw[b] (-2,2.9) -- (2,2);
\draw[b] (-2,2) -- (2,2.9);
\draw (-6,6) node {$(0.427,3.003)$};
\draw (-6,5) node {$(0.558,4.377)$};
\draw (-6,4) node {$(0.353,8.365)$};
\draw (-6,2) node {$(0.591,1.934)$};
\draw (6,6) node {$(0.562,43.60)$};
\draw (6,5) node {$(0.398,51.38)$};
\draw (6,4) node {$(0.421,30.37)$};
\draw (6,2) node {$(0.496,61.37)$};
\draw[b,->] (-3,6) -- (-4,6);
\draw[b,->] (-3,5) -- (-4,5);
\draw[b,->] (-3,4) -- (-4,4);
\draw[b,->] (-3,2) -- (-4,2);
\draw[b,->] (3,6) -- (4,6);
\draw[b,->] (3,5) -- (4,5);
\draw[b,->] (3,4) -- (4,4);
\draw[b,->] (3,2) -- (4,2);
\end{tikzpicture}
\caption{
Databases $\dul{A}$ and $\dul{B}$ with $d_a = d_b = 2$ and a matching $M$ between their user identifier sets.
}
\label{fig}
\end{figure}
\subsection{Correlation measures}
\label{section:corr}
Let \(I_{\ul{X}\ul{Y}} \triangleq I(\dul{A}_{u*};\dul{B}_{v*})\), for \((u,v)\in M\), denote the mutual information between the feature vectors of any pair of matched identifiers.
Then
\begin{align*}
I_{\ul{X}\ul{Y}}
&= -\frac{1}{2}\log\frac{\det\bpth{\dul{\Sigma}}}{\det\bpth{\dul{\Sigma}_{\textrm{a}}}\cdot\det\bpth{\dul{\Sigma}_{\textrm{b}}}}.
\end{align*}
Under the canonical formulation where \(\dul{\Sigma}_{\textrm{a}}=\dul{\Sigma}_{\textrm{b}}=\dul{I}^d\) and \(\dul{\Sigma}_{\textrm{ab}}=\operatorname{diag}(\ul{\rho})\) this becomes
\begin{align*}
I_{\ul{X}\ul{Y}} = -\frac{1}{2}\sum_{i\in[d]}\log\pth{1-\rho_i^2}.
\end{align*}
Given any \((u,v)\in M\) and \((\ul{X},\ul{Y})=(\dul{A}_{u*}^\top,\dul{B}_{v*}^\top)\), define
\begin{align*}
\sigma_{\ul{X}\ul{Y}}^2 \triangleq \operatorname{Var}\pth{\log \frac{p_{\ul{X}\ul{Y}}(\ul{X},\ul{Y})}{p_{\ul{X}}(\ul{X})p_{\ul{Y}}(\ul{Y})}}.
\end{align*}
Then \(\sigma_{\ul{X}\ul{Y}}^2 = \operatorname{tr}\bpth{\dul{\Sigma}_{\textrm{a}}^{-1}\dul{\Sigma}_{\textrm{ab}}\dul{\Sigma}_{\textrm{b}}^{-1}\dul{\Sigma}_{\textrm{ab}}^\top}\).
Furthermore under the canonical formulation where \(\dul{\Sigma}_{\textrm{a}}=\dul{\Sigma}_{\textrm{b}}=\dul{I}^d\) and \(\dul{\Sigma}_{\textrm{ab}}=\operatorname{diag}(\ul{\rho})\) this simplifies to \(\sigma_{\ul{X}\ul{Y}}^2 = \sum \rho_i^2\).
These calculations are made explicit in our supplementary material.
Note that \(\sigma_{\ul{X}\ul{Y}}\) is upper bounded by \(\sqrt{2I_{\ul{X}\ul{Y}}}\). This can easily be seen in the canonical formulation, where \(\sigma_{\ul{X}\ul{Y}}^2 = \sum \rho_i^2 \leq -\sum\log(1-\rho_i^2) = 2 I_{\ul{X}\ul{Y}}\).
\section{Results}
Our results identify conditions on \(I_{\ul{X}\ul{Y}}\) and \(\sigma_{\ul{X}\ul{Y}}\), as defined in Section~\ref{section:corr}.
\paragraph{MAP estimation}
The algorithm considers all possible alignments between the two sets and chooses the most likely one.
The log-likelihood of an alignment is, by the independence across matched feature pairs, equal to the sum of the log-likelihoods of the aligned feature pairs. MAP estimation can then be implemented by computing the joint likelihood of each feature pair in \(\mathcal{O}(n^2d)\)-time and computing the maximum weight matching between databases in \(\mathcal{O}(n^3)\)-time using the Hungarian algorithm.
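As an illustration of this procedure (our own sketch, not code from the paper; all names are ours), in the canonical form the pairwise log-likelihood ratios have a simple closed form and the maximum weight matching can be delegated to an off-the-shelf assignment solver:
\begin{verbatim}
# Sketch of the MAP estimator in the canonical model (illustrative only).
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_align(A, B, rho):
    """A, B: (n, d) databases in canonical form; rho: (d,) correlations."""
    rho = np.asarray(rho, dtype=float)
    w = rho / (1.0 - rho**2)              # cross-term weights
    v = rho**2 / (2.0 * (1.0 - rho**2))   # quadratic-term weights
    # score[u, v] = log p(A_u, B_v) - log p(A_u) - log p(B_v), up to a constant
    score = (A * w) @ B.T - (A**2 @ v)[:, None] - (B**2 @ v)[None, :]
    row, col = linear_sum_assignment(-score)   # maximum weight matching
    return col                                 # u in U is matched to col[u] in V
\end{verbatim}
With $n$ users and $d$ features, the score matrix costs $\mathcal{O}(n^2d)$ and the assignment step $\mathcal{O}(n^3)$, matching the complexity stated above.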
\newcommand{\thmAchMAP}{
\textbf{\upshape(Achievability)} If the mutual information between feature pairs satisfies \(I_{\ul{X}\ul{Y}} \geq 2\log n + \omega(1)\), then the MAP estimator returns the proper alignment with probability \(1-o(1)\).
}
\begin{theorem}
\label{thm:achMAP}
\thmAchMAP
\end{theorem}
\newcommand{\thmConMAP}{
\textbf{\upshape(Converse)} Let \(d\in\mathbb{N}\) such that \(d\geq\omega(1)\). Furthermore let \(\dul{\Sigma}_{\textrm{a}}=\dul{\Sigma}_{\textrm{b}}=\dul{I}^d\) and \(\dul{\Sigma}_{\textrm{ab}}=\rho \dul{I}\). If \(I_{\ul{X}\ul{Y}} \leq 2\log n(1 - \Omega(1))\), then for any algorithm, the probability of returning the proper alignment is \(o(1)\).
}
\begin{theorem}
\label{thm:conMAP}
\thmConMAP
\end{theorem}
\paragraph{Binary hypothesis testing}
The algorithm checks every possible pair of identifiers and uses a threshold-based method to decide whether to match the pair or not.
This can be done in \(\mathcal{O}(n^2d)\)-time, which is the complexity of computing joint likelihoods for each feature pair.
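A corresponding sketch of the test (again ours, under the canonical-form assumptions; the function name is our own) thresholds the same pairwise log-likelihood ratios; unlike the MAP estimate, the declared set of matches need not be a bijective matching:
\begin{verbatim}
# Sketch of the pairwise binary hypothesis test (illustrative only).
import numpy as np

def bht_matches(A, B, rho, tau):
    rho = np.asarray(rho, dtype=float)
    w = rho / (1.0 - rho**2)
    v = rho**2 / (2.0 * (1.0 - rho**2))
    i_xy = -0.5 * np.log(1.0 - rho**2).sum()   # mutual information I_XY
    llr = i_xy + (A * w) @ B.T - (A**2 @ v)[:, None] - (B**2 @ v)[None, :]
    return np.argwhere(llr >= tau)             # declared (u, v) matches
\end{verbatim}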
\newcommand{\thmAchLRT}{
\textbf{\upshape(Achievability)} If
\begin{align*}
I_{\ul{X}\ul{Y}} \geq \sigma_{\ul{X}\ul{Y}}\cdot\sqrt{\frac{n}{\varepsilon_{FN}}} + \log\frac{n^2}{\varepsilon_{FP}},
\end{align*}
then, choosing the threshold such that \(\log(n^2/\varepsilon_{FP})\leq \tau \leq I_{\ul{X}\ul{Y}}-\sigma_{\ul{X}\ul{Y}}\sqrt{n/\varepsilon_{FN}}\), the binary hypothesis test gives no more than \(\varepsilon_{FN}\) false negatives and \(\varepsilon_{FP}\) false positives in expectation.
}
\begin{theorem}
\label{thm:achLRT}
\thmAchLRT
\end{theorem}
It follows that the following regimes are achievable:
\begin{align*}
\bullet\,\, I_{\ul{X}\ul{Y}}&\geq \log(n)+\omega(1) & \varepsilon_{FN}&\leq o(n) & \varepsilon_{FP} &\leq o(n)\\
\bullet\,\, I_{\ul{X}\ul{Y}}&\geq 2\log(n)+\omega(1) & \varepsilon_{FN}&\leq o(n) & \varepsilon_{FP} &\leq o(1)\
\end{align*}
The next theorem holds for databases with any distribution of feature pairs, i.e. not only Gaussians.
\newcommand{\thmConLRT}{
\textbf{\upshape(Converse)} For any binary hypothesis test, the expected number of false negatives \(\varepsilon_{FN}\) and false positives \(\varepsilon_{FP}\) is lower bounded as
\begin{align*}
\varepsilon_{FN}+\varepsilon_{FP} \geq \frac{n}{2}\pth{1 - \frac{I_{\ul{X}\ul{Y}}}{\log n}}\pth{1-\mathcal{O}\pth{\frac{1}{\log n}}}.
\end{align*}
}
\begin{theorem}
\label{thm:conLRT}
\thmConLRT
\end{theorem}
It follows that, if \(I_{\ul{X}\ul{Y}} \leq \log n\bpth{1 - \Omega(1)}\), then any binary hypothesis test has expected number of errors \(\varepsilon_{FN}+\varepsilon_{FP} \geq \Omega(n)\).
\TODO{
\paragraph{Summary}
In the table, \(\varepsilon^{-}\) and \(\varepsilon^{+}\) denote the expected number of false negatives and false positives, respectively. The MAP algorithm gives us a size \(n\) matching, so for every false negative we get a false positive and thus \(\varepsilon^{+}=\varepsilon^{-}\) which we will simply denote by \(\varepsilon\).
In the cannonical setting with \(\dul{\Sigma}_{\textrm{ab}} = \rho\dul{I}\) and \(d\geq \omega(1)\), the achievable and converse regions for BHT and MAP are as follows.
\begin{center}
\begin{tabular}{ r | c || c | c}
\(\frac{I_{\ul{X}\ul{Y}}}{\log n}\) & \multicolumn{3}{l}{\qquad\qquad 1 \,\,\,\,\qquad\qquad 2}\\
\hline
\multirow{2}{*}{BHT} & \(\varepsilon^{-}+\varepsilon^{+}\) & \(\varepsilon^{-} \leq o(n)\) & \(\varepsilon^{-} \leq o(n)\) \\
& \(\geq \Omega(n)\) & \(\varepsilon^{+} \leq o(n)\) & \(\varepsilon^{+} \leq o(1)\)\\
\hline
MAP & \multicolumn{2}{c||}{\(\varepsilon\geq \Omega(1)\)} & \(\varepsilon\leq o(1)\)
\end{tabular}
\end{center}
Notice the tight gap between the converse and achievability regions, corresponding to regimes \(\log n(1-o(1))\leq I_{\ul{X}\ul{Y}}\leq \log n +\mathcal{O}(1)\) and \(2\log n - \mathcal{O}(1) \leq I_{\ul{X}\ul{Y}} \leq 2\log n + \mathcal{O}(1)\).
}
\section{Introduction}
\label{S1} \vspace{-4pt}
Planar random motions at finite velocity have been studied
by many authors, with different approaches.
Due to the difficulty of considering a continuous spectrum of infinite possible directions
of random motions on the plane, many studies have been devoted to the analysis
of cyclical planar random motions with a finite number of possible directions (see for example \cite{dic} and \cite{enzo}).
Planar random motion at finite velocity with an infinite number of uniformly distributed orientations of
displacements has been studied by several researchers over the years. In particular, Orsingher and Kolesnik in \cite{kol} studied a planar random
motion with an infinite number of directions whose probability law is governed by the
damped wave equation. In this work the number of changes of direction
is given by a homogeneous Poisson process.
In the framework of fractional point processes, a number of papers have been devoted to the analysis of different fractional Poisson-type processes (see for example \cite{Beghin, Laskin, g4}). An interesting application of the fractional Poisson distribution (introduced in \cite{Beghin})
\begin{equation}
P\{N_{\alpha}(t)= k\}= \frac{1}{E_{\alpha,1}(\lambda t)}\frac{(\lambda t)^k}{\Gamma(\alpha k+1)},
\end{equation}
was discussed in recent papers about random flights with Dirichlet distributed displacements (see \cite{ale3, Ale, SPA}). In these papers the fractional Poisson distributions play a key role in finding, in explicit form, the unconditional probability law of random flights in dimension $d>2$ and the corresponding governing partial differential equations.
In this paper we introduce a generalized fractional Poisson distribution with time-dependent rate and we discuss some of its main properties. Then, we study planar random motions with an infinite number of possible directions, where the number of changes of direction is given by the inhomogeneous fractional Poisson process.
We will show that the explicit probability law can be expressed in terms of Mittag-Leffler functions. Finally we consider planar random motions with finite
velocity obtained by the projection of random flights with Dirichlet displacements studied in \cite{Ale} onto the plane.
\subsection{Non-homogeneous fractional Poisson distributions}
\vspace{-4pt}
Here we introduce the following generalization of the fractional Poisson distribution, given by
\begin{equation}\label{uno}
P\{N_{\alpha}(t)= n\}= \frac{1}{E_{\alpha,1}(\Lambda(t))}\frac{\left(\Lambda(t)\right)^n}{\Gamma(\alpha n+1)},\quad n \geq 0
\end{equation}
where $\alpha\in (0,1]$,
\begin{equation}
E_{\alpha,1}(t)= \sum_{k=0}^{\infty}\frac{t^k}{\Gamma(\alpha k+1)},
\end{equation}
is the classical one-parameter Mittag-Leffler function and
\begin{equation}
\Lambda(t)= \int_0^t\lambda(s)ds.
\end{equation}
It is immediate to check that, for $\alpha = 1$, the distribution
(\ref{uno})
coincides with the distribution of a non-homogeneous Poisson process with time-dependent
rate $\lambda(t)$. Moreover, for $\lambda(t)= \lambda= const.$ and $\alpha \in (0,1)$, this distribution coincides with one of the
alternative fractional Poisson processes discussed by Beghin and Orsingher in \cite{Beghin}.
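As a quick numerical illustration (our own sketch, not part of the original analysis; the function name is ours), the probabilities in (\ref{uno}) can be evaluated by truncating the Mittag-Leffler series; for $\alpha=1$ the result reduces to a Poisson law with mean $\Lambda(t)$:
\begin{verbatim}
# Truncated-series evaluation of the distribution (uno) (illustrative only).
import numpy as np
from scipy.special import gammaln

def frac_poisson_pmf(n_max, Lam, alpha):
    n = np.arange(n_max + 1)
    log_terms = n * np.log(Lam) - gammaln(alpha * n + 1.0)
    terms = np.exp(log_terms)
    return terms / terms.sum()   # the sum approximates E_{alpha,1}(Lam)

pmf = frac_poisson_pmf(200, Lam=3.0, alpha=0.7)
print(pmf[:5], pmf.sum())        # first probabilities; total is ~1
\end{verbatim}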
We now discuss the meaning of this generalization of the Poisson process in the theory of counting processes.
First of all, we should remark on the relation of the distribution
(\ref{uno}) with weighted Poisson processes.
We recall that the probability mass function of a weighted
Poisson process is of the form (see for example \cite{Bala}
and references therein)
\begin{equation}
P\{N^w(t)=n\}= \frac{w(n)p(n)}{E[w(N)]},\quad n\geq 0,
\end{equation}
where $N$ is a random variable with a Poisson distribution $p(n)$,
$w(\cdot)$ is a non-negative weight function with non-zero, finite
expectation, i.e.
\begin{equation}
0<E[w(N)]= \sum_{n=0}^{\infty}w(n) p(n) <\infty.
\end{equation}
Then, we recognize that the distribution (\ref{uno}) is a weighted
Poisson process with time-dependent rate $\lambda(t)$ and weights
$w(n)= n!/\Gamma(\alpha n+1)$.
We observe that the probability generating function of the r.v. with distribution (\ref{uno}) reads
\begin{equation}
G(u;t)= \sum_{n=0}^{\infty}u^n P\{N_{\alpha}(t)= n\}=
\frac{E_{\alpha,1}(u\Lambda(t))}{E_{\alpha,1}(\Lambda(t))}
\end{equation}
and satisfies the following fractional differential equation
\begin{equation}
\frac{d^{\alpha}}{du^{\alpha}}G(u^{\alpha};t)=\Lambda(t)G(u^{\alpha};t),
\end{equation}
where
\begin{equation}
\frac{d^{\alpha}f}{du^{\alpha}}=
\frac{1}{\Gamma(1-\alpha)}\int_0^{u}(u-s)^{-\alpha}
\frac{d}{ds}f(s) \, ds, \, u>0,
\end{equation}
is the Caputo fractional derivative of order $\alpha \in(0,1)$ (see
e.g. \cite{Pod}).
We finally recall that in \cite{noi}, the authors studied a state-dependent fractional Poisson process, namely $\widehat{N}(t)$, $t \ge 0$.
In view of the previous analysis, the non-homogeneous counterpart of this process
has univariate probabilities given by
\begin{equation}
\label{p2}
\Pr\{\widehat{N}(t)= j\}=\frac{\frac{(\Lambda(t))^j}{\Gamma(\alpha_j j+1)}
\frac{1}{E_{\alpha_j,1}(\Lambda(t))}}{\sum_{k=0}^{+\infty} \frac{(\Lambda(t))^k}{\Gamma(\alpha_k k+1)}
\frac{1}{E_{\alpha_k,1}(\Lambda(t))}},
\end{equation}
where $0<\alpha_j\leq 1$, for all $j \geq 0$. This distribution leads to an interesting form of counting process, where the fractional
weights depend on the state. The statistical applications of this approach should be the object of further research.
\section{Planar random motions with finite velocity and infinite directions: main results}
In this section we recall some results obtained by Kolesnik and Orsingher in \cite{kol} about planar random motion with finite velocity with an infinite number of directions.
In their model, the motion is described by a particle taking directions $\theta_j$, $j=1, 2, \dots$, uniformly
distributed in $[0,2\pi)$ at Poisson-paced times. The orientations $\theta_j$ and the governing Poisson process
$N(t)$, $t\geq 0$, are assumed to be independent.
The particle starts its motion from the origin of the plane at time $t=0$
and moves with constant velocity $c$. It changes direction
at random instants according to a Poisson process.
At these instants, the particle instantaneously takes a new direction $\theta$ with uniform distribution
in $[0,2\pi)$ independently of its previous deviation.
Therefore, after $N(t)=n$ changes of direction, the position $(X(t),Y(t))$ of the particle in the plane
is given by
\begin{eqnarray}
\label{plc}
X(t) = c\displaystyle \sum_{j=1}^{n+1}(s_j-s_{j-1})\cos \theta_j,\\
Y(t)= c\displaystyle \sum_{j=1}^{n+1}(s_j-s_{j-1})\sin \theta_j,\label{prc}
\end{eqnarray}
where $\theta_j$, $j= 1, \dots,n+1$, are independent random variables
uniformly distributed in $[0,2\pi)$, $s_j$ are the instants
at which the Poisson events occur, $s_0=0$ and $s_{n+1}= t$. By means of (\ref{plc}) and (\ref{prc}),
the conditional characteristic function of the random vector $(X(t),Y(t))$ can be written as follows
\begin{eqnarray}\label{plc1}
\nonumber &E\{e^{i\alpha X(t)+i\beta Y(t)}|N(t)=n\}\\
&=\frac{2^{n/2}\Gamma(\frac{n}{2}+1)}{\left(ct\sqrt{\alpha^2+\beta^2}\right)^{n/2}}
J_{\frac{n}{2}}\left(ct\sqrt{\alpha^2+\beta^2}\right), \\
\nonumber &n\geq 1, \: (\alpha,\beta) \in R^2,
\end{eqnarray}
Then by inverting (\ref{plc1}), the following conditional distribution can be found (see formula (11) of \cite{kol})
\begin{eqnarray}
\label{pt1}
&P\{X(t)\in dx, Y(t)\in dy|N(t)=n \}\\
\nonumber &=\frac{n \left(c^2t^2-x^2-y^2\right)^{\frac{n}{2}-1}}{2\pi(ct)^n} dx \, dy,
\end{eqnarray}
for $x^2+y^2<c^2t^2$, $n\geq 1$.
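This conditional law is easy to probe by simulation: given $N(t)=n$, the switching instants are distributed as uniform order statistics on $(0,t)$ and the $n+1$ directions are i.i.d. uniform on $[0,2\pi)$. The following minimal sketch is ours and is meant only as an illustration:
\begin{verbatim}
# Monte Carlo check of the conditional law (illustrative only).
import numpy as np

def sample_position(n, t=1.0, c=1.0, size=10000, seed=0):
    rng = np.random.default_rng(seed)
    s = np.sort(rng.uniform(0.0, t, size=(size, n)), axis=1)
    s = np.hstack([np.zeros((size, 1)), s, np.full((size, 1), t)])
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(size, n + 1))
    dt = np.diff(s, axis=1)
    x = c * (dt * np.cos(theta)).sum(axis=1)
    y = c * (dt * np.sin(theta)).sum(axis=1)
    return x, y

x, y = sample_position(n=3)
# Under the density above, E[X^2 + Y^2 | N(t)=n] = 2 (ct)^2 / (n + 2),
# i.e. 0.4 for n = 3 and c = t = 1.
print((x**2 + y**2).mean())
\end{verbatim}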
In the model of Kolesnik and Orsingher, the changes
of direction are driven by a
homogeneous Poisson process, such that the absolutely continuous
component of the unconditional distribution of $(X(t),Y(t))$ reads
\begin{eqnarray}
\nonumber &P\{X(t)\in dx, Y(t)\in dy\}\\
\nonumber &=\frac{\lambda}{2\pi c}
\frac{e^{-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-x^2-y^2}}}{\sqrt{c^2t^2-x^2-y^2}} dx \, dy,\label{pt}
\end{eqnarray}
for $x^2+y^2<c^2t^2$.
The singular component of $(X(t),Y(t))$ pertains to the probability of no
Poisson events. It is uniformly distributed on the circumference of radius $ct$
and has weight $e^{-\lambda t}$.
It has been proven that the density in (\ref{pt}) is a solution to the planar telegraph equation
(also equation of damped waves)
\begin{equation}
\frac{\partial^2 p}{\partial t^2}+2\lambda\frac{\partial p}{\partial t}
= c^2 \left\{\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right\}p.
\end{equation}
\section{Planar motions where the number of changes of direction is given by fractional Poisson distributions}
We now apply the family of fractional-type Poisson distributions (\ref{uno})
in order to obtain explicit probability laws of planar random motions
with an infinite number of possible directions. In this case we randomize the number of changes of direction by means of the general family of distributions
(\ref{uno}), depending on the real parameter $\alpha\in(0,1)$ and
the particular choice of the time-dependent rate $\lambda(t)$.
This means that we are able to construct a general family of
planar random motions that includes as a special case the one studied in
\cite{kol}. Moreover, as we will see, their explicit probability law can be expressed in terms of Mittag-Leffler functions, suggesting a possible relation with fractional hyperbolic partial differential equations.
\begin{theorem}\label{new}
The probability law
\begin{equation}
p(x,y,t)=\frac{P\{X(t)\in dx, Y(t)\in dy\}}{dx\, dy}
\end{equation}
of the random vector $(X(t),Y(t))$ in (\ref{plc})-(\ref{prc}), when the number of changes of
direction is given by the distribution (\ref{uno}),
has the following form
\begin{eqnarray}
\nonumber & p(x,y,t)=
p_{ac}(x,y,t)I_{C_{ct}}+p_s(x,y,t)I_{\partial C_{ct}}\\
&= \frac{1}{E_{\alpha,1}(\Lambda(t))}\frac{\Lambda(t)E_{\alpha,\alpha}\left(\frac{\Lambda(t)}{ct}
\sqrt{c^2t^2-(x^2+y^2)}\right)}{2\pi \alpha ct\sqrt{c^2t^2-(x^2+y^2)}}I_{C_{ct}} \nonumber\\
&+\frac{1}{E_{\alpha,1}(\Lambda(t))}I_{\partial C_{ct}},\quad \alpha \in(0,1]\label{bello}
\end{eqnarray}
where
\begin{equation}
C_{ct}=\{(x,y)\in R^2: x^2+y^2< c^2t^2\},
\end{equation}
$I_{C_{ct}}$ and $I_{\partial C_{ct}}$ are the indicator functions of the circle $C_{ct}$ and its boundary.
\end{theorem}
{\bf Proof:} The result stems from the fact that
\begin{eqnarray}
\nonumber p_{ac}(x,y,t)= \sum_{n=1}^{\infty}&
P\{X(t)\in dx, Y(t)\in dy| N_{\alpha}(t)= n\}\\
&\times P\{N_{\alpha}(t)=n \},
\end{eqnarray}
using the conditional distribution (\ref{pt1}) and the
non-homogeneous fractional Poisson distribution (\ref{uno})
with an arbitrary time-dependent rate $\lambda(t)$.\\
The singular component coincides with the probability of no changes
of direction according to the distribution (\ref{uno}) and is concentrated on the boundary $\partial C_{ct}$. On the other hand, it is simple to check that
\begin{equation}
\int\int_{C_{ct}} p_{ac}(x,y,t)dxdy= 1- \frac{1}{E_{\alpha,1}(\Lambda(t))}.
\end{equation}
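One way to verify this is to pass to polar coordinates, substitute $w=\sqrt{c^2t^2-x^2-y^2}$ and use the identity $\frac{d}{dz}E_{\alpha,1}(z)=\frac{1}{\alpha}E_{\alpha,\alpha}(z)$, which gives
\begin{eqnarray}
\nonumber \int\int_{C_{ct}} p_{ac}(x,y,t)dxdy &=& \frac{1}{E_{\alpha,1}(\Lambda(t))}\int_0^{ct}\frac{\Lambda(t)}{\alpha\, ct}\, E_{\alpha,\alpha}\left(\frac{\Lambda(t)}{ct}w\right)dw\\
\nonumber &=& \frac{E_{\alpha,1}(\Lambda(t))-1}{E_{\alpha,1}(\Lambda(t))}.
\end{eqnarray}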
\eop
An interesting example is given by the special case
in which $\Lambda(t)= \lambda t$. In this case the absolutely continuous component of the distribution has the following
simple form
\begin{equation}\label{due}
p_{ac}(x,y,t)= \frac{1}{E_{\alpha,1}(\lambda t)}
\frac{\lambda E_{\alpha,1}\left(\frac{\lambda}{c}
\sqrt{c^2t^2-(x^2+y^2)}\right)}{2\pi c\sqrt{c^2t^2-(x^2+y^2)}},
\end{equation}
that, for $\alpha = 1$, coincides with the distribution of planar
random motions discussed in \cite{kol}.
The projection of the planar random motion with distribution (\ref{bello}) onto the line can be calculated as follows
\begin{eqnarray}
\label{ciao} &p(x,t)=\int_{-\sqrt{c^2t^2-x^2}}^{\sqrt{c^2t^2-x^2}}p(x,y,t)dy\\
\nonumber &=\frac{1}{\sqrt{\pi}E_{\alpha,1}(\Lambda(t))}\sum_{k=0}^{\infty}\left(\frac{\Lambda(t)}{ct}\right)^k
\frac{\Gamma(\frac{k}{2}+1)}{\Gamma(\frac{k+1}{2})}\frac{(\sqrt{c^2t^2-x^2})^{k-1}}{\Gamma(\alpha k+1)}.
\end{eqnarray}
We remark that the zeroth term of the series in (\ref{ciao}) pertains to the projection of the singular component of the distribution
(\ref{bello}).
The function appearing in (\ref{ciao}) can be expressed in terms of the generalized Wright function (see \cite{Kil} and
references therein)
\begin{eqnarray}
\nonumber &_p\psi_q\bigg[t\bigg|
\begin{array}{cc}
(a_1,A_1),\dots,(a_p, A_p)\\
(b_1,B_1),\dots, (b_q, B_q)
\end{array}
\bigg]\\
\nonumber &=\displaystyle\sum_{k=0}^{\infty}\frac{\prod_{j=1}^p\Gamma(a_j+A_j k)}{\prod_{j=1}^q\Gamma(b_j+B_j k)}\frac{t^k}{k!},
\end{eqnarray}
where $a_j,b_j \in R$ and $A_j, B_j>0$, for all $j\in N$.
Hence, we can write the distribution (\ref{ciao}) in the more compact form
\begin{equation}
\nonumber p(x,t)= \frac{_2\psi_2\bigg[\frac{\Lambda(t)}{ct}\sqrt{c^2t^2-x^2}\bigg|
\begin{array}{cc}
(1,1),(1, \frac{1}{2})\\
(\frac{1}{2},\frac{1}{2}), (1, \alpha)
\end{array}
\bigg]}{\sqrt{\pi}E_{\alpha,1}(\Lambda(t))\sqrt{c^2t^2-x^2}}
\end{equation}
For $\alpha = 1$, from (\ref{ciao}), we have that
\begin{eqnarray}
\nonumber &p(x,t)= e^{-\Lambda(t)}\displaystyle\sum_{k=0}^{\infty}\left(\frac{\Lambda(t)}{2ct}\right)^k\frac{(\sqrt{c^2t^2-x^2})^{k-1}}{[\Gamma(\frac{k+1}{2})]^2} \\
\nonumber &=\displaystyle\sum_{k=0}^{\infty} \frac{(\Lambda(t))^k}{k!}e^{-\Lambda(t)}\frac{k!}{[\Gamma(\frac{k+1}{2})]^2}\frac{(\sqrt{c^2t^2-x^2})^{k-1}}{(2ct)^k}\\
\nonumber &= \displaystyle\sum_{k=0}^{\infty}P\{N_1(t)=k\}P\{X(t)\in dx| N_1(t)=k\},
\end{eqnarray}
where $N_1(t)$ has the non-homogeneous Poisson distribution (\ref{uno}) with $\alpha = 1$. In this way, we can infer the conditional
distribution of the motion performed by the projection on the line of the random planar motion previously considered.
Moreover, for $\alpha = 1$ and $\Lambda(t)= \lambda t$, we have that
\begin{equation}\label{sonin}
p(x,t)= e^{-\lambda t}\sum_{k=0}^{\infty}\left(\frac{\lambda}{2c}\right)^k
\frac{(\sqrt{c^2t^2-x^2})^{k-1}}{[\Gamma(\frac{k+1}{2})]^2},
\end{equation}
that coincides with formula (1.3) of \cite{ale2}, where the projection of planar random motions with an infinite number of directions and homogeneous-Poisson-driven changes of direction was studied. The function appearing in (\ref{sonin}) is known as the Sonine function.
The probability law (\ref{ciao}) describes a finite velocity motion on the line with random velocity. Indeed, in this case the distribution is completely concentrated in $|x|<ct$ because of the projection of the singular component of the planar distribution.
We finally consider the relation between fractional differential equations and the absolutely continuous component of the fractional telegraph process (\ref{due}). We observe that the function
\begin{eqnarray}
&f(x,y,t)= p_{ac}(x,y,t)E_{\alpha,1}(\lambda t)\\
\nonumber & =\frac{\lambda E_{\alpha,1}\left(\frac{\lambda}{c}
\sqrt{c^2t^2-(x^2+y^2)}\right)}{2\pi c\sqrt{c^2t^2-(x^2+y^2)}},
\end{eqnarray}
can be written in terms of the variable $w= \sqrt{c^2t^2-(x^2+y^2)}$.
Then, we have that $f(w)$ satisfies the following fractional differential equation
\begin{equation}
\frac{d^{\alpha}}{dw^{\alpha}}\left(w^{\alpha}f(w^{\alpha})\right)=
\frac{\lambda}{c}\left(w^{\alpha}f(w^{\alpha})\right),
\end{equation}
where the fractional derivative of order $\alpha \in(0,1]$ is in the Caputo sense. Indeed, we have that
\begin{equation}
w^{\alpha}f(w^{\alpha})= E_{\alpha,1}\left(\frac{\lambda}{c}w^{\alpha}\right),
\end{equation}
that is a well-known eigenfunction of the Caputo fractional derivative.
\section{Planar motions with random velocities where the number of changes of direction is given by fractional Poisson
distributions}
In this section we consider a planar random motion with random velocities. Our construction is based on the marginal distributions
of the projection of random flights with Dirichlet displacements in $R^d$ onto $R^2$ considered by De Gregorio and Orsingher in \cite{Ale}.
In that paper the authors considered random motions at finite velocity in $R^d$, with infinitely many possible directions uniformly distributed
on the hypersphere of unitary radius and changing directions at Poisson paced times. Two different Dirichlet distributions of the displacements
were considered and the explicit form of marginal distributions of the random flights were found.
Here we consider planar motions with random velocities, obtained from the projection of the random flights $\mathbf{X}_{d}(t)$ and $\mathbf{Y}_{d}(t)$, $t>0$, studied in \cite{Ale}
onto the plane. We recall from \cite{Ale} (Theorem 4, pag.695) that the marginal distributions of the projections of the processes
onto
$R^2$ are given by
\begin{eqnarray}
\nonumber &f^d_{\mathbf{X}_2}(\mathbf{x}_2,t;n)= \frac{\Gamma\left(\frac{n+1}{2}(d-1)+
\frac{1}{2}\right)}{\Gamma\left(\frac{(n+1)}{2}(d-1)-\frac{1}{2}\right)}
\frac{(c^2t^2-\|\mathbf{x}_2\|^2)^{\frac{n+1}{2}(d-1)-\frac{3}{2}}}{\pi(ct)^{(n+1)(d-1)-1}},\\
\nonumber &f^d_{\mathbf{Y}_2}(\mathbf{y}_2,t;n)= \frac{\Gamma\left((n+1)(\frac{d}{2}-1)
+1\right)}{\Gamma\left((n+1)(\frac{d}{2}-1)\right)}
\frac{(c^2t^2-\|\mathbf{y}_2\|^2)^{(n+1)(\frac{d}{2}-1)-1}}{\pi(ct)^{2(n+1)(\frac{d}{2}-1)}},
\end{eqnarray}
with $\|\mathbf{x}_2\|<ct$ and $\|\mathbf{y}_2\|<ct$. Here we construct exact probability distributions of planar motions
with random velocities, randomizing the number of changes of direction with the distribution (\ref{uno}), with a suitable choice
of the parameter $\alpha$ depending on the dimension $d$ of the original space.
Let us consider in detail, for the sake of clarity, the planar motion obtained by the projection of the random flight $\mathbf{Y}_{d}(t)$
of \cite{Ale}.
We randomize the number of changes of direction by means of the following adaptation of the distribution (\ref{uno})
\begin{equation}
P\{N_{d}(t)= n\}= \frac{1}{E_{\frac{d}{2}-1,\frac{d}{2}}(\Lambda(t))}\frac{\left(\Lambda(t)\right)^n}{\Gamma((n+1)(\frac{d}{2}-1)+1)},\nonumber
\end{equation}
where $n\geq 0$ and $d$ is the dimension of the original space of the random flight that we are projecting onto the plane.
Then, we have that the unconditional probability law of the planar motion can be obtained as follows
\begin{eqnarray}
\nonumber &p(\mathbf{y}_2,t)= \displaystyle \sum_{n=0}^{\infty}
P\{Y_1(t)\in dy_1, Y_2(t)\in dy_2| N_{d}(t)= n\}\\
\nonumber &\times P\{N_{d}(t)=n \}\\
\nonumber & = \frac{(c^2t^2-\|\mathbf{y}_2\|^2)^{\frac{d}{2}-2}}{\pi (ct)^{d-2}} \frac{E_{\frac{d}{2}-1,\frac{d}{2}-1}\left(\frac{\Lambda(t)}{(ct)^{d-2}}(c^2t^2-\|\mathbf{y}_2\|^2)^{\frac{d}{2}-1}\right)}{E_{\frac{d}{2}-1,\frac{d}{2}}(\Lambda(t))}.
\end{eqnarray}
An interesting case is for $d= 4$, where we have that
\begin{equation}\label{bell}
p(\mathbf{y}_2,t) = \frac{\Lambda(t)}{\pi(ct)^2}\frac{\exp\bigg\{\frac{\Lambda(t)}{c^2t^2}(c^2t^2-\|\mathbf{y}_2\|^2)\bigg\}}{e^{\Lambda(t)}-1}.
\end{equation}
In equation (\ref{bell}) we used the fact that
\begin{eqnarray}
\nonumber & E_{1,2}(t)= \frac{e^t-1}{t}\\
\nonumber & E_{1,1}(t)= e^t.
\end{eqnarray}
A similar derivation of the explicit probability law can be given for the other case considered in \cite{Ale}.
\section{The Chow-Robbins game}
The following game was introduced by Yuan-Shih~Chow and Herbert Robbins \cite{CR} in 1964: We toss a coin repeatedly, and stop whenever we want. Our payoff is the proportion of heads up to that point, and we assume that we want to maximize the expected payoff.
Basic properties of this game, like the fact that there is an optimal strategy that stops with probability 1, were established in \cite{CR}.
Precise asymptotic results were obtained by Aryeh Dvoretzky \cite{D} and Larry Shepp \cite{S}.
In particular Shepp showed that for the optimal strategy, the proportion of heads required for stopping after $n$ coin tosses is asymptotically $$\frac12 + \frac{0.41996\dots}{\sqrt{n}},$$ where the constant is the root of a certain integral equation.
But as was pointed out more recently by Luis Medina and Doron Zeilberger \cite{MZ}, for a number of positions early in the game the optimal decisions were still not known rigorously.
Let $V(a,n)$ be the expected payoff under optimal play from position $(a, n)$, by which we mean $a$ heads out of $n$ coin flips. The game is suitable for computer analysis, but there is a fundamental problem in that it seems one has to do ``backward induction from infinity'' in order to determine $V(a,n)$. Clearly \begin{equation} \label{rec} V(a,n) = \max\left(\frac an, \frac{V(a, n+1) + V(a+1,n+1)}2\right),\end{equation} but the ``base case'' is at infinity.
\section{Lower bound on $V(a,n)$}
In position $(a,n)$ we can guarantee payoff $a/n$ by stopping. Moreover, if $a/n<1/2$, then by the recurrence of simple random walk on $\mathbb{Z}$, we can wait until the proportion of heads is at least $1/2$. Therefore \begin{equation} \label{lower} V(a,n) \geq \max\left(\frac{a}{n},\frac12\right).\end{equation}
We can recursively establish better lower bounds by starting from the inequality \eqref{lower} at a given ``horizon'', and then working our way backwards using \eqref{rec}. An obvious approach is letting the horizon consist of all positions with $n=N$ for some fixed $N$. In practice it is more efficient to use \eqref{rec} only for positions where in addition $a\approx n/2$, say when $\left| a - n/2\right| \leq c\sqrt{N}$ for some suitable constant $c$, and to resort to \eqref{lower} outside that range. This allows a greater value of $N$ for given computational resources.
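For illustration, the following sketch (ours, not the program behind Table~\ref{theTable}; floating point rounding is not controlled here) implements this backward induction in its plain form, without the restriction to $a\approx n/2$:
\begin{verbatim}
# Lower bounds on V(a, n) by backward induction from a horizon n = N
# initialised with max(a/n, 1/2).
def lower_bound(a, n, N):
    V = [max(k / N, 0.5) for k in range(N + 1)]          # values at the horizon
    for m in range(N - 1, n - 1, -1):                    # m = N-1, ..., n
        V = [max(k / m, 0.5 * (V[k] + V[k + 1])) for k in range(m + 1)]
    return V[a]

# Lower bound on V(2, 3); with a large enough horizon it exceeds 2/3,
# showing that continuing is better with 2 heads versus 1 tail.
print(lower_bound(2, 3, N=3000))
\end{verbatim}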
If in this way we find that $V(a,n) > a/n$, then in position $(a,n)$, continuing is better than stopping. For instance it is straightforward to check (see the discussion in \cite{MZ}) that $V(2,3)>2/3$, from which it follows that with 2 heads versus 1 tails, we should continue.
The third column of Table~\ref{theTable} (in the Appendix) shows positions for which we have determined that continuing is better than stopping. These results are based on a calculation with a horizon stretching out to $n=10^7$. They agree with \cite[Section 5]{MZ} with one exception: Medina and Zeilberger conjecture based on calculations with a horizon of 50000 that, in the notation of \cite{D, MZ, S}, $\beta_{127} = 9$, meaning that the difference (number of heads minus number of tails) required in order to stop after 127 flips is 9. Accordingly they suggest stopping with 68--59, but our computation shows that continuing is slightly better.
On the other hand, in order to conclude that stopping is ever optimal, we need a nontrivial upper bound on $V(a,n)$. Clearly such an upper bound cannot come from \eqref{rec} alone, since that equation is satisfied by $V(a,n)\equiv 1$.
\section{Upper bound on $V(a,n)$}
We let $\tilde{V}(a,n)$ be the expected payoff from position $(a,n)$ under \emph{infinite clairvoyance}, that is, assuming we have complete knowledge of the results of the future coin flips and stop when we reach the maximum proportion of heads. Obviously $V(a,n) \leq \tilde{V}(a,n)$, so that any upper bound on $\tilde{V}(a,n)$ is also an upper bound on $V(a,n)$.
\begin{Thm} \label{T:GrandUnifiedInequality}
\begin{equation} \label{GrandUnifiedInequality} \tilde{V}(a,n) \leq \max\left(\frac{a}{n}, \frac12\right) + \min\left(\frac14\sqrt{\frac{\pi}{n}},\, \frac1{2\cdot\left|2a-n\right|}\right).\end{equation}
\end{Thm}
The first term of the right hand-side of \eqref{GrandUnifiedInequality} is equal to the lower bound \eqref{lower}, and thus the second term bounds the error in that approximation. The proof of Theorem~\ref{T:GrandUnifiedInequality} consists of Lemma~\ref{L:ollestrick} together with some calculations in the rest of Section~\ref{S:proof}.
Let us already here describe how we have used \eqref{GrandUnifiedInequality} computationally. We have computed upper bounds on $V(a,n)$ in a box stretching out to $n \leq N = 10^7$, and with height given by $\left|2a - n\right| \leq h$ for a fixed $h$ (thus the box includes points where $a$ deviates from $n/2$ by at most $h/2$). At the positions on the ``boundary'' of the box (more precisely, where $(a+1,n+1)$ or $(a,n+1)$ is outside the box), $V(a,n)$ has been estimated by \eqref{GrandUnifiedInequality}, whereas for the positions in the interior we have used \eqref{rec}, controlling the arithmetic so that all roundings go up, in order to achieve rigorous upper bounds.
The second term of the right hand-side of \eqref{GrandUnifiedInequality} gives two different upper bounds on the error in \eqref{lower}, where the bound $(1/4)\cdot\sqrt{\pi/n}$ is better close to the line $a=n/2$, while $1/(2\left|2a-n\right|)$ is the sharper one away from that line.
It seemed natural to choose the height $h$ of the box in such a way that these two bounds approximately coincide at the farther corners of the box, in other words so that $$\frac14\sqrt{\frac{\pi}{N}} \approx \frac1{2\cdot h},$$ that is, $h \approx (2/\sqrt{\pi})\cdot \sqrt{N}$. In our computations leading to the results of Table~\ref{theTable} (with $N=10^7$), we have taken $h=3568$. The second column of Table~\ref{theTable} lists positions for which we have determined that stopping is optimal. This includes 5 heads to 3 tails, a position discussed in \cite{MZ} and for which computational evidence \cite{MZ, W} strongly suggested that stopping should be optimal. To the best of our knowledge our computation provides the first rigorous verification of this fact.
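The upper bound computation can be sketched analogously (again our illustration; the band restriction $\left|2a-n\right|\leq h$ and the upward rounding of the actual computation are omitted), with the horizon initialised by the bound of Theorem~\ref{T:GrandUnifiedInequality}:
\begin{verbatim}
# Upper bounds on V(a, n): same recursion, horizon initialised with the
# closed-form bound instead of max(a/n, 1/2).
from math import pi, sqrt

def theorem_bound(a, n):
    err = 0.25 * sqrt(pi / n)
    if 2 * a != n:
        err = min(err, 0.5 / abs(2 * a - n))
    return max(a / n, 0.5) + err

def upper_bound(a, n, N):
    V = [theorem_bound(k, N) for k in range(N + 1)]
    for m in range(N - 1, n - 1, -1):
        V = [max(k / m, 0.5 * (V[k] + V[k + 1])) for k in range(m + 1)]
    return V[a]

# If upper_bound(5, 8, N) comes out equal to 5/8, stopping with 5 heads and
# 3 tails is certified optimal; the paper needs N = 10**7 (together with the
# band and upward rounding) to reach this conclusion.
\end{verbatim}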
\section{Proof of Theorem~\ref{T:GrandUnifiedInequality}} \label{S:proof}
For $a$ and $n$ as before, and $p\in [0,1]$, let $P(a,n,p)$ denote the probability that, starting from position $(a,n)$, at some point now or in the future the total proportion of heads will strictly exceed
$p$. In other words $P(a,n,p)$ is the probability of success starting from $(a,n)$ if instead of trying to maximize expected payoff, we try to achieve a proportion of heads exceeding $p$, and continue as long as this has not been achieved. When $p$ is rational, $P(a,n,p)$ is algebraic and can in principle be calculated with the method of \cite{Stadje}, but we need an inequality that can be analyzed averaging over $p$.
\begin{Lemma} \label{L:ollestrick}
Suppose that in position $(a,n)$, the nonnegative integer $k$ is such that at least $k$ more coin flips will be required in order to obtain a proportion of heads exceeding $p$. Then \begin{equation} \label{ineq} P(a,n,p) \leq \frac1{(2p)^k}.\end{equation}
\end{Lemma}
\begin{proof}
We can assume that $p>\max(a/n, 1/2)$, since otherwise the statement is trivial. From position $(a,n)$ condition on the event that the total proportion of heads will at some later point exceed $p$. Then, by the law of large numbers, there must be a \emph{maximal} $m$ such that after a total of $m$ coin flips the proportion of heads exceeds $p$. Conditioning further on $m$, the number of heads in coin flips number $n+1,\dots,m$ is determined, and all permutations of the outcomes of these $m-n$ coin flips are equally likely. The proportion of heads among these coin flips is at least $p$, so the (conditional) probability that coin flip $n+1$ results in heads is at least $p$. If $k>1$, then if coin flip $n+1$ was heads, the proportion of heads in flips $n+2,\dots,m$ is still at least $p$, so the probability of heads-heads in flips $n+1$ and $n+2$ is at least $p^2$ etc. Therefore the (conditional) probability that flips $n+1,\dots,n+k$ all result in heads is at least $p^k$, and since this holds for every $m$, we don't have to condition on a specific $m$, but only on the event that the proportion of heads will exceed $p$ at some point.
Since the unconditional probability of $k$ consecutive heads is $1/2^k$, the statement now follows from a simple calculation: On one hand,
$$Pr(\text{$k$ consecutive heads } | \text{ proportion $p$ is eventually exceeded}) \geq p^k.$$
On the other hand,
\begin{multline} \notag Pr(\text{$k$ consecutive heads } | \text{ proportion $p$ is eventually exceeded}) \\ \leq
\frac{Pr(\text{$k$ consecutive heads})}{Pr(\text{proportion $p$ eventually exceeded})} = \frac{(1/2)^k}{P(a,n,p)}.\end{multline}
Rearranging, we obtain \eqref{ineq}.
\end{proof}
Our next task is to use Lemma~\ref{L:ollestrick} to estimate $\tilde{V}(a,n)$. We have \begin{equation} \label{integration} \tilde{V}(a,n) = \int_0^1 P(a,n,p)\,dp = \max\left(\frac an, \frac12\right) + \int_{\max\left(\frac an, \frac12\right)}^1 P(a,n,p)\,dp.\end{equation}
If $p>\max(a/n, 1/2)$, then the requirement that at least $k$ more coin flips are needed to obtain a proportion of heads exceeding $p$ is equivalent to $$\frac{a+k-1}{n+k-1} \leq p,$$ which we rearrange as $$k\leq 1 + \frac{np-a}{1-p}.$$
Since there is always an integer $k$ in the interval $$\frac{np-a}{1-p} \leq k \leq 1 + \frac{np-a}{1-p},$$ we conclude using Lemma~\ref{L:ollestrick} that for $p$ in the range $\max(a/n,1/2) < p < 1$ of integration in \eqref{integration}, $$P(a,n,p) \leq \frac{1}{(2p)^{\frac{np-a}{1-p}}}.$$
It follows that $$\tilde{V}(a,n) \leq \max\left(\frac{a}{n},\frac12\right) + \int_{\max\left(\frac{a}{n},\frac12\right)}^1 \frac{dp}{(2p)^{\frac{np-a}{1-p}}}.$$
By the substitution $2p=1+t$ and the elementary inequality $$\frac{\log(1+t)}{1-t} \geq t,$$ we obtain
\begin{multline}\tilde{V}(a,n) \leq \max\left(\frac{a}{n},\frac12\right) + \frac12\int_{\max\left(\frac{2a-n}{n},0\right)}^1 \frac{dt}{(1+t)^{\frac{(1+t)n-2a}{1-t}}}
\\= \max\left(\frac{a}{n},\frac12\right) + \frac12\int_{\max\left(\frac{2a-n}{n},0\right)}^1 \exp\left(-\frac{(1+t)n-2a}{1-t}\cdot \log(1+t)\right)\,dt
\\ \leq \max\left(\frac{a}{n},\frac12\right) + \frac12\int_{\max\left(\frac{2a-n}{n},0\right)}^1 \exp\left(-(1+t)tn+2at\right)\,dt.
\end{multline}
By putting $u=t\sqrt{n}$ and replacing the upper bound of integration by infinity, we arrive at
\begin{equation} \label{maxcases} \tilde{V}(a,n) \leq \max\left(\frac{a}{n},\frac12\right) + \frac1{2\sqrt{n}}\int_{\max\left(\frac{2a-n}{\sqrt{n}},0\right)}^\infty \exp\left(-u^2 + \frac{2a-n}{\sqrt{n}}\cdot u\right)\,du.\end{equation}
Now notice that by the substitution $w= u - (2a-n)/\sqrt{n}$,
\begin{equation}
\int_{\frac{2a-n}{\sqrt{n}}}^\infty \exp\left(-u^2+\frac{2a-n}{\sqrt{n}}\cdot u\right)\, du = \int_0^\infty \exp\left(-w^2-\frac{2a-n}{\sqrt{n}}\cdot w\right)\, dw.\end{equation}
Therefore regardless of the sign of $2a-n$, \eqref{maxcases} can be written as
\begin{equation}\label{uniform} \tilde{V}(a,n) \leq \max\left(\frac{a}{n},\frac12\right) + \frac1{2\sqrt{n}}\int_0^\infty \exp\left(-u^2 - \frac{\left|2a-n\right|}{\sqrt{n}}\cdot u\right)\,du.\end{equation}
The bound \eqref{uniform} can be used directly in computations by first tabulating values of the integral, but we have chosen to simplify the error term further (instead spending computer resources on pushing the horizon).
We can discard either of the two terms inside the exponential in \eqref{uniform}. On one hand, the error term is at most
$$\frac1{2\sqrt{n}}\int_0^\infty \exp\left(-u^2\right)\,du = \frac14\sqrt{\frac{\pi}{n}}.$$
The cross section for a binary-single interaction to result in a given endstate $x$ can be estimated as
\begin{equation}
\sigma_{x} = \sigma_{\rm RI} \times P(x | {\rm RI}),
\label{eq:sigma_x}
\end{equation}
where $\sigma_{\rm RI}$ is the cross section for a binary-single interaction to evolve as a RI, and $P(x | {\rm RI})$ is the
probability for $x$ to be an endstate given the interaction is a RI.
To estimate the inspiral cross section we must therefore
calculate the probability for an inspiral to form during
a resonant interaction, $P({\rm insp} | {\rm RI})$. This term is proportional to the probability for an IMS binary
to form with parameters $a',e$ inside the inspiral region (see Section \ref{sec:Formation of Inspirals in orbit space} above) during a resonance.
In \cite{2014ApJ...784...71S} it was shown that the distribution in $a'$ and $e$ sampled in a resonance is approximately
flat at high eccentricity; as a result, the relative probability for a binary to form in some high-eccentricity region scales with
the area of that region.
The majority of inspirals have a very
high eccentricity \citep{2014ApJ...784...71S}; therefore, the probability of forming an inspiral is proportional to the area of the inspiral region.
This area is found by integrating $\epsilon_{\rm insp}$ from Equation \eqref{eq:e_insp} over $a'$ from $a' \approx 1$
to $a' \approx 2$ (when $a' \gtrsim 2$ the triple system can no longer be considered as a binary with a bound single).
The $a'$-dependent term in the brackets from Equation \eqref{eq:e_insp} integrates to a constant,
so the area, and thereby the inspiral probability, will simply scale as,
\begin{equation}
P({\rm insp} | {\rm RI}) \propto \mathcal{E}^{1/\beta} \left( {a_{0}}/{\mathcal{R}} \right)^{1/\beta - 1}.
\label{eq:P_insp}
\end{equation}
As a result, the inspiral probability depends only on the strength of the loss term $\mathcal{E}$,
its slope $\beta$, and the compactness of the initial binary $(a_{0}/\mathcal{R})$.
In the HB limit $\sigma_{\rm RI}$ is proportional to the CI cross section $\sigma_{\rm CI}$
from Equation \eqref{eq:sigma_CI}.
Writing out Equation \eqref{eq:sigma_x} for inspirals we now finally find the inspiral cross section to scale as,
\begin{equation}
\sigma_{\rm insp} \propto \frac{m \mathcal{R}}{v^{2}_{\infty}} \left[ \mathcal{E}^{1/\beta} \left(\frac{a_{0}}{\mathcal{R}}\right)^{1/\beta} \right]
\label{eq:sigma_insp}
\end{equation}
From this relation we can conclude that
the cross section for any kind of inspiral \emph{always increases} with $a_{0}$ (since $\beta$ is always positive).
The rate of inspirals resulting from any pericenter-dependent loss term is therefore dominated by widely separated
binaries and not tight binaries, as one might naively guess.
This was illustrated for GW inspirals in \cite{2014ApJ...784...71S}, however, here we have generalized the framework to
show that this actually is a generic feature of any kind of inspiral, including tidal inspirals.
\subsection{Collision Cross Section}\label{sec:collisions}
The collision cross section, $\sigma_{\rm coll}$, can be estimated by a similar approach
as the one described for inspirals. As described in Section \ref{sec:Definition and Identification of Endstates}, we define a
collision to be when two objects pass
each other at a distance smaller than their total unperturbed radii without
inspiraling first. In the case of an IMS binary composed of a point-mass perturber and
a tidal object with radius $R$, a collision will therefore occur if the pericenter distance, $r_{\rm p}$, is smaller than $R$.
To estimate the cross section for such a collision we first need to
calculate the minimum eccentricity, $e_{\rm coll}$, that a temporarily formed binary
with semimajor axis $a'$ must have in order to collide.
For this we use the standard relation $r_{\rm p} = a_{0}a'(1-e)$ and substitute $r_{\rm p}$ with $R$, from which
we now find,
\begin{equation}
\epsilon_{\rm coll} = \frac{R}{a_{0}} \frac{1}{a'},
\label{eq:epsilon_coll}
\end{equation}
where $\epsilon_{\rm coll} \equiv 1-e_{\rm coll}$.
This relation defines the boundary of the collision region in
orbital phase space illustrated in Figure \ref{fig:ae_ill}.
As for the inspirals, integrating $\epsilon_{\rm coll}$ over $a'$ leads us to the relevant scaling for the collision
probability given a RI,
\begin{equation}
P({\rm coll} | {\rm RI}) \propto {R}/{a_{0}}.
\end{equation}
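For completeness, using the same integration limits as for the inspirals ($a' \approx 1$ to $a' \approx 2$), the area of the collision region evaluates to
\begin{equation}
\int_{1}^{2} \epsilon_{\rm coll}\, da' = \frac{R}{a_{0}} \int_{1}^{2} \frac{da'}{a'} = \frac{R}{a_{0}}\,\ln 2 \;\propto\; \frac{R}{a_{0}},
\end{equation}
which is the scaling quoted above.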
This can now be converted into a cross section using Equation \eqref{eq:sigma_x},
\begin{equation}
\sigma_{\rm coll} \propto \frac{mR}{v^{2}_{\infty}}.
\label{eq:sigma_coll}
\end{equation}
The collision cross section is therefore independent of $a_{0}$ and linear in $R$.
By comparing our analytical expressions for the tidal inspiral cross section (in which case $\mathcal{R}$
should be replaced by $R$) and the collision cross section we
observe a few interesting similarities. First, we see that the
tidal inspiral cross section approaches the collision cross section as $\beta \rightarrow \infty$.
In terms of cross sections, our simple $\beta$-model from Equation \eqref{eq:general_E_rp}
therefore seems to be the appropriate leading order extension for describing effects related to finite sizes,
including non-dissipative solid-sphere collisions. Second, we notice that the inspiral cross section is similar to the
collision cross section if the star is treated as a solid sphere with radius $\propto R(a_{0}/R)^{1/\beta}$
instead of just $R$. The relevant radius of the star is therefore not just a constant times its radius -- it further includes
a factor that scales with the energy of the few-body system it evolves in. This was also noticed by our simple scalings
in Section \ref{sec:Tidal Captures from Simple Scaling Relations}.
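To make the comparison explicit, one can rewrite Equation \eqref{eq:sigma_insp} with $\mathcal{R} \rightarrow R$ and introduce the shorthand $R_{\rm eff}$ (our notation here, not used elsewhere in the text),
\begin{equation}
\sigma_{\rm insp} \propto \frac{m}{v_{\infty}^{2}}\, R\, \mathcal{E}^{1/\beta} \left(\frac{a_{0}}{R}\right)^{1/\beta} \equiv \frac{m R_{\rm eff}}{v_{\infty}^{2}}, \qquad R_{\rm eff} \propto R \left(\frac{a_{0}}{R}\right)^{1/\beta},
\end{equation}
which reduces to the collision cross section $\sigma_{\rm coll} \propto m R / v_{\infty}^{2}$ in the limit $\beta \rightarrow \infty$.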
\subsection{Inspirals in the Collision Dominated Regime}\label{sec:Inspirals in the collision dominated limit}
The inspiral and collision regions in orbit space overlap as illustrated in Figure \ref{fig:ae_ill}, which means that
IMS binaries with high enough eccentricity will collide instead of spiraling in. We did not take this into account
when calculating the inspiral cross sections in Section \ref{sec:Inspiral cross sections}.
With the understanding of where collisions form from Section \ref{sec:collisions}, we can now correct for
this overlap. We only write out the solution for tidal inspirals, since the correction is never really important
for GW inspirals -- we therefore replace $\mathcal{R}$ with $R$ below.
The asymptotic inspiral solution given by Equation \eqref{eq:sigma_insp} assumed
that inspirals can form in the full inspiral area. We can therefore write the collision corrected
solution, here denoted by $\sigma_{\rm insp-c}$, as a product of the asymptotic solution where collisions play no role, $\sigma_{\rm insp}$,
and a weight term specifying the fraction of the full inspiral area that does not overlap with the collision area.
In Figure \ref{fig:ae_ill} this is the wavy region between the dashed and the solid line.
The collision corrected inspiral cross section can therefore be written as,
\begin{equation}
\sigma_{\rm insp-c} \approx \sigma_{\rm insp}\left[ \frac{\int_1^{a'_{\rm ic}} (\epsilon_{\rm insp}-\epsilon_{\rm coll})da'}{\int_1^{a'_{\rm u}} \epsilon_{\rm insp} da'}\right],
\label{eq:sigma_cc_insp_1}
\end{equation}
where $a'_{\rm ic}$ is where $\epsilon_{\rm insp}$ crosses $\epsilon_{\rm coll}$,
and $a'_{\rm u}$ is the maximum value for $a'$ ($a'_{\rm u} \approx 2$).
Assuming we know the normalizations of $\sigma_{\rm coll}$ and $\sigma_{\rm insp}$, one can now solve for the
full collision corrected inspiral cross section, $\sigma_{\rm insp-c}$. This will be done using numerical techniques
in the next section. However, even without normalizations, we can estimate how $\sigma_{\rm insp-c}$
scales with $a_{0}/R$ in the limit where collisions dominate. In this regime we know that $a'_{\rm ic}$ must be close to $1$
in which case $a'_{\rm ic}$ to leading order can be written as,
\begin{equation}
a'_{\rm ic} \approx 1+\left(\frac{4\mathcal{E}}{{\sqrt{3}}}\frac{a_{0}}{R}\right)^{2/3}.
\label{eq:ap_upper}
\end{equation}
In this limit the integrals in Equation \eqref{eq:sigma_cc_insp_1} can also be solved
by Taylor expanding around $\delta=0$, where we here define for convenience $\delta \equiv a'_{\rm ic}-1$.
As a result, the $a'^{1/\beta - 1}$ term in $\epsilon_{\rm insp}$ can be dropped, which leaves us with the term $(a'-1)^{-3/(2\beta)}$, and $\epsilon_{\rm insp}$ can now
be trivially integrated.
The integral over $\epsilon_{\rm coll}$ is also easily found and scales $\propto \ln(1+\delta)$, which is $\approx \delta$ when $\delta \ll 1$.
By writing out the full expression in Equation \eqref{eq:sigma_cc_insp_1} following these assumptions,
we find the scaling $\sigma_{\rm insp-c} \propto a_{0}^{2/3}$, which holds in the low $a_{0}/R$ limit.
To summarize, the inspiral cross section scales differently with $a_{0}$ depending on whether collisions dominate $(a_{0}/R \rightarrow 1)$
or not $(a_{0}/R \rightarrow \infty)$, with specific scaling solutions given by,
\begin{equation}
\sigma_{\rm insp-c} \propto a_{0}^{1/\beta},\ (a_{0}/R) \rightarrow \infty
\end{equation}
\begin{equation}
\sigma_{\rm insp-c} \propto a_{0}^{2/3},\ (a_{0}/R) \rightarrow 1.
\end{equation}
Cross section results from a full integration of Equation \eqref{eq:sigma_cc_insp_1} including
correct normalizations are shown in Figure \ref{fig:crosssec_summfig}, and will be discussed in the following
section.
\subsection{Numerical Calibration of Analytical Cross Sections}\label{sec:Numerical Calibration of Analytical Cross Sections}
We here show the analytical cross sections with correct
normalizations estimated using our numerical simulations from Section \ref{sec: Numerical Scattering Results} with tides and GR.
As for the analytical results, the scalings presented in this section
are only valid in the equal mass case.
The cross sections are given in the following rescaled form for convenience,
\begin{equation}
\bar{\sigma} \equiv \sigma \left[ \frac{(m/M_{\odot})(\mathcal{R}/R_{\odot})}{(v_{\infty}/\text{km}\ \text{s}^{-1})^{2}}\right]^{-1}.
\end{equation}
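In practice (as used in the examples below), a physical cross section is recovered by multiplying $\bar{\sigma}$ by the bracketed factor,
\begin{equation}
\sigma = \bar{\sigma} \left[ \frac{(m/M_{\odot})(\mathcal{R}/R_{\odot})}{(v_{\infty}/\text{km}\ \text{s}^{-1})^{2}}\right],
\end{equation}
where $\mathcal{R}$ is the physical radius for tidal inspirals and collisions and the Schwarzschild radius for GW inspirals.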
Since the collision corrected tidal inspiral cross section from Section \ref{sec:Inspirals in the collision dominated limit}
has no closed form across the full interval in $a_{0}/R$, we instead present tidal inspirals plus collisions which, in our model,
scales as $(a_{0}/R)^{1/9}$ for $\log(a_{0}/R)>1$ (see Figure \ref{fig:crosssec_summfig}).
\subsubsection{Cross Sections}
The analytical cross section for a tidal extended object and a point-mass perturber
to undergo a tidal inspiral \emph{or} a collision (the total coalescence cross section) is given by,
\begin{equation}
\bar{\sigma}_{\rm insp,tid} + \bar{\sigma}_{\rm coll} \approx 727 \left( \frac{a_{0}}{R} \right)^{1/9} \ \text{AU}^{2},
\label{eq:sigma_insp_p}
\end{equation}
for $\log(a_{0}/R)>1$. The normalization is here valid for polytropes with index $n=3$ (the exact value of $\gamma$
does not play a significant role here).
The cross section for two compact objects to undergo
a GW inspiral is,
\begin{equation}
\bar{\sigma}_{\rm insp,GW} \approx 2095 \left( \frac{a_{0}}{R_{s}} \right)^{2/7} \ \text{AU}^{2}.
\label{eq:sigma_insp_GW}
\end{equation}
A compact object is here either a NS or a BH -- a WD is not compact enough
for GWs to dominate over tides during close encounters.
For a collision between an extended tidal object and a point-mass perturber we find,
\begin{equation}
\bar{\sigma}_{\rm coll} \approx 924 \ \text{AU}^{2}.
\label{eq:sigma_coll_p}
\end{equation}
A few examples are given below.
\subsubsection{Examples}
To illustrate how to use the scaling relations from above, let us now consider three examples related to the
binary-single interaction between a NS($1.2M_{\odot}$)
and a [WD($1.2M_{\odot}, 0.006R_{\odot}$)-NS($1.2M_{\odot}$)] binary with
$a_{0}=5\text{AU} \approx 1075R_{\odot}$ in a cluster with $v_{\infty}=10\ \text{km}\ \text{s}^{-1}$.
\begin{itemize}
\item \emph{Tidal inspirals + collisions:} The total WD-NS coalescence cross section (tidal inspirals + collisions)
can be estimated using Equation \eqref{eq:sigma_insp_p} from which we find
${\sigma}_{\rm insp,tid} + {\sigma}_{\rm coll} \approx 2\cdot[1.2\cdot0.006/10^2]\cdot727\cdot(1075.0/0.006)^{1/9}\ \text{AU}^2 \approx 0.4\ \text{AU}^2$.
The factor $2$ in front accounts for the two WD-NS combinations due to the two NSs in the system.
\item \emph{GW inspirals:} The NS-NS GW inspiral cross section is found to be
${\sigma}_{\rm insp,GW} \approx 1\cdot[1.2\cdot(5.1{\cdot}10^{-6})/10^2]{\cdot}2095{\cdot}(1075.0/5.1{\cdot}10^{-6})^{2/7}\ \text{AU}^2 \approx 0.03\ \text{AU}^2$,
by using Equation \eqref{eq:sigma_insp_GW}.
\item \emph{Collisions:} The WD-NS collision cross section is found from Equation \eqref{eq:sigma_coll_p} to
be ${\sigma}_{\rm coll} \approx 2\cdot[1.2\cdot0.006/10^2]\cdot924\ \text{AU}^2 \approx 0.1\ \text{AU}^2$.
\end{itemize}
In this particular example we see that the inclusion of tides results in a total WD-NS coalescence cross section
that is about four times higher than the one estimated from the simple sticky star collision criterion.
One can compare these estimates with the upper left plot in Figure \ref{fig:cs_all}.
In an upcoming paper we extend this analytical framework to systems
where the WD can have any mass.
\subsubsection{Summary: Analytical Estimation of Cross Sections}
The calibrated cross sections from the section above including collision corrected tidal inspirals
are plotted and discussed in Figure \ref{fig:crosssec_summfig}.
The tidal and GW inspiral cross sections are also shown in Figure \ref{fig:cs_all} with dashed and dashed-dotted lines, respectively. We see that our derived scalings do indeed work
all the way from a WD to a MS star across almost four orders of magnitude in $a_{0}$.
Our analytical predictions give valuable insight into how the collisions and inspirals
possibly scale around the HB limit.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{crossec_summaryfig}
\caption{Summary of outcome cross sections $\sigma$ related to finite size effects (collisions), tidal effects (tidal inspirals) and GR effects (GW inspirals)
arising from equal mass binary-single interactions. The black lines show the cross sections for an extended tidal object and a point-mass perturber to either
undergo a tidal inspiral (thick black solid line) or a sticky star collision (thick horizontal dashed line). The thin black line illustrates the analytical asymptotic solution
to the tidal inspiral cross section where the thin dashed line shows the collision + tidal inspiral cross section. The thick black line deviates from
the asymptotic solution due to the overlap between collisions and tidal inspirals in orbital phase space at especially small $a_{0}/R$ (see discussion
in Section \ref{sec:Inspirals in the collision dominated limit}). The tidal inspiral cross section shown here is valid for $n=3$ polytropes.
The red dash-dotted line shows the
cross section for two compact objects (NS or BH) to undergo a GW inspiral.
The left y-axis shows $\sigma$ in units of $[(m/M_{\odot})(\mathcal{R}/R_{\odot})/(v_{\infty}/\text{km}\ \text{s}^{-1})^{2}]$, where $v_{\infty}$ is the velocity
dispersion, $m$ is the mass of one of the (equal mass) interacting objects, and
$\mathcal{R}$ is the corresponding radius, which for tidal inspirals and collisions is the physical radius and for GW inspirals is
the Schwarzschild radius. A few examples are given in Section \ref{sec:Numerical Calibration of Analytical Cross Sections}.
The right y-axis shows $\sigma$ in units of $\sigma_{\rm coll}$.
The functional form of the cross sections are based on our analytical
framework from Section \ref{sec:Analytical Models}, where the normalizations are estimated using our simulations with tides and
GR from Section \ref{sec: Numerical Scattering Results}.
The three vertical dotted lines show from left to right $a_{\rm HB}/R$ for a system with $v_{\infty} = 10\ \text{km}\ \text{s}^{-1}$ and
$m = 1M_{\odot}$ (corresponding to $a_{\rm HB}\approx 13.3 \text{AU}$), when $R=1R_{\odot}$ (solar type star),
$R=0.0086R_{\odot}$ (WD) and $R=4.24{\cdot}10^{-6}R_{\odot}$ (BH Schwarzschild radius), respectively.
The corresponding cross sections are only valid to the left of these lines. At this velocity dispersion, tidal inspirals with a solar type star cannot
dominate the coalescence rate within the HB limit, but if the tidal object is a WD then tides can actually lead to an enhanced
coalescence rate by about a factor of four. The cross sections from this plot are overplotted on our simulation data in Figure \ref{fig:cs_all}.
}
\label{fig:crosssec_summfig}
\end{figure*}
\section{Discussion}\label{sec:Discussion}
Our main findings, their consequences and relative importance in different dynamical systems
are discussed below.
\subsection{Stellar Collision Rate Not Enhanced By Tides}
Our initial motivation for this study was to explore if the modified dynamics
arising from tidal modes coupling to the orbital motion can enhance mergers or collisions
in chaotic binary-single interactions. Using full numerical simulations and analytical arguments, we have learned
that the most significant change when including tides is the
formation of tidal inspirals -- similar to tidal captures in the field.
The main reason why the collision or merger rate is not drastically
altered by tides is that this would require the resonant system to undergo (at least) two independent close passages;
one that first drains some of the orbital energy through tides without leading to an inspiral, and then one that results in the actual merger.
However, mergers -- and thereby close passages -- are relatively rare, so collisions following a previous close passage will also be rare. If the collision
probability is $P_{\rm coll}$, then the tidally induced collision probability will be of order $\approx P_{\rm coll}^2$. For example, in the WD case from Section \ref{sec:WDNSNS},
$P_{\rm coll} \approx 10^{-3}$ at $a_{0} = 1$ AU. In fact, tides instead tend to decrease the number of collisions, because
a system that could have evolved into a collision can now end as an inspiral instead. This is only seen at very low $a_{0}$.
We initially speculated that if tides could turn a fraction of the DIs into RIs (the encounter being tidally captured into the triple system),
the collision probability would be enhanced. However, the effective cross section for this to happen is simply too small. If it did happen,
the maximum enhancement would still only be about a factor of two, since the ratio between the number of DIs and RIs initially is
about unity in the equal mass case \citep{2014ApJ...784...71S}.
A barrier to forming actual mergers is also the angular momentum, $L$. Even our inspirals, which represent the
highest energy and momentum loss configurations in our simulations, do not collide due to the
requirement that $L$ be (almost) conserved (see Section \ref{sec:Tidal Capture Example}).
For a more accurate description one must include mass loss and dissipation, which requires the use of hydrodynamical simulations \citep{2010MNRAS.402..105G}.
Further discussion on the collision rates in general N-body systems can be found in \citet{2012MNRAS.425.2369L,2015MNRAS.450.1724L}.
\subsection{Tidal Inspirals and Collisions in Cluster Systems}\label{sec:Inspiral Mergers Relative to Collisions}
Studies indicate that tidal captures
are more likely to merge than to form a stable binary; see e.g. discussion in \citet{1993PASP..105..973R}.
A key question is then what fraction our 3-body tidal inspiral mergers contribute to the
stellar coalescence rate compared to the classical sticky star collisions.
We can use our analytical framework to gain some insight into this question
by considering the ratio between the tidal inspiral and collision cross sections given
by Equations \eqref{eq:sigma_insp} and \eqref{eq:sigma_coll}, respectively,
\begin{equation}
\frac{\sigma_{\rm insp}}{\sigma_{\rm coll}} \propto \left( \frac{a_{0}}{R} \right)^{1/\beta}.
\label{eq:sigma_insp_sigma_coll}
\end{equation}
This relation shows that the rate of inspirals relative to collisions
increases as the size of the interacting objects decreases
and as $a_0$ increases.
The maximum value of $\sigma_{\rm insp}/{\sigma_{\rm coll}}$
for an object with radius $R$, is set by the hard binary limit
$a_{\rm HB} \propto m/v_{\infty}^{2}$ (see Equation \ref{eq:v_c}), from which we derive
\begin{equation}
\text{max}\left( \frac{\sigma_{\rm insp}}{\sigma_{\rm coll}} \right) \propto \left( \frac{1}{v_{\infty}^2}\frac{m}{R}\right)^{1/\beta} \propto \left( \frac{v_{\rm esc}}{v_{\infty}}\right)^{2/\beta},
\label{eq:max_sigma_insp_sigma_coll}
\end{equation}
where $v_{\rm esc}$ is the escape velocity of the tidal object.
From the equation above we can conclude that the more compact the
interacting objects are (i.e., the larger $m/R$ is),
the more inspirals can form relative to collisions.
This is also seen in our simulation results described in Section \ref{sec: Numerical Scattering Results},
and in Figure \ref{fig:crosssec_summfig}.
The compactness $m/R$ required for an object to produce tidal inspirals with a point-mass perturber
at the same rate as collisions, can be read off Figure \ref{fig:crosssec_summfig}. This figure shows
that the tidal inspiral rate is similar to the collision rate when $\log(a_{0}/R) \approx 3.5$.
If we use the hard binary value $a_{\rm HB}$ from
Equation \eqref{eq:v_c}, we find
\begin{equation}
\frac{m/M_{\odot}}{R/R_{\odot}} \approx 10^{-2} \left(\frac{v_{\infty}}{\text{km}\ \text{s}^{-1}}\right)^2 \;{\rm when}\; \sigma_{\rm coll} \approx \sigma_{\rm insp}(a_{\rm HB}).
\label{eq:mR_coll_eq_insp}
\end{equation}
This gives the critical value of $m/R$ in the HB limit. That is, if the tidal object has an $m/R$ larger than this value
then tidal inspirals can dominate over collisions.
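As a consistency check of this normalization (a back-of-the-envelope estimate using $a_{\rm HB}\approx 13.3\ \text{AU}$ for $m=1M_{\odot}$ and $v_{\infty}=10\ \text{km}\ \text{s}^{-1}$, the value quoted in the caption of Figure \ref{fig:crosssec_summfig}, and $1\ \text{AU} \approx 215R_{\odot}$), requiring $a_{\rm HB}/R \approx 10^{3.5}$ gives
\begin{equation}
\frac{R}{R_{\odot}} \approx \frac{13.3 \times 215}{10^{3.5}} \left(\frac{m}{M_{\odot}}\right)\left(\frac{v_{\infty}}{10\ \text{km}\ \text{s}^{-1}}\right)^{-2} \approx 90\, \frac{(m/M_{\odot})}{(v_{\infty}/\text{km}\ \text{s}^{-1})^{2}},
\end{equation}
which indeed reproduces $(m/M_{\odot})/(R/R_{\odot}) \approx 10^{-2}(v_{\infty}/\text{km}\ \text{s}^{-1})^{2}$.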
Figure \ref{fig:mR Summery plot} shows the relation from Equation \eqref{eq:mR_coll_eq_insp}
for different values of $v_{\infty}$, together with some simplified mass-radius relations for MSs ($R \propto M^{0.8}$ -- dashed line) and
WDs (see \cite{Zalamea:2010eu} -- solid line).
We see that if the tidal object is a WD, tidal inspirals can be as important as collisions
in GC systems ($v_{\infty}=10\ \text{km}\ \text{s}^{-1}$) and might even play a role in
galactic nuclei ($v_{\infty}=100\ \text{km}\ \text{s}^{-1}$).
If the tidal object is a MS star, the rate of inspirals is much
lower compared to the rate of collisions, and inspirals will only contribute to the coalescence
rate in clusters with a $\approx 1\ \text{km}\ \text{s}^{-1} $ dispersion.
Interestingly enough, low dispersion clusters do have a high fraction of
wide binaries and are also surprisingly dynamically active \cite[see discussion in e.g.][]{2013MNRAS.432.2474L}.
Again, to make this picture applicable for describing more realistic astrophysical scenarios we need to
carefully work out the unequal mass case.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{tides_eq_coll.pdf}
\caption{Relation between compactness $m/R$, velocity dispersion $v_{\infty}$, and the rate of tidal inspirals relative to collisions.
The thin solid grey lines show the combinations of $m$, $R$ and $v_{\infty}$ from Equation \eqref{eq:mR_coll_eq_insp}
that will result in an equal number of tidal inspirals and collisions in the HB limit ($a_{0} = a_{\rm HB}$). We assume the equal mass
case and the tidal inspirals are here between an extended tidal object and a point-mass perturber.
Also shown are simplified mass-radius relationships for white dwarfs (solid line) and main sequence stars (dashed line).
If a given combination of $m$ and $R$ is to the right of a grey line, tidal inspirals will dominate over collisions in the HB limit for the
corresponding $v_{\infty}$. We see that WDs are the only objects compact enough to produce a significant number
of tidal inspirals relative to collisions in a typical GC ($10\ \text{km}\ \text{s}^{-1}$), whereas MS star tidal inspirals probably only contribute to the coalescence rate in
open clusters ($1\ \text{km}\ \text{s}^{-1} $).
}
\label{fig:mR Summery plot}
\end{figure}
\subsection{GW and Electromagnetic Signatures from Tidal Inspirals}
Tidal and GW inspirals are characterized by high eccentricity and low angular momentum (Figure \ref{fig:L2m}). The high eccentricity in particular allows for multiple close passages before merger, which will give rise to unique electromagnetic (EM) and GW observables, especially when the tidal object is
a WD \citep{2009PhRvD..80b4006P, 2011PhRvD..83f4002P, 2011PhRvD..84j4032P}. The GW signal will have a very rich spectrum compared to normal circular inspirals \citep{2007ApJ...665L..59W, 2010MNRAS.406.2749L},
which will reveal much more information about especially the equation of state of the WD.
As seen in Figure \ref{fig:L2m}, a space-borne GW
instrument like LISA will be sensitive to these WD-NS inspirals.
However, while there is plenty of interesting physics in high eccentricity WD-NS tidal inspirals and collisions, the rates expected from the binary-single channel are modest:
if we consider the $0.6M_{\odot}$ WD case from \cite{2014ApJ...784...71S} and assume that the inclusion of tides
enhances the resultant merger rate by a factor of $5$ (a $0.6M_{\odot}$ WD both has a lower polytropic
index $n$ and an $\alpha \sim 0$, which is expected to lead to more inspirals compared to a heavy WD),
then the expected rate of WD-NS tidal
inspirals will be around $\approx 50\ \text{yr}^{-1} \text{Gpc}^{-3}$.
The problem here is that the associated GW strain is far too weak for these sources to be seen
outside our own galaxy by LISA. More promising signatures could be thermonuclear optical transients
\citep{1986SvAL...12..152K, Lee:2007em, 2009MNRAS.399L.156R, Rosswog:2009wq, 2010ApJ...714L..52S,
2010Natur.465..322P, 2011ApJ...738...21W, 2012ApJ...746...62R, Metzger:2012ge, 2013ApJ...771...14H}, and high-energy transients
\citep{1998ApJ...502L...9F, 1999ApJ...520..650F, Lee:2007em}
that are expected to ensue when both light and heavy WDs are shocked in collisions or mergers with COs.
The exact rates of such encounters require a detailed understanding of unequal mass scatterings involving WDs and COs with tides and GR, which we plan
to consider in future work.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{rp_hist_WDNSNS.pdf}
\caption{Outcome cross section $\sigma$, as a function of the pericenter distance $r_{\rm p}$ of the endstate binary,
computed from a set of $5\times10^{4}$ binary-single interactions between a [NS($1.2M_{\odot}$), WD($1.2M_{\odot}$)] binary and a single
incoming NS($1.2M_{\odot}$).
The \emph{thin solid lines} show the distribution of WD-NS binaries from binary-unbound-single endstates (such as an exchange or a fly-by).
The \emph{thick solid lines} show the distribution of inspirals, where the left and right peaks
are the NS-NS GW inspirals and WD-NS tidal inspirals, respectively.
These inspiral populations only appear when tides and GR are included in the EOM of the N-body system.
Inspirals are characterized by very low angular momentum
corresponding to a small pericenter when $e\sim1$, which makes them interesting sources for both EM and GW signals.
From the upper axis showing the corresponding GW frequency $f_{\rm GW}$, we see that GW inspirals fall within the LIGO sensitivity band, whereas
the tidal inspirals are closer to the LISA band. These results greatly motivate further studies of compact objects
undergoing a high eccentricity evolution. The cross section as a function of $a_{0}$ is shown for the same set in Figure \ref{fig:cs_all}.
}
\label{fig:L2m}
\end{figure}
\section{Conclusion}\label{sec:Conclusion}
We present the first systematic study of how dynamical tides affect the
interaction and relative outcomes in binary-single interactions. By performing a large set
of binary-single scatterings using an $N$-body code that includes tides and GR, we find that the inclusion of tides
leads to a population of tidal captures occurring during the chaotic
evolution of the triple interaction. We denote these captures \emph{tidal inspirals}, partly due to their similarity with
the GW inspirals studied in \citet{2014ApJ...784...71S}.
We confirm with analytical models
that the rate of tidal inspirals relative to the classical sticky star collision rate
increases with $(a_{0}/R)$; as a result, tides show the largest effect for widely separated binaries.
Since the upper limit on $a_{0}$
is set by the HB limit, which scales linearly with mass $m$, we conclude that the
compactness $m/R$ of the tidal object is the key factor
for determining if tides play a significant role or not in a given cluster environment: a larger compactness leads to more
tidal inspirals relative to collisions. As a result of these scalings, we find that the only tidal object which is compact enough to have
tidal inspirals dominating over collisions in a typical GC environment is a WD.
We further conclude that tides, from a dynamical perspective, do not seem to affect the dense stellar system as a whole,
as otherwise speculated in several previous studies \citep[e.g.][]{Fregeau:2004fj} -- although stellar finite sizes do
matter through collisions and dynamical kicks \citep{1986ApJ...306..552M}. However, the inclusion
of tides and GWs leads to a rare, but highly interesting population of eccentric binaries. The high eccentricity likely results in unique EM and GW
signals. While highly eccentric binaries can be created in single-single captures, it was illustrated
in \cite{2014ApJ...784...71S} that the binary-single channel is likely the dominant
formation path. These observations motivate further dynamical studies on few-body interactions involving
especially WDs and COs, as well as hydrodynamical studies on the outcome of highly eccentric captures.
While our estimated inspiral rate involving a heavy WD (1.2$M_{\odot}$) is still modest, we do expect the rate to be
significantly higher for lower mass WDs simply because they are more vulnerable to tidal deformations.
We are currently working on the analytical prescriptions for unequal mass encounters.
\acknowledgments{
It is a pleasure to thank D. Spergel, R. Cen, V. Paschalidis, C. Holcomb, T. Ilan, and F. Pretorius
for helpful discussions. Support for this work was provided by the David and Lucile Packard Foundation, UCMEXUS (CN-12-578) and NASA through an Einstein
Postdoctoral Fellowship grant number PF4-150127, awarded
by the Chandra X-ray Center, which is operated by the
Smithsonian Astrophysical Observatory for NASA under
contract NAS8-03060.
}
\bibliographystyle{apj}
|
\section{Introduction}
Analysis of quantum chaotic systems is often based on the statistical
properties of the spectrum of the Hamiltonian $H$
(in the case of autonomous
systems) or the Floquet operator $F$ (in the case of
periodically perturbed
systems). In general, quantized analogues of classically chaotic systems
display spectral fluctuations conforming to the predictions of random
matrices. Depending on the geometrical properties of the system one uses
the orthogonal, unitary, or symplectic ensemble \cite{Ha91,CC96}.
Another line of research deals with eigenstates of the analyzed quantum
system. One is interested in their localization properties, which can be
characterized by the eigenvector distribution
\cite{KMH88,Bo91,Iz90,HZ90},
the entropic localization length \cite{CGIS90}, or the inverse
participation
ratio \cite{He87}. All these quantities, however, are based on the
expansion
of an eigenstate in a given basis $\{\vec{b}_i\}$, which may be chosen
arbitrarily. If one chooses (with bad will) $\{\vec{b}_i\}$ as the
eigenbasis
of $F$, all these quantities carry no information whatsoever. One may ask,
therefore, to what extent the quantitative analysis based on the
eigenvector statistics is reliable.
Let $G$ denote a unitary operator, such that $\{ \vec{b}_i\}$ is its
eigenbasis.
We showed \cite{KZ91,Z92,Zy93} that the eigenvector statistics of a
quantum map $F$ conforms to
the prediction of random matrices, if operators $F$ and $G$
are {\sl relatively random}, i.e., their commutators are sufficiently
large.
In this paper we advocate an alternative method of solving the problem of the
arbitrariness of the choice of the expansion basis. Instead of working in a
finite discrete basis, we shall use the coherent states expansion of the
eigenstates of $F$. For several examples of compact classical phase spaces
one may construct a canonical
family of the generalized coherent states
\cite{Pe86}. Localization properties of any pure quantum state may be
characterized by the Wehrl entropy, equal to minus the phase-space average (weighted by the Husimi distribution) of the logarithm of its overlap
with a coherent state \cite{We78,We91}. We propose to describe the structure
of a given Floquet operator $F$ by the mean Wehrl entropy of its
eigenstates. This quantity, explicitly defined without any
arbitrariness, is shown to be a useful indicator of quantum chaos.
This paper is organized as follows. In section II we review the definition
of the Husimi distribution, stellar representation, and the Wehrl entropy.
For concreteness we work with the $SU(2)$ vector coherent states, linked to
the algebra of the angular momentum operator and corresponding to the
classical phase space isomorphic with the sphere. In section III we
define the
mean Wehrl entropy of eigenstates and present analytical results obtained
for low dimensional Hilbert spaces. An exemplary application of this quantity
to the analysis of the quantum map describing the model of the periodically
kicked top is provided in section IV.
\section{Husimi distribution and stellar representation}
Consider a compact classical phase space $\Omega$, a classical area
preserving map $\Theta:\Omega \to \Omega$ and a corresponding quantum map $F$
acting in an $N$-dimensional Hilbert space ${\cal H}_N$. A link between
classical and quantum mechanics can be established via a family of
generalized coherent states $|\alpha \rangle$. For several examples of
the classical phase spaces there exists a canonical family of coherent
states.
It forms an overcomplete basis and allows for an identity resolution
$\int_{\Omega} |\alpha \rangle \langle \alpha | d\alpha ={\bf 1}$. Any
mixed quantum state,
described by a density matrix $\rho$ can be represented by the
generalized Husimi distribution \cite{Hu40}, (Q-function)
\begin{equation}
H_{\rho}(\alpha ):= \langle \alpha |\rho |\alpha \rangle. \label{hus2}
\end{equation}
The standard normalization of the coherent states,
$\langle \alpha | \alpha\rangle =1$, assures that $\int_{\Omega}
H_{\rho}(\alpha)
d \alpha = 1$. For a pure quantum state $|\psi\rangle$ the Husimi
distribution is equal
to the overlap with a coherent state $H_{\psi}(\alpha ):=
|\langle \psi |\alpha \rangle|^2$. Let us note
that the Husimi distribution was
successfully applied to study dynamical properties of quantized
chaotic systems \cite{Ta86,Zy87}.
Consider a discrete partition of the unity into $n$ terms; $\sum_{i=1}^n
p_i=1$. The Shannon entropy $S_d=-\sum_{i=1}^n p_i\ln p_i$ characterizes
the uniformity of this partition. In an analogous way one defines the Wehrl
entropy of a quantum state $\rho$ \cite{We78}
\begin{equation}
S_{\rho} = - \int_{\Omega} H_{\rho}(\alpha) \ln [H_{\rho}(\alpha)]
d\alpha,
\label{wer1}
\end{equation}
in which the summation is replaced by an integration over the classical
phase space $\Omega$. This quantity characterizes the localization properties of a
quantum state in the classical phase space. It is small for coherent states
localized in the classical phase space $\Omega$ and large for the
delocalized states. The maximal Wehrl entropy corresponds to the maximally
mixed state $\rho_*$, proportional to the identity matrix,
for which the Husimi distribution is uniform.
Although the notions of the generalized coherent states, the Husimi
distribution, and the Wehrl entropy are well defined for several
classical
compact phase spaces, in this work we analyze in detail only the most
important case $\Omega=S^2$. This phase space is typical of
physical problems involving spins, due to the algebraic properties of the
angular momentum operator $J$. In this case one uses the family of spin
coherent states $|\vartheta,\varphi \rangle $ localized at points
$(\vartheta,\varphi)$ of the sphere $S^2$. These states, also called
$SU(2)$
vector coherent states, were introduced by Radcliffe \cite{r71} and Arecchi
{\sl et al.} \cite{a72} and are an example of the general group theoretic
construction of Perelomov \cite{Pe86}.
Consider an $N=2j+1$ dimensional representation of the angular momentum
operator $J$. For a reference state one usually takes the maximal eigenstate
$|j,j\rangle$ of the component $J_z$. This state, pointing toward the ``north
pole'' of the sphere, enjoys the minimal uncertainty. The vector
coherent state represents the reference state rotated by the angles $%
\vartheta$ and $\varphi$. Its expansion in the basis $|j,m\rangle$, $%
m=-j,\dots,+j$ reads \cite{VS95}
\begin{eqnarray}
|\vartheta , \varphi \rangle =\sum_{m=-j}^{m=j}
\sin ^{j-m}({\frac \vartheta 2})\cos ^{j+m}({\frac \vartheta 2})
\times \nonumber \\
\exp \Bigl(i(j-m)\varphi \Bigr)
\left[ {2j \choose j-m} \right]^{1/2}|j,m\rangle, \label{thetrot}
\end{eqnarray}
where
$\int_0^{2 \pi} d\varphi
\int_0^{\pi} \sin \vartheta d\vartheta
|\vartheta, \varphi \rangle \langle \vartheta,\varphi |
(2j+1)/4\pi ={\bf 1}$.
In terms of the $SU(2)$ coherent states the distribution (\ref{hus2}) reads in this
case $H_{\psi}(\vartheta,\varphi):= |\langle \psi | \vartheta,
\varphi
\rangle|^2$. Two different spin coherent states overlap unless they are
directed into two antipodal points on the sphere. The Husimi representation
of a spin coherent state has thus one zero (degenerate $N-1$ times)
localized at the antipodal point. In general,
any pure quantum state can be
uniquely described by the set of $N-1$ points distributed over the sphere.
Some of these zeros may be degenerate, just as in the case of a coherent
state. This method of characterizing a pure quantum state is called the
stellar representation \cite{Penr,Leb}.
In the analyzed case of the classical phase space isomorphic with the
sphere $S^2$ the Wehrl
entropy (\ref{wer1}) of a state $\rho$ equals
\begin{equation}
S_{\rho} = - {\frac{2j +1 }{4 \pi}} \int_0^{\pi}
\! \! \sin \vartheta d\vartheta \!
\int_0^{2\pi} \! d\varphi H_{\rho}(\vartheta,\varphi) \ln \Bigl[
H_{\rho}(\vartheta,\varphi)\Bigr], \label{wehr2}
\end{equation}
since the measure element $d \alpha$ is equal to
$(2j+1) \sin \vartheta d \vartheta d \varphi /4 \pi$.
Under this normalization the entropy of the maximally mixed state $\rho_*$
equals $\ln N$.
The Husimi distribution of an eigenstate $|j,m\rangle $ may be computed
directly from the definition (\ref{thetrot}). Due to the rotational symmetry
it does not depend on the azimuthal angle $\varphi $
\begin{equation}
H_{|j,m\rangle }(\vartheta )= \sin^{2(j-m)}(\vartheta / 2 )\cos
^{2(j+m)}( \vartheta / 2 ) {2j \choose j-m}, \label{husjz}
\end{equation}
which simplifies the computation of the Wehrl entropy. Simple integration
gives for the reference state $|j,j\rangle $
\begin{equation}
S_{{\rm coh}}={\frac{N-1}{N}}={\frac{2j}{2j+1}}. \label{lieb}
\end{equation}
Due to the rotational invariance the Wehrl entropy is the same for
any other coherent state.
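As a short check of (\ref{lieb}): for the reference state ($m=j$) equation (\ref{husjz}) gives $H_{|j,j\rangle}(\vartheta)=\cos^{4j}(\vartheta/2)$, and the substitution $u=\cos^{2}(\vartheta/2)$ in (\ref{wehr2}) reduces the computation to a single integral,
\begin{equation}
S_{\rm coh} = -(2j+1)\,2j\int_{0}^{1} u^{2j}\ln u\, du
= -(2j+1)\,2j\left(-\frac{1}{(2j+1)^{2}}\right) = \frac{2j}{2j+1}.
\end{equation}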
\medskip
\begin{tabular}{|c|c|c|c|c|}
\hline
$N$ & $j$ & $m$ & $S_{|j,m\rangle }$ & ${\bar S}_{J_z}$ \\ \hline\hline
$2$ & $1/2$ & $1/2$ & $1/2$ & $1/2=0.5$ \\ \hline\hline
$3$ & $1$ & $1$ & $2/3$ & $1-\frac{\ln 2}{3} \approx 0.769$
\\ \cline{3-4}
& & $0$ & $5/3-\ln 2$ & \\ \hline\hline
$4$ & $3/2$ & $3/2$ & $3/4$ & $\frac{3}{2}-\frac{\ln 3}{2}
\approx 0.951$ \\ \cline{3-4}
& & $1/2$ & $9/4-\ln 3$ & \\ \hline\hline
& & $2$ & $4/5$ & \\ \cline{3-4}
$5$& $2$ & $1$ & $79/30-\ln 4$ & $2-\frac{\ln 96}{5}
\approx 1.087 $ \\ \cline{3-4}
& & $0$ & $47/15-\ln 6$ & \\ \hline\hline
& & $5/2$ & $5/6$ & \\ \cline{3-4}
$6$& $5/2$ & $3/2$ & $35/12-\ln 5$ &
$\frac{5}{2}-\frac{1}{3} \ln 50\approx 1.196$ \\
\cline{3-4}
& & $1/2$ & $15/4-\ln 10$ & \\ \hline
\end{tabular}
\smallskip
Table 1. Wehrl entropy $S_{|j,m\rangle}$ for the eigenstates of $J_z$
and its mean ${\bar S}_{J_z}$
for $N=2,3,4,5,6$. Due to the geometrical symmetry
$S_{|j,m\rangle}=S_{|j,-m\rangle}$.
\medskip
The Wehrl entropies for other eigenstates of $J_{z}$ are collected in
Tab. 1
for some lower values of $N$. These results may also be obtained from
the general formulae provided by Lee \cite{Le88} for the Wehrl entropy
of the
pure states in the stellar representation. The eigenstate $|j,m\rangle $ is
represented by $j+m$ zeros at the south pole and $j-m$ zeros at the
north
pole. For $j=1/2$ ($N=2$) all the states are $SU(2)$ coherent, so their
entropies are equal. For $j=1$ ($N=3$) the coherent state $|1,1\rangle $ is
characterized by the smallest entropy, while the state $|1,0\rangle $ by the
largest (among the pure states).
The larger $N$, the more room there is for a
varied behaviour of pure states, as measured by the values of $S$. The
axis of the Wehrl entropy is drawn schematically in Fig. 1.
\vskip -1.5cm
\begin{figure}
\hspace*{-1.6cm}
\vspace*{-2.7cm}
\epsfxsize=9.5cm
\epsfbox{meanrys1.ps} \\
\caption{Axis of Wehrl entropy for pure states
of $N$ dimensional Hilbert space;
a) $N=2$ for which
${\bar S}_{\rm min}={\bar S}_{\rm max}=1/2$;
b) $N=3$ for which
${\bar S}_{\rm min}=2/3$, $\bar{S}_{J_z}\approx 0.77$,
$\langle S \rangle_3 \approx 0.83$, $S_{\rm max}\approx
0.973$;
c) $N=4$ for which
${\bar S}_{\rm min}=3/4$, $\bar{S}_{J_z}\approx 0.95$,
$\langle S \rangle_4 \approx 1.08$, $S_{\rm max}\approx
1.24$;
and d) $N\gg 1$, where ${\bar S}_{\rm min}=1-1/N$
while $\langle S \rangle_N \approx \ln N -0.423$.
}
\label{f1}
\end{figure}
It has been conjectured by Lieb \cite{Li78} that vector coherent
states are
characterized by the minimal value of the Wehrl entropy $S_{{\rm min}}=S_{%
{\rm coh}}$, the minimum being taken over all mixed states. For partial results
towards a proof of this conjecture see
\cite{Scutar,Le88,Schupp}. It
was also conjectured \cite{Le88} that the states with the most regular possible
distribution of all $N-1$ zeros of the Husimi function on the sphere
are characterized by the
largest possible Wehrl entropy among all pure states, $S_{\rm max}$.
Such
a distribution of zeros is easy to specify for
$(N-1)=4,6,8,12,20$, which correspond to the Platonic polyhedra.
For $N=2$ all pure states are coherent, so $S_{\rm min}=S_{\rm
max}=1/2$. For $N=3$ the maximal Wehrl entropy
$S_{\rm max}=5/3-\ln 2\approx 0.97$ is achieved for the state
$|1,0\rangle$, for which the two zeros of Husimi function are localized
at the opposite poles of the sphere. For $N=4$ the state with three
zeros located at the equilateral
triangle inscribed in a great circle is characterized by
$S_{\rm max}=21/8-2\ln 2\approx 1.24$. It will be interesting to
find
such maximally delocalized pure states for larger values of $N$, and to
study the dependence of $S_{\rm max}$ on $N$.
Let us emphasize that for $N\gg 1$ the pure states exhibiting a small Wehrl
entropy (of the order of $S_{\rm min}$) are not typical. In the
stellar
representation coherent states correspond to coalescence of all $N-1$ zeros
of Husimi distribution in one point. In a typical situation the density
of the zeros is close to uniform on the sphere,
and the Wehrl entropy of such delocalized pure states is large.
A random state can be generated according to the natural uniform
measure on the space of pure
states by taking any vector of an $N \times N$ random matrix
distributed according to the Haar measure on $U(N)$.
Averaging over this measure
one may compute the mean Wehrl entropy $\langle
S\rangle_N $ of the pure states belonging to the $N$ dimensional
Hilbert space.
Such integration was performed in \cite{KMH88,J91,SZ98}
in a slightly different context leading to
\begin{equation}
\left\langle S\right\rangle_N=\Psi \left( N+1\right) -\Psi \left(
2\right) =\sum_{n=2}^{N}{\frac{1}{n}}, \label{wehmean}
\end{equation}
where $\Psi $ denotes the digamma function. Note that another normalization
of the coherent states, used in Ref. \cite{SZ98}, leads to results shifted by
the constant $- \ln N$. Such a normalization
allows for a direct comparison between the entropies
describing the states of various $N$.
In the asymptotic limit $N\rightarrow \infty $ the mean
entropy $\langle S \rangle_N$
behaves as $\ln N+\gamma -1\sim \ln N-0.42278$, which is close to
the maximal possible Wehrl entropy
for mixed states $S_{\rho _{\ast }}=\ln N$. This result is
schematically marked in Fig. 1d.
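For completeness, this asymptotics follows directly from (\ref{wehmean}): using $\Psi(N+1) \approx \ln N$ for $N \gg 1$ and $\Psi(2) = 1 - \gamma$, with $\gamma \approx 0.5772$ the Euler constant,
\begin{equation}
\langle S \rangle_N = \Psi(N+1) - \Psi(2) \approx \ln N + \gamma - 1 \approx \ln N - 0.42278 .
\end{equation}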
\section{Mean Wehrl entropy of eigenstates of quantum map}
Consider a quantum pure state in the $N$-dimensional Hilbert space. Its
Wehrl entropy computed in the vector coherent states representation may
vary from $1-1/N$, for a coherent state, to a number of the order of
$\ln N$,
for a typical delocalized state. This difference suggests a simple
measure
of localization of eigenstates of a quantum map $F$. Denoting its
eigenstates by $|\psi_i\rangle$; $i=1,\dots,N$ we define the mean Wehrl
entropy of eigenstates
\begin{equation}
{\bar{S}_F} ={\frac{1 }{N}} \sum_{i=1}^N S_{|\psi_i \rangle }.
\label{meawer}
\end{equation}
This quantity may be straightforwardly computed numerically for an
arbitrary quantum
map $F$. For quantum analogues of classically chaotic systems exhibiting no
time reversal symmetry all eigenstates are delocalized. In this case the
mean Wehrl entropy of eigenvectors ${\bar S}_F$ fluctuates around
$\langle S \rangle_N \sim \ln N$.
In the opposite case of an integrable dynamics the eigenstates are, at least
partially, localized. A simple example is provided by any Hamiltonian
diagonal in the $J_{z}$ basis (or the basis of any other component of
$J$). The mean Wehrl entropy
$\bar{S}_{J_z}$ is given in Tab. 1 for some values of $N$.
Further analysis shows \cite{SKZ99} that for larger $N$ the mean entropy
behaves as ${1 \over 2} \ln N$. This result has a simple
interpretation. Let us
divide the surface of the sphere into $N\sim \hbar ^{-1}$ cells. A typical
eigenstate of $J_{z}$ is localized in a strip of (nearly) constant
polar angle $\vartheta$ and covers $\sqrt{N}$ of the cells, so its entropy
is of the order of $\ln \sqrt{N}$.
The quantity $\bar S$ is well defined in the generic
case of operators $F$ with a nondegenerate spectrum.
In the case of degeneracy there
exists a freedom in choosing the eigenvectors; to cure this lack of
uniqueness we define ${\bar S}_F$ as the minimum over all possible sets
of eigenvectors of $F$.
Having a general definition of the mean Wehrl entropy of eigenvectors
of an arbitrary unitary operator, one
may ask for which operators $F_{\rm min}$
($F_{\rm max}$) of a fixed dimension $N$ with a nondegenerate spectrum
this quantity is the smallest
(the largest). It is clear that $\bar{S}_{F_{\rm min}}$ is larger than
$S_{\rm coh}$ (for $N>2$), since a set of any $N$ coherent states
does not form an orthogonal basis. On the other hand, this
minimum is smaller
than $\bar{S}_{J_z}$,
as explicitly demonstrated in the Appendix for $N=3$.
The value $\bar {S}_{F_{\rm max}}$ is larger than the average over the
random unitary matrices
$\langle S \rangle_{U(N)}=\langle S \rangle_N$ and smaller than
$S_{\rho_*}=\ln N$.
The mean Wehrl entropy of the eigenstates ${\bar S}_F$
may be related
to the eigenvector statistics of the operator $F$. Let us expand a
given coherent state in the
eigenbasis of the Floquet operator,
$|\alpha \rangle=\sum_{i=1}^N c_i(\alpha) |\psi_i\rangle $. The
dynamical properties of a quantum system are characterized locally \cite
{Zy90} by the Shannon entropy $S_s(\alpha):=-\sum_{i=1}^N
|c_i(\alpha)|^2 \ln |c_i(\alpha)|^2$.
The mean Wehrl entropy may be thus written as an average over the phase
space
\begin{equation}
{\bar{S}_F} ={\frac{1 }{N}} \int_{\Omega} S_s(\alpha) d\alpha.
\label{meawe2}
\end{equation}
This link is particularly useful to analyze the influence of the time
reversal symmetry. In the presence of any generalized antiunitary symmetry
the operator $F$ may be described by the circular orthogonal ensemble
(COE).
There exists a symmetry line in the phase space and the coherent states
located along this line display eigenvector statistics typical of COE
\cite{Zy91}. This symmetry is also visible in the stellar representation
of the eigenstates and manifests itself by a clustering of zeros
of Husimi functions \cite{BBL,BKZ97}. However, a typical coherent
state does not exhibit such a symmetry and its eigenvector statistics is
typical of the circular unitary ensemble
(CUE). Thus for a system with the time-reversal symmetry the mean Wehrl
entropy will be slightly smaller than for the analogous system with the
time reversal symmetry broken, but much larger than the Shannon entropy
of real eigenvectors of matrices pertaining to the orthogonal ensemble.
\section{Mean Wehrl entropy for the kicked top}
In order to demonstrate the usefulness of the mean Wehrl entropy in the
analysis of quantum chaotic systems we present numerical results
obtained for the periodically kicked top. This model is very suitable
for the investigation of quantum chaos \cite{HKS,Ha91}. Classical dynamics
takes place on the sphere, while the quantum map is defined in terms of
the components of the angular momentum operator $J$. The size of the
Hilbert space is determined by the quantum number $j$ and equals
$N=2j+1$. The one-step evolution
operator reads $F_o=\exp(-ipJ_z)\exp(-ikJ_x^2/2j)$. For $p=1.7$ the
classical system becomes fully chaotic for the kicking strength
$k\approx 6$ \cite{HKS}. This system possesses a generalized antiunitary
symmetry and can be described by the orthogonal ensemble. The time
reversal symmetry may be broken by an additional kick \cite{Ha91}. The
system $F_u=F_o \exp(-ik'J_y^2/2j)$ pertains to the CUE and
will be called the unitary top.
\vskip -0.2cm
\begin{figure}
\hspace*{-0.6cm}
\vspace*{0.4cm}
\epsfxsize=9.0cm
\epsfbox{meanrys2.ps} \\
\caption{Husimi distribution of exemplary eigenstates of the
Floquet
operator of the orthogonal kicked top for $N=62$ in the
dominantly regular
regime ($k=0.5$), a) and b), and chaotic regime ($k=8.0$),
c) and d).
The sphere is represented in a rectangular projection with
$t=\cos\vartheta$. }
\label{f2}
\end{figure}
Fig. 2 presents the Husimi distributions of two eigenstates of $F_o$
for $p=1.7$
in the regime of regular motion $(k=0.5)$ and two for which
the classical dynamics is chaotic $(k=8.0)$.
In the quasiregular case
the eigenstates are localized close to parallel strips,
similar to those covered uniformly by the eigenstates of $J_z$. On the other
hand, the eigenstates of the chaotic map are delocalized over the entire
Wehrl entropies, equal correspondingly: a) $2.77$, b) $2.66$; and
c) $3.72$, d) $3.80$. The data, obtained for $N=62$, may be compared
with the mean entropy of the unperturbed system,
$\bar{S}_{J_z}\approx 2.465$,
the mean Wehrl entropy of a chaotic system without time reversal symmetry,
$\langle S \rangle_{62}\approx 3.712$, and the maximal entropy of
the mixed state, $S_{\rho_*}=\ln 62 \approx 4.1271$.
The above eigenstates are typical for both systems, and the other $60$
states display a similar character. The properties of all eigenstates
are thus described by the mean Wehrl entropy of eigenstates $\bar
{S}_F$. The dependence of this quantity on the kicking strength $k$ is
presented in Fig. 3. To show the relative difference between the entropies
typical of the regular and the chaotic dynamics we use the scaled coefficient
\begin{equation}
\mu (F) := { {\bar{S}}_F - {\bar{S}}_{J_z} \over
\langle S \rangle_N - {\bar{S}}_{J_z} }.
\label{gamma}
\end{equation}
By definition $\mu$ is equal to zero if $F$ is diagonal in the
$J_z$ basis, which corresponds to integrability. In the chaotic
regime $F$ is well described by CUE and $\mu$ is close to unity.
This is indeed the case for the unitary top with $k'=k/2$ and $k>6$.
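As a rough numerical illustration (our own estimate, using the $N=62$ values quoted in the previous section and taking the typical chaotic-regime eigenstate entropy ${\bar S}_F \approx 3.7$ as representative of the mean),
\begin{equation}
\mu \approx \frac{3.7 - 2.465}{3.712 - 2.465} \approx 0.99 ,
\end{equation}
while in the quasiregular regime ${\bar S}_F$ stays close to ${\bar S}_{J_z} \approx 2.465$ and $\mu$ remains small.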
The growth of $\mu$ is bounded and therefore it cannot, in general,
follow
the increase of the classical Kolmogorov--Sinai entropy $\Lambda$ (the
Lyapunov exponent averaged over the phase space), which for the
classical system grows with the kicking strength $k$ \cite{Zy93}.
The data for the orthogonal top fluctuate below unity, due to the existence
of the symmetry line. The difference between the coefficients $\mu$
obtained for both models does not depend on the kicking strength,
but decreases with $N$ and vanishes in the classical limit $N\to \infty$.
\vskip -1.4cm
\begin{figure}
\hspace*{-1.6cm}
\vspace*{-6.6cm}
\epsfxsize=9.9cm
\epsfbox{meanrys3.ps} \\
\caption{Scaled mean Wehrl entropy $\mu$ of the eigenvectors
of the Floquet operator for the
kicked top as a function of the kicking strength $k$ for
$N=62$. The
data are obtained for two models: unitary top $(\bullet)$ and
orthogonal top $(\diamond)$.
The crosses denote values of the classical Kolmogorov-Sinai entropy
$\Lambda$, which characterize
the transition to chaos in the classical analogue of the
orthogonal top.
}
\label{f3}
\end{figure}
\section{Concluding remarks}
The Wehrl entropy of a given state characterizes its localization in the
classical phase space. We have shown that the mean Wehrl entropy
${\bar S}_F$ of eigenstates of a given evolution operator
$F$ may serve as a useful
indicator of quantum chaos. Let us emphasize that this quantity, linked
to the classical phase space by a family of coherent states, does not
depend on the choice of basis. This contrasts with other quantities,
like the eigenvector statistics, the localization entropy, or the inverse participation
ratio, which are often used to study the properties of eigenvectors.
It will be interesting to find the unitary operators (or rather the
repers) for which ${\bar S}_F$ is the smallest or the largest.
The mean Wehrl entropy of eigenstates enables one to detect the
transition from regular motion to chaotic dynamics. On the other hand, it
is not related to the classical Kolmogorov--Sinai entropy (or
to the Lyapunov exponent), so it cannot be
used to measure the degree of chaos in quantum systems. Such a link with
the classical dynamics is established for
the {\sl coherent states dynamical entropy} of a given quantum map
\cite{SZ95,SZ98},
but this quantity is much more difficult to calculate.
Both these quantities characterize the
{\sl global} dynamical properties of a quantum system, in contrast to
the entropy of Mirbach and Korsch \cite{MK95,MK98}, which describes the
{\sl local} features.
Mean Wehrl entropy characterizes the structure of eigenvectors of $F$,
and is not related at all to the spectrum of this operator. Thus it is
possible to construct a unitary operator with a Poissonian spectrum and
the delocalized eigenvectors. Or conversely, one may find an operator
with a spectrum typical of CUE and all eigenstates localized.
This shows that the relevant information concerning the dynamical
properties of a quantum system described by a unitary evolution
operator $F$ is contained both in its spectrum and in its eigenstates.
I am indebted to W. S{\l}omczy{\'n}ski for
fruitful discussions and a constant interest in the progress of this
research. I am also thankful to M. Ku{\' s} and P. Pako{\' n}ski for
helpful remarks.
It is a pleasure to thank Bernhard Mehlig for the invitation to Dresden
and the Center for Complex Systems for a support during the workshop.
Financial support from Polski Komitet Bada{\'n}
Naukowych in Warsaw under the grant no 2P-03B/00915
is gratefully acknowledged.
|
\section*{S1 Realization with superconducting qubits}
\label{superconducting qubits}
We consider the case of two superconducting transmon qubits with a fixed capacitive coupling $g$ far less than the resonance frequency of either qubit.
The Hamiltonian of the system can be written as
\begin{eqnarray}
H &=& \omega_1(t) a^\dagger a + \frac{\alpha_1}{2}a^\dagger a^\dagger a a + \omega_2 b^\dagger b + \frac{\alpha_2}{2}b^\dagger b^\dagger b b + g(a^\dagger b + b^\dagger a),
\end{eqnarray}
where $\alpha_i$ is the anharmonicity parameter of qubit $i$ and we work in units where $\hbar=1$. Each qubit's resonance frequency and the coupling between them may be written as
\begin{eqnarray}
\omega_i=\sqrt{8E_{Ji}E_{Ci}}-E_{Ci}
\end{eqnarray}
\begin{eqnarray}
g=\frac{E_{Cc}}{\sqrt{2}}(\frac{E_{J1}E_{J2}}{E_{C1}E_{C2}})^{1/4}
\end{eqnarray}
provided $E_{Ji} \gg E_{Ci}$, where $E_{Ji}$ and $E_{Ci}\approx-\alpha_i$ are the Josephson and charging energies of qubit $i$ respectively, with $E_{Cc}$ the charging energy of the coupling capacitor \cite{Koch2007S, Didier2018S}. For sufficiently large anharmonicity, we may truncate the above Hamiltonian and obtain equation (1). To realize the scheme outlined in the main text, the most straightforward approach is to use one tunable-frequency qubit with resonance frequency $\omega_1(t)$ and one fixed-frequency qubit with $\omega_2$. We consider an asymmetric transmon for which the two Josephson junctions which comprise its SQUID loop have different Josephson energies. The frequency of qubit $1$ is tunable by varying the applied flux $\Phi$ through the loop, given that
\begin{eqnarray}
E_{J1}(\Phi)=E_{J1}^{\rm max}\cos{\Big{(}\frac{\pi\Phi}{\Phi_0}\Big{)}\sqrt{1+d^2\tan^2{\Big{(}\frac{\pi\Phi}{\Phi_0}\Big{)}}}},
\end{eqnarray}
where $E_{J1}^{\rm max}$ is the total, maximum Josephson energy of the loop, $\Phi_0$ is the magnetic flux quantum, and $d$ is a measure of the junction asymmetry \cite{Hutchings2017S}. The applied flux $\Phi$ may be varied in time to satisfy the requirements on the time-dependent frequency. As an example, when considering
the accelerated dynamics in Fig.~\ref{system_10_4_20}e and the decelerated dynamics in Fig.~\ref{VFF_10_25_20}b, the flux should be smoothly tuned from $\Phi(0)=0$ to $\Phi(T_{F})=0.5\Phi_0$ in a time $T_{F}=0.9g^{-1}$ or $T_{F}=1.1g^{-1}$.
As one particular example, the accelerated and decelerated dynamics can be closely replicated for a total evolution time on the order of $10^{2}$ ns for $E_{J1}^{\rm max}=30$ GHz, $E_{J2}=27.7$ GHz, $E_{C1,2}=203$ MHz, $g=9$ MHz, and $d=0.85$, which is comparable to modern gate times with transmon qubits and far shorter than standard relaxation and dephasing times, typically on the order of tens of microseconds.
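As a rough consistency check of these numbers (our own evaluation of the expressions above, not part of the original parameter derivation), the bare qubit frequencies are
\begin{eqnarray}
\omega_1(\Phi=0) &\approx& \sqrt{8 \times 30\,\text{GHz} \times 0.203\,\text{GHz}} - 0.203\,\text{GHz} \approx 6.8\,\text{GHz}, \nonumber \\
\omega_2 &\approx& \sqrt{8 \times 27.7\,\text{GHz} \times 0.203\,\text{GHz}} - 0.203\,\text{GHz} \approx 6.5\,\text{GHz}, \nonumber
\end{eqnarray}
while at $\Phi=0.5\Phi_0$ one has $E_{J1} \rightarrow d\,E_{J1}^{\rm max} = 25.5$ GHz and $\omega_1 \approx 6.2$ GHz, so the flux sweep carries qubit 1 through resonance with qubit 2, with the end-point detunings of a few hundred MHz remaining large compared to $g = 9$ MHz.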
\section*{S2 Single qubit}
\label{Single qubit}
In this section, we show that the dynamics examined in the main text can be emulated by a superconducting transmon qubit under a drive field.
The Hamiltonian of the system is written as
\begin{eqnarray}
\frac{H}{\hbar}= \omega(t) a^\dagger a+ \frac{\alpha_0}{2}a^\dagger a^\dagger a a+ 2\Omega \cos(\omega_{\rm d}t)(a + a^\dagger) ,
\end{eqnarray}
where $\omega$ and $\alpha_0$ are the angular frequency and anharmonicity parameter of the transmon, and $\Omega$ and $\omega_{\rm d}$ are the Rabi frequency and angular frequency of the drive field. Now we move to a rotating frame with frequency $\omega_d$ and use the rotating wave approximation (RWA) to obtain
\begin{eqnarray}
\frac{H_{\rm RWA}}{\hbar}=\Delta(t) a^\dagger a+ \frac{\alpha_0}{2}a^\dagger a^\dagger a a+ \Omega (a + a^\dagger),
\end{eqnarray}
where $\Delta(t) = \omega(t) - \omega_{\rm d}$.
We assume that $|\Delta|, \Omega \ll | \alpha_0 |$.
Then the system can be approximated as a two level system for which the Hamiltonian is represented as
\begin{eqnarray}
H_{\rm RWA}=\frac{\hbar \Delta}{2} \sigma_z + {\hbar \Omega} \sigma_x,
\end{eqnarray}
where we shifted the origin of the energy by $\hbar\Delta/2$.
This is effectively the same as the system for which the dynamics is governed by equation~(\ref{SE_ref_10_24_20}).
Therefore, the accelerated and decelerated dynamics examined in the main text can be emulated by this single qubit system under a drive, provided that the detuning remains much smaller than $|\alpha_0|$.
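As an illustration only, the effective two-level dynamics can be propagated numerically. The Python sketch below steps $H_{\rm RWA}=\hbar\Delta(t)\sigma_z/2+\hbar\Omega\sigma_x$ with a piecewise-constant matrix exponential; the linear detuning ramp and all numerical values are assumptions made purely for demonstration.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sz = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli matrices, hbar = 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)

Omega = 1.0                  # drive Rabi frequency (arbitrary units)
T, steps = 2.0, 2000         # total time and number of steps (assumed)
dt = T / steps

def Delta(t):                # assumed linear detuning ramp, for illustration
    return -5.0 + 10.0 * t / T

psi = np.array([1.0, 0.0], dtype=complex)      # start in the ground state
for k in range(steps):
    H = 0.5 * Delta((k + 0.5) * dt) * sz + Omega * sx
    psi = expm(-1j * H * dt) @ psi             # piecewise-constant propagation

print("final populations:", np.abs(psi)**2)
\end{verbatim}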
\section*{S3 Role of additional phase}
\label{Role of additional phase}
We discuss the role of the additional phase by showing how the relative phase between $\phi_m$ and $\phi_l$ influences the time dependence of the population.
Using equation~(\ref{SE_ref_10_24_20}), the time derivative of the population $|\phi_m|^2$ can be written as
\begin{eqnarray}
\frac{d}{dt}|\phi_m|^2 = -ig\phi_m^\ast \phi_l + ig\phi_m \phi_l^\ast.
\label{dpop_2_26_21}
\end{eqnarray}
Equation~(\ref{dpop_2_26_21}) can be rewritten as
\begin{eqnarray}
\frac{d}{dt}|\phi_m|^2 = -2g\tilde\phi_m \tilde\phi_l \sin(\theta_m - \theta_l),
\label{dpop_2_26_21v2}
\end{eqnarray}
where $\tilde\phi_m$ and $\theta_m$ are the intensity and the phase of $\phi_m$, that is,
$\phi_m=\tilde\phi_m e^{i\theta_m}$ with $\tilde\phi_m, \theta_m \in \mathbb{R}$.
Thus the rate of change of the population depends on the relative phase $\theta_m - \theta_l$ as well as on the intensities $\tilde\phi_m$, $\tilde\phi_l$ and on the coupling $g$.
The intensity of the wave function and the coupling of the accelerated or decelerated dynamics are the same as in the reference dynamics in our formalism.
Therefore, deformation of the phase via additional phase is required for acceleration and deceleration.
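The equivalence of equations~(\ref{dpop_2_26_21}) and (\ref{dpop_2_26_21v2}) is also easy to confirm numerically. A minimal Python check with arbitrary amplitudes and coupling:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
g = 0.7                                     # arbitrary coupling
phi_m, phi_l = rng.normal(size=2) + 1j * rng.normal(size=2)

# right-hand side of the first form: -i g phi_m* phi_l + i g phi_m phi_l*
form1 = -1j * g * np.conj(phi_m) * phi_l + 1j * g * phi_m * np.conj(phi_l)

# right-hand side of the second form: -2 g |phi_m||phi_l| sin(theta_m - theta_l)
form2 = -2.0 * g * np.abs(phi_m) * np.abs(phi_l) \
        * np.sin(np.angle(phi_m) - np.angle(phi_l))

print(form1.real, form2, abs(form1.real - form2) < 1e-12)
\end{verbatim}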
\section*{S4 Tunable coupling}
\label{Tunable coupling}
The fast-forward scaling theory can be extended straightforwardly to the case in which the coupling strength is tunable.
The reference dynamics develops under time-dependent $g$ and $\omega_m$.
The Schr\"{o}dinger equation is represented by
\begin{eqnarray}
i\frac{d}{dt}\phi_{m}(t) = g(t) \phi_{l}(t) + \omega_m(t) \phi_{m}(t).
\label{SE_ref_1_31_21}
\end{eqnarray}
We extend the formalism to obtain $g$ and $\omega_m$ which accelerate, decelerate, or reverse the system evolution relative to the reference dynamics.
If $g$ and $\omega_m$ can be perfectly controlled, we can use a trivial scaling property as explained later.
However, the controllability of the parameters is limited in speed and range by device parameters and control hardware.
Therefore, it will be meaningful to develop fast-forward scaling theory also for the case with a tunable coupling as the theory would then provide various ways to generate a target state.
We assume that the wave function, $ \phi_m^{\rm FF}(t)$, of the accelerated, decelerated, or reversed dynamics has the same form as equation~(\ref{phiFF_1_31_21}).
We assume that $\phi_m^{\rm FF}$ is a solution of the Schr\"{o}dinger equation:
\begin{eqnarray}
i\frac{d}{dt}\phi_m^{\rm FF}(t) = g^{\rm FF}(t) \phi_{l}^{\rm FF}(t) + \omega_m^{\rm FF}(t) \phi_m^{\rm FF}(t).
\label{SEFF_1_31_21_v2}
\end{eqnarray}
In the same manner as used for equations (\ref{f_10_10_20}) and (\ref{V_10_10_20}), we can obtain two equations:
\begin{eqnarray}
\alpha(t) g(t) {\rm Im} [ \phi_m^\ast \phi_{l}] = g^{\rm FF}(t) {\rm Im}\{ \phi_m^\ast \phi_l \exp[i(f_l-f_m)]\} \nonumber\\
\label{f_1_31_21}
\end{eqnarray}
and
\begin{eqnarray}
\omega_m^{\rm FF}(t) &=& {\rm Re} \Big\{ \frac{\phi_l}{\phi_m} \Big{(}\alpha(t) g(t)- g^{\rm FF}(t)\exp[i(f_l-f_m)] \Big{)} \Big\}\nonumber\\
&& + \alpha(t) \omega_m(\Lambda(t)) - \frac{df_m}{dt},
\label{V_1_31_21}
\end{eqnarray}
where $l\ne m$, and $\phi_{m(l)}$ and $f_{m(l)}$ abbreviate $\phi_{m(l)}(\Lambda(t))$ and $f_{m(l)}(t)$, respectively.
Equations~(\ref{f_1_31_21}) and (\ref{V_1_31_21}) have a trivial solution $g^{\rm FF}(t)=\alpha(t) g(t)$, $\omega_m^{\rm FF}(t)=\alpha(t) \omega_m(\Lambda(t))$ and $f_m(t)=0$.
The corresponding dynamics is simply scaled with respect to time without any additional phase.
However, in general, $g^{\rm FF}(t)$ can be chosen to be different from $\alpha(t) g(t)$.
Thus, equations~(\ref{f_1_31_21}) and (\ref{V_1_31_21}), which encompass the simply scaled dynamics,
provide us with various choices for the time dependence of the control parameters for acceleration, deceleration, or reversal.
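As a numerical sanity check of the trivially scaled branch, the following Python sketch integrates a two-level reference dynamics and the corresponding fast-forward dynamics in which the parameters are multiplied by $\alpha(t)$ and the reference dynamics is sampled at the rescaled time $\Lambda(t)$ (with $f_m=0$); the specific time dependences of $g$, $\omega_m$ and $\alpha$ are arbitrary illustrative choices. The final states agree up to integration error, confirming that this branch is a pure time rescaling.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Arbitrary illustrative time dependences of the reference dynamics.
g      = lambda t: 1.0 + 0.3 * np.sin(t)
omega1 = lambda t: 0.5 * np.cos(t)
omega2 = lambda t: -0.2 * t
alpha  = lambda t: 1.5 + 0.5 * np.cos(2.0 * t)          # magnification factor

def H(t):                                               # reference Hamiltonian
    return np.array([[omega1(t), g(t)], [g(t), omega2(t)]], dtype=complex)

def Lam(t, n=2000):                                     # Lambda(t) = int_0^t alpha
    s = (np.arange(n) + 0.5) * (t / n)
    return float(np.sum(alpha(s)) * (t / n))

def propagate(Hfun, T, steps=4000):
    psi, dt = np.array([1.0, 0.0], dtype=complex), T / steps
    for k in range(steps):
        psi = expm(-1j * Hfun((k + 0.5) * dt) * dt) @ psi
    return psi

T_F = 1.0
ref = propagate(H, Lam(T_F))                            # phi(Lambda(T_F))
ff  = propagate(lambda t: alpha(t) * H(Lam(t)), T_F)    # trivially scaled dynamics
print(np.linalg.norm(ref - ff))                         # small: pure time rescaling
\end{verbatim}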
\section*{S5 Gaps between trajectories}
\label{Gaps between trajectories}
In order to examine the mechanism by which the gaps between trajectories manifest, we rewrite equation~(\ref{f_10_10_20}) as
\begin{eqnarray}
\alpha(t) b(t) = b(t) \cos(f_l(t)-f_m(t)) + a(t) \sin(f_l(t)-f_m(t)),
\label{fver2_1_30_21}
\end{eqnarray}
where $a(t)={\rm Re}[\phi_m^\ast \phi_{l}]$ and $b(t)={\rm Im}[\phi_m^\ast \phi_{l}]$, and
$\phi_{m(l)}$ abbreviates $\phi_{m(l)}(\Lambda(t))$ in this section.
Equation~(\ref{fver2_1_30_21}) can be rewritten as
\begin{eqnarray}
\frac{\alpha(t) b(t)}{r(t)} = \sin(f_l(t)-f_m(t)+\theta(t)),
\label{fver3_1_30_21}
\end{eqnarray}
where $r(t)=\sqrt{a(t)^2 + b(t)^2}$ and $\tan\theta(t) = b(t)/a(t)$.
Equation~(\ref{fver3_1_30_21}) has at most two solutions of $f_l(t)-f_m(t)$ for $-\pi \le f_l(t)-f_m(t) < \pi$.
When $\phi_m^\ast \phi_{l}$ is purely imaginary ($a(t)=0$, $b(t)\ne 0$) and $\alpha(t)=1$, the two solutions are degenerate at $f_l-f_m=0$.
Figure~\ref{phase_log_com_10_10_20} shows $\ln|\beta^{\rm FF}|$ for various values of $T_{\rm F}$ for $f_1(t)=0$.
In Fig.~\ref{phase_log_com_10_10_20}a for $T_{\rm F}=g^{-1}$, there is a trivial trajectory at $f_2=0$ given that $\alpha(t)=1$. This trajectory corresponds to the reference dynamics.
The intersections of the trajectories at $f_2=0$ in Fig.~\ref{phase_log_com_10_10_20}a correspond to the degeneration points where $\phi_m^\ast \phi_{l}$ is purely imaginary.
The intersections are disconnected and the gap between the trajectories opens in Fig.~\ref{phase_log_com_10_10_20}b--e where $\alpha(t)\ne 1$ for $0<t<T_{\rm F}$.
The gap between the trajectories opens horizontally for the decelerated dynamics as seen in Fig.~\ref{phase_log_com_10_10_20}b and \ref{phase_log_com_10_10_20}c which correspond to $\alpha(t)<1$.
On the other hand, the gap between the trajectories opens vertically for the acceleration as seen in Fig.~\ref{phase_log_com_10_10_20}d and \ref{phase_log_com_10_10_20}e which correspond to $\alpha(t)>1$.
This is because there are two solutions of $f_2(t)$ for $\alpha(t)<1$, while there is no solution of $f_2(t)$ for $\alpha(t)>1$, when $\phi_m^\ast \phi_{l}$ is purely imaginary.
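To make this counting explicit: writing $c(t)=\alpha(t)b(t)/r(t)$, equation~(\ref{fver3_1_30_21}) has the solutions $f_l-f_m=\arcsin c-\theta$ and $f_l-f_m=\pi-\arcsin c-\theta$ (modulo $2\pi$) when $|c|\le 1$, and none otherwise. A short Python sketch for the purely imaginary case $a=0$, $b>0$ (numerical values arbitrary) illustrates the degeneracy at $\alpha=1$ and the absence of solutions for $\alpha>1$:
\begin{verbatim}
import numpy as np

def phase_solutions(a, b, alpha):
    """Solutions x = f_l - f_m in [-pi, pi) of alpha*b = b*cos(x) + a*sin(x)."""
    r, theta = np.hypot(a, b), np.arctan2(b, a)
    c = alpha * b / r
    if abs(c) > 1.0:
        return []                              # no solution: a gap opens
    sols = [np.arcsin(c) - theta, np.pi - np.arcsin(c) - theta]
    # wrap to [-pi, pi) and drop duplicates (degenerate case alpha = 1, a = 0)
    return sorted({round(((s + np.pi) % (2 * np.pi)) - np.pi, 12) for s in sols})

for alpha in (0.8, 1.0, 1.2):      # deceleration, reference, acceleration
    print(alpha, phase_solutions(a=0.0, b=0.5, alpha=alpha))
\end{verbatim}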
\begin{figure}
\begin{center}
\includegraphics[width=15.0cm]{figure7.eps}
\end{center}
\caption{{\bf Gap opening between speed-controlled trajectories.}
{\bf a-e,} $\ln|\beta^{\rm FF}|$ as a function of $f_2$ and $t$ for $T_{\rm F}$ indicated in the panels.
Other parameters used are the same as in Fig.~\ref{system_10_4_20}b.
On the brown lines $f_2=0$ and $f_2=\pm 2\pi$ in {\bf a}, $\beta^{\rm FF}(t,f_2)=0$ ($\ln|\beta^{\rm FF}(t,f_2)|=-\infty$). These lines correspond to the reference dynamics.
{\bf b,c} correspond to deceleration, and {\bf d,e} to acceleration.
The arrows in {\bf b} and {\bf d} represent the direction in which the gap between the trajectories opens.
}
\label{phase_log_com_10_10_20}
\end{figure}
\section*{S6 Shift of $\omega$}
\label{Relative phase}
We show that shifting $\omega_1(t)$ and $\omega_2(t)$ by a common amount gives rise only to a change in the overall phase of the wave function.
We assume that $\phi_m$ satisfies equation~(\ref{SE_ref_10_24_20}).
Now, we introduce $\bar\phi_m(t)$ defined by
\begin{eqnarray}
\bar\phi_m(t) = \phi_m(t) e^{i\theta(t)},
\end{eqnarray}
where $\theta(t)$ is independent of $m$.
Using equation~(\ref{SE_ref_10_24_20}), we obtain
\begin{eqnarray}
i\frac{d}{dt}\bar\phi_{m}(t) = g \bar\phi_{l}(t) + \big{(}\omega_m(t) - \dot\theta(t)\big{)} \bar\phi_{m}(t).
\end{eqnarray}
This result shows that the wave function simply acquires an additional global phase when $\omega_1(t)$ and $\omega_2(t)$ are shifted by the same amount, $- \dot\theta(t)$.
\section{Introduction}
Experiments to map the cosmic background radiation (CBR) have stimulated renewed
interest in diffuse Galactic emission.
Sensitive observations of variations in the microwave sky brightness have
revealed a component of the 14-90 GHz microwave continuum
which is correlated with
$100\,\mu{\rm m}$ thermal emission from interstellar dust
(Kogut et al.\ 1996;
de Oliveira-Costa et al.\ 1997;
Leitch et al.\ 1997).
The origin of this ``anomalous" emission has been of great interest.
While the observed frequency-dependence appears consistent
with free-free emission
(Kogut et al.\ 1996; Leitch et al.\ 1997),
it is difficult to reconcile the observed intensities with free-free
emission from interstellar gas (see \S\ref{sec:freefree} below).
A recent investigation of the rotational dynamics of very small interstellar
grains (Draine \& Lazarian 1998; hereinafter DL98)
leads us instead to propose that this 10-100 GHz
``anomalous'' component of the diffuse Galactic background
is produced by electric
dipole rotational emission from very small dust grains under normal
interstellar conditions.
Below we describe briefly our assumptions concerning the interstellar
grain properties (\S\ref{sec:grain_props}), the dynamics governing the
rotation of small grains (\S\ref{sec:rotation}), the predicted emission
spectrum and intensity (\S\ref{sec:prediction}) and how it compares with
observations (\S\ref{sec:observations}), and why the observed emission cannot
be attributed to free-free (\S\ref{sec:freefree}).
In \S\ref{sec:discussion} we discuss implications for future CBR studies.
\section{Grain Properties\label{sec:grain_props}}
The observed emission from interstellar diffuse clouds at 12 and 25$\,\mu{\rm m}$
(Boulanger \& P\'erault 1988) is believed to be thermal emission from
grains which are small enough that absorption of a single photon can
raise the grain temperature to $\sim 150\,{\rm K}$ for the $25\,\mu{\rm m}$ emission,
and $\sim300\,{\rm K}$ for the $12\,\mu{\rm m}$ emission.
Such grains contain $\sim10^2-10^3$ atoms, and must be sufficiently
numerous to account for $\sim$20\% of the total rate of absorption of
energy from starlight.\footnote{
Assuming a Debye temperature $\Theta=420\,{\rm K}$, a 6 eV photon can
heat a particle with N=277 atoms to $T=200\,{\rm K}$.
}
A substantial fraction of these very small grains may be hydrocarbon
molecules, as indicated by emission bands
at 6.2, 7.7, 8.6, and 11.3$\,\mu{\rm m}$ observed from diffuse clouds by
{\it IRTS} (Onaka et al.\ 1996) and {\it ISO} (Mattila et al.\ 1996).
If the grains are primarily carbonaceous, they must contain $\sim 3-10\%$
of the carbon in the interstellar medium
[in the model of D\'esert et al.\ (1990),
polycyclic aromatic hydrocarbon molecules contain $\sim$9\% of the carbon].
The size distribution is uncertain.
As in DL98, for our standard model ``A''
we assume a grain size distribution consisting of a
power law $dn/da\propto a^{-3.5}$
(Mathis, Rumpl, \& Nordsieck 1977; Draine \& Lee 1984)
plus a log-normal distribution containing
5\% of the carbon;
50\% of the mass
in the log-normal distribution (i.e., 2.5\% of the C atoms)
is in grains with $N < 160$ atoms.
The grains are assumed to be disk-shaped for $N<120$, and spherical
for $N>120$.
The microwave emission from a spinning grain depends on the component
of the electric
dipole moment perpendicular to the angular velocity.
Following DL98, we assume that a neutral grain has a dipole moment
$\mu = N^{1/2}\beta$, where $N$ is the number of atoms in the grain.
Recognizing that there will be a range of dipole moments for grains of
a given $N$, we assume that 25\% of the grains have
$\beta=0.5\beta_0$, 50\% have $\beta=\beta_0$, and 25\% have $\beta=2\beta_0$.
For our standard models we will take $\beta_0=0.4\,{\rm debye}$, but since $\beta$
is uncertain we will examine the effects of varying $\beta_0$.
If the grain has a charge $Ze$, then it acquires an additional
electric dipole component arising from the fact that the centroid of
the grain charge will in general be displaced from the center of mass
(Purcell 1979); we follow DL98 in assuming a characteristic displacement
of $0.1a_x$, where $a_x$ is the rms distance of the grain surface from
the center of mass, and in computing the grain charge distribution $f(Z)$
including collisional charging and photoelectric emission.
Finally we assume that the dipole moment is, on average, uncorrelated
with the spin axis, so that the mean-square dipole moment transverse
to the spin axis is
\begin{equation}
\label{eq:mueff}
\mu_\perp^2= (2/3)
\left( \beta^2 N + 0.01 Z^2e^2a_x^2 \right) ~~~.
\end{equation}
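For orientation, equation~(\ref{eq:mueff}) is straightforward to evaluate. The Python sketch below does so in debye; the grain radius $a_x$ entering the charge term is treated as a free input, and the value of a few \AA\ used here is an assumption made purely for illustration.
\begin{verbatim}
import numpy as np

DEBYE = 1.0e-18        # statC cm
E_ESU = 4.803e-10      # elementary charge in statC

def mu_perp(N, Z, a_x_cm, beta_debye=0.4):
    """Transverse dipole moment (in debye) from the expression above:
    mu_perp^2 = (2/3) * (beta^2 N + 0.01 Z^2 e^2 a_x^2)."""
    intrinsic = (beta_debye * DEBYE)**2 * N
    charge    = 0.01 * (Z * E_ESU * a_x_cm)**2
    return np.sqrt((2.0 / 3.0) * (intrinsic + charge)) / DEBYE

# Illustrative grain: N = 200 atoms, a_x = 3.5 Angstrom (assumed), Z = 0 or 1.
for Z in (0, 1):
    print(f"Z = {Z}: mu_perp = {mu_perp(200, Z, 3.5e-8):.2f} debye")
\end{verbatim}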
We will show results for 6 different grain models.
Models B and C differ from our preferred model A by having $\beta_0$
decreased or increased by a factor 2.
Model D differs by having twice as many small grains in the log-normal
component.
In model E even the smallest grains are spherical, whereas in model F
the smallest grains are assumed to be linear.
\section{Rotation of Interstellar Grains \label{sec:rotation}}
Rotation of small interstellar grains has been discussed previously,
including the
fact that the rate of rotation of very small grains would be limited by
electric dipole emission (Draine \& Salpeter 1979).
Ferrara \& Dettmar (1994) recently estimated the rotational emission from
very small grains, noting that it could be observable up to $\sim$100 GHz,
but they did not self-consistently evaluate the emission spectrum including
the effects of radiative damping.
A number of distinct physical processes act to excite and damp rotation of
interstellar grains, including collisions with neutral atoms, collisions with
ions and long-range interactions with passing ions (Anderson \& Watson 1993),
rotational emission of electric dipole radiation,
emission of infrared radiation, formation of ${\rm H}_2$ molecules on the
grain surface, and photoelectric emission.
We use the rates for these processes summarized in
DL98; the results reported
here assume that ${\rm H}_2$ does {\it not} form on small grains, so that
there is no contribution by ${\rm H}_2$ formation to the rotation.
As discussed in DL98, observed rates of ${\rm H}_2$ formation on grains limit
the ${\rm H}_2$ formation torques so that they can make at most a minor contribution
to the rotation of the very small grains of interest here.
Emission in the 10-100 GHz region is dominated by grains containing
$N\ltsim10^3$ atoms.
For these very small grains under CNM conditions
(see Table \protect{\ref{tab:phases}}), rotational excitation is dominated by
direct collisions with ions and ``plasma drag''.
The very smallest grains ($N\ltsim 150$) have their rotation damped
primarily by electric dipole emission; for $150\ltsim N\ltsim 10^3$ plasma
drag dominates.
\section{Predicted Emission\label{sec:prediction}}
Following DL98, we solve for
the rms rotation rate $\langle \nu^2\rangle^{1/2}$, which depends on
the local environmental conditions (density $n_{\rm H}$, gas temperature
$T$, fractional ionization $n_e/n_{\rm H}$, and the intensity of the
starlight background), the
grain size, and the assumed value of $\beta$ in eq.(\ref{eq:mueff}).
Then, assuming a Maxwellian distribution of angular velocities for each
grain size and $\beta$,
we integrate over the size distribution to obtain the
emission spectrum for the grain population.
Interstellar gas is found in a number of characteristic
physical states (see, e.g., McKee 1990).
By mass, most of the gas and dust is found in the
``Warm Ionized Medium'' (WIM),
``Warm Neutral Medium'' (WNM),
``Cold Neutral Medium'' (CNM),
or in molecular clouds (MC).
In the diffuse regions (WIM,WNM,CNM) the bulk of the
dust is heated by the general
starlight background to temperatures $\sim18\,{\rm K}$ where it radiates
strongly at $\sim100\,\mu{\rm m}$ -- thus the $100\,\mu{\rm m}$ emission traces
the mass of the WIM, WNM, and CNM material.
Relatively little molecular material is present at $|b|>30\deg$ where
sensitive CBR observations are directed.
Hence we consider only the WIM, WNM, and CNM phases in the present
discussion.
In Table 1 we list the assumed properties for each of the interstellar
medium components which we consider here.
For $b>30\arcdeg$, $21\,{\rm cm}$ observations (Heiles 1976) indicate
$N_{\rm H}({\rm WNM})+N_{\rm H}({\rm CNM})\approx 3.4\times10^{20}(\csc b-0.2)\,{\rm cm}^{-2}$,
divided approximately equally between WNM and CNM phases
(Dickey, Salpeter, \& Terzian 1978).
Dispersion measures toward pulsars in 4 globular clusters
(Reynolds 1991) indicate
an ionized component
$N_{\rm H}({\rm WIM})\approx 5\times10^{19}\csc b\,{\rm cm}^{-2}$, after attributing $\sim$20\%
of the electrons to hot ionized gas (McKee 1990).
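A minimal Python sketch evaluating the two adopted column-density relations as a function of galactic latitude (the latitude values are arbitrary):
\begin{verbatim}
import numpy as np

def N_H_neutral(b_deg):
    """WNM + CNM column density (cm^-2), b > 30 deg (Heiles 1976)."""
    return 3.4e20 * (1.0 / np.sin(np.radians(b_deg)) - 0.2)

def N_H_WIM(b_deg):
    """Ionized column density (cm^-2) from pulsar dispersion measures."""
    return 5.0e19 / np.sin(np.radians(b_deg))

for b in (30.0, 45.0, 90.0):
    print(f"b = {b:4.1f} deg: N(WNM+CNM) = {N_H_neutral(b):.2e}, "
          f"N(WIM) = {N_H_WIM(b):.2e}")
\end{verbatim}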
In Figure \ref{fig:specs} we show the predicted rotational
emissivity per H nucleon,
where we
have taken the weighted average of emission from CNM, WNM, and WIM,
according to the mass fractions in
Table \ref{tab:phases}.
The heavy curve is for our preferred grain model A; the results shown
for grain models B-F serve to illustrate
the uncertainty in the predicted emission.
All models have the grain emissivity peaking at about $\sim$30 GHz; when
the smallest grains are taken to be spheres (model E),
the emission peak shifts to
$\sim$35 GHz because of the reduced moment of inertia of these smallest
grains.
In addition to the rotational emission from the very small grains,
we expect continuum emission from the vibrational modes of the
larger dust grains, which are thermally excited according to the
grain vibrational temperature $T_d$.
The vibrational emission per H atom is assumed to vary as $j_\nu/n_{\rm H} = A \nu^\alpha
B_\nu(T_d)$.
We show the emissivity for $\alpha=1.5$, 1.7, and 2;
in each case
we adjust $A$ and $T_d$ so that
$\nu j_\nu$ peaks at $\lambda=140\,\mu{\rm m}$,
with a peak emissivity
$4\pi\nu j_\nu/n_{\rm H} = 3\times 10^{-24}\,{\rm ergs}\,{\rm s}^{-1}{\rm H}^{-1}$
(Wright et al.\ 1991; cf. Fig. 5 of Draine 1995).
The resulting values of $\alpha$ and $T_d$ are within the range found
by Reach et al.\ (1995).
We will use $\alpha=1.7$ to estimate the thermal emissivity; by comparison
with the estimates for $\alpha=1.5$ and $\alpha=2$ we see that the
thermal emission at $\sim$100 GHz is uncertain by at least a factor $\sim2$.
We expect the thermal emission to
dominate for $\nu\gtsim70$ GHz,
with the rotational emission dominant at lower frequencies.
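The normalization of the vibrational component described above reduces to a Wien-like displacement condition: maximizing $\nu j_\nu\propto\nu^{\alpha+4}/(e^{h\nu/kT_d}-1)$ over $\nu$ gives $(\alpha+4)(1-e^{-x})=x$ with $x=h\nu/kT_d$. A minimal Python sketch, assuming only the definitions given in the text, recovers $T_d\approx18\,{\rm K}$ for $\alpha=1.7$:
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10       # cgs units

def B_nu(nu, T):
    """Planck function (erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

alpha   = 1.7
nu_peak = c / 140e-4                           # 140 micron, in Hz

# Peak condition for nu*j_nu ~ nu^(alpha+4)/(exp(x)-1):  (alpha+4)(1-e^-x) = x.
x = alpha + 4.0
for _ in range(50):                            # simple fixed-point iteration
    x = (alpha + 4.0) * (1.0 - np.exp(-x))
T_d = h * nu_peak / (k * x)

# Fix A from the quoted peak emissivity 4 pi nu j_nu / n_H = 3e-24 erg/s per H.
A = 3e-24 / (4.0 * np.pi * nu_peak**(alpha + 1.0) * B_nu(nu_peak, T_d))

print(f"x = {x:.2f}, T_d = {T_d:.1f} K, A = {A:.3e}")   # T_d close to 18 K
\end{verbatim}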
\section{Comparison With Observations\label{sec:observations}}
Figure \ref{fig:speca} shows the emission per H nucleon
for our preferred parameters
(model A, $\alpha=1.7$), for each phase as well as the weighted average
over the three phases.
Also shown are the observational results of
Kogut et al.\ (1996) (based on cross-correlation of {\it COBE} DMR 31.5, 53, and 90 GHz
maps with {\it COBE} DIRBE 100$\,\mu{\rm m}$ maps);
de Oliveira-Costa et al.\ (1997) (based on cross-correlation of
Saskatoon 30 and 40 GHz maps with {\it COBE} DIRBE 100$\,\mu{\rm m}$ maps);
and Leitch et al.\ (1997) (based on cross-correlation of
Owens Valley mapping at 14.5 and 32 GHz
with {\it IRAS} 100$\,\mu{\rm m}$ maps).
All three papers reported strong positive correlations of
microwave emission with 100$\,\mu{\rm m}$ thermal emission from dust.
The observed correlation of 100$\,\mu{\rm m}$ emission with
21cm emission,
$I_\nu(100\,\mu{\rm m})\approx 0.85{\rm MJy}\,{\rm sr}^{-1} (N_{\rm H}/10^{20}\,{\rm cm}^{-2})$
(Boulanger and P\'erault 1988), allows us to infer the excess
microwave emission per H atom, as shown in Fig.\ \ref{fig:speca}.
While Kogut
et al.\ attribute only $\sim$50\% of the 90 GHz signal to thermal emission,
we estimate that the 90 GHz signal is predominantly vibrational emission
from dust. The rotational emission which we predict appears to be in
good agreement with the 30-50 GHz measurements by Kogut et al.,
de Oliveira-Costa et al., and Leitch et al.
We conclude that rotational emission from small dust grains accounts
for a substantial fraction of the ``anomalous'' Galactic emission at
30-50 GHz.
Leitch et al.\ also report excess emission at 14.5 GHz which is correlated
with 100$\,\mu{\rm m}$ emission from dust.
For our model A, rotational emission from dust grains
accounts for only $\sim$30\% of the reported excess
emission at 14.5 GHz.
Additional emission at 14.5 GHz could be obtained by
changes in our adopted parameters: model B, in which dipole moments
are increased by a factor of two,
would be in good agreement with the 14.5 GHz result of Leitch et al.
However, the assumed dipole moment is larger than we consider likely, so
we do not favor this model.
We could of course also
improve agreement by increasing the number of small grains in the
size range ($N\approx 100$) primarily radiating near 14.5 GHz.
We note that there may be systematic
variations in the small grain population from one region to another.
The signal reported by Leitch et al.\ at 30 GHz is about a factor of
$\sim$3 larger than the results of Kogut et al.\ and de Oliveira-Costa et al.,
both of which average over much larger areas than the
observations of Leitch et al.
Additional observations to measure more precisely the Galactic emission
will be of great value.
We also note that
although Leitch et al.\ found no correlation of 14.5 GHz excess with
synchrotron maps at 408 MHz, synchrotron emission and rotational emission from
dust are expected to contribute approximately equally to the diffuse
background near 14 GHz (see Fig. \ref{fig:back}).
We conjecture that some of the 100$\,\mu{\rm m}$-correlated 14.5 GHz radiation
observed by Leitch et al.\ may be synchrotron emission which is enhanced
by increased magnetic field strengths near concentrations of gas
and dust.
\section{Why Not Free-Free? \label{sec:freefree}}
Based on the observed frequency-dependence of the excess radiation,
Kogut et al.\ proposed that it was a combination of thermal
emission from dust plus free-free emission from ionized hydrogen.
Leitch et al.\ reached the same conclusion based on their
14.5 and 32 GHz measurements.
Leitch et al.\ noted that if the proposed spatially-varying
free-free emission originated
in gas with $T\ltsim10^4\,{\rm K}$, it would be accompanied by H$\alpha$
emission at least 60 times stronger than the observed variations
in the H$\alpha$ sky brightness
(Gaustad et al.\ 1996) on the same angular scales.
Noting that the ratio of H$\alpha$ to
free-free radio continuum drops as the gas temperature is
increased above $\sim3\times10^4\,{\rm K}$,
Leitch et al.\ proposed that the ``anomalous'' emission
originated in gas at $T\gtsim10^6\,{\rm K}$.
However, this proposal appears to be untenable.
According to Leitch et al.,
the observed emission excess would require an emission measure
$EM \approx 130 T_6^{0.4}\,{\rm cm}^{-6}\,{\rm pc}$ with
$T_6\equiv(T/10^6\,{\rm K}) \gtsim 1$
near the North Celestial Pole (NCP,
$l\approx123\arcdeg$, $b\approx27.4\arcdeg$).
Kogut et al.\ and de Oliveira-Costa et al.\ reported a weaker signal at
30 GHz, corresponding to an emission measure
$\sim 40 T_6^{0.4}\,{\rm cm}^{-6}\,{\rm pc}$ toward the NCP.
The DMR observations (Kogut et al.\ 1996) cover a substantial fraction
of the sky, and would imply
an emission measure normal to
the galactic disk
$\sim 2\sin27\arcdeg\times40 T_6^{0.4}\,{\rm cm}^{-6}\,{\rm pc}$.
For $T\gtsim10^6\,{\rm K}$ the radiative cooling rate can be approximated
by $\sim1\times10^{-22}(T_6^{-1}+0.3T_6^{1/2})n_{\rm H} n_e \,{\rm ergs}\,{\rm cm}^3\,{\rm s}^{-1}$
(cf. Bohringer \& Hensler 1989)
so the power radiated per disk area would
be $\sim 30(T_6^{-0.6}+0.3T_6^{0.9})L_{\odot}\,{\rm pc}^{-2}$,
far in excess of the
$0.3L_{\odot}\,{\rm pc}^{-2}$ energy input for one $10^{51}\,{\rm ergs}$ supernova
per 100 yr per
10 kpc radius disk.
Evidently the proposed attribution of the ``anomalous emission'' to
bremsstrahlung from
hot gas can be rejected on energetic grounds.
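For completeness, the energetics above can be checked with a few lines of Python; only unit conversions are added to the emission measure and cooling function quoted in the text, and the result roughly reproduces the $\sim30(T_6^{-0.6}+0.3T_6^{0.9})$ and $\sim0.3\,L_\odot\,{\rm pc}^{-2}$ figures.
\begin{verbatim}
import numpy as np

PC, L_SUN, YR = 3.086e18, 3.826e33, 3.156e7        # cm, erg/s, s

def cooling_power(T6, EM_perp):
    """Radiated power per unit disk area (L_sun pc^-2) for an emission measure
    EM_perp (cm^-6 pc) of gas at T = T6 * 1e6 K, using the cooling function
    ~1e-22 (T6^-1 + 0.3 T6^0.5) erg cm^3 s^-1 quoted in the text."""
    Lam  = 1.0e-22 * (1.0 / T6 + 0.3 * np.sqrt(T6))
    flux = Lam * EM_perp * PC                      # erg cm^-2 s^-1
    return flux * PC**2 / L_SUN

T6 = 1.0
EM_perp = 2.0 * np.sin(np.radians(27.0)) * 40.0 * T6**0.4   # DMR-based estimate
print("hot-gas cooling : %.0f L_sun/pc^2" % cooling_power(T6, EM_perp))

# Supernova input: 1e51 erg per 100 yr spread over a 10 kpc radius disk.
sn_flux = 1.0e51 / (100.0 * YR) / (np.pi * (1.0e4 * PC)**2)
print("SN energy input : %.2f L_sun/pc^2" % (sn_flux * PC**2 / L_SUN))
\end{verbatim}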
\section{Discussion\label{sec:discussion}}
Very small dust grains have been invoked previously to explain the
observed 3 - 60$\,\mu{\rm m}$ emission from interstellar dust
(Leger \& Puget 1984; Draine \& Anderson 1985; D\'esert, Boulanger,
\& Puget 1990).
We have calculated the spectrum of rotational emission expected from
such dust grains,
and shown that it should be detectable
in the 10 -- 100 GHz region.
In fact, we argue that this emission has already been detected by
Kogut et al.\ (1996), de Oliveira-Costa et al.\ (1997), and Leitch et al.\ (1997).
Rotational emission from very small grains and ``thermal" (i.e., vibrational)
emission by
larger dust grains is an important foreground
which will have to be subtracted in future sensitive experiments to
measure
angular structure in the CBR.
To illustrate the relative importance of the emission from dust,
in Fig. \ref{fig:back} we show the estimated rms variations in Galactic
emission near the
NCP ($l=123\arcdeg$, $b=27.4\arcdeg$),
representative of intermediate galactic latitudes.
The HI column density is taken to be $N({\rm H}^0)=6.05\times10^{20}\,{\rm cm}^{-2}$
(Hartmann \& Burton 1997), plus
an additional column density $N({\rm H}^+)\sim1.3\times10^{20}\,{\rm cm}^{-2}$ of WIM.
We take 20\% as an estimate of the rms variations in column density
on angular scales of a few degrees.
We also show the synchrotron background near the NCP, based on a total antenna
temperature of $3.55\,{\rm K}$ at 1.42 GHz
(Reich 1982) and $I_\nu\propto\nu^{-1}$
at higher frequencies.
The synchrotron background is smoother than the HI; we take 5\% as
an estimate of the rms synchrotron variations on angular scales of
a few degrees.
From the standpoint of minimizing confusion with
non-CBR backgrounds, 70-100 GHz appears to be the optimal frequency window.
The ``thermal'' emission originates in larger grains, which are known to
be partially aligned with their long axes perpendicular to the local
magnetic field;
above 100 GHz most of the emission is thermal, and we
estimate that this will be $\sim5\%$ linearly polarized perpendicular to the
magnetic field (Dotson 1996 has observed up to 7\% polarization at
100$\,\mu{\rm m}$ toward M17).
Below $\sim$50 GHz most of the emission is rotational.
At this time it is not clear whether the angular momentum vectors of
very small dust grains will
be aligned with the galactic magnetic field.
Ultraviolet observations of interstellar polarization
place limits on such alignment
(Kim \& Martin 1995) but do not require it to be zero.
If the grain angular momenta tend to be aligned with the galactic
magnetic field, then the rotational emission will tend to be polarized
perpendicular to the magnetic field.
Physical processes which could produce alignment of these small grains
will be discussed in future work (Lazarian \& Draine 1998).
Even modest ($\sim$1\%) polarization of the dust emission could
interfere with efforts to measure the small polarization of the
CBR introduced by cosmological density fluctuations.
According to our model, the $\sim30\,$GHz emission and diffuse $12\,\mu{\rm m}$
emission should be correlated, as both originate in grains containing
$\sim10^2$ atoms; future satellite observations of the diffuse
background can test this.
Future observations by both ground-based experiments and satellites
such as {\it MAP} and {\it PLANCK} will
be able to characterize more precisely the intensity and spectrum of
emission from interstellar dust grains in the 10-200 GHz region.
Measurements of the 10-100 GHz emission
will constrain the abundance of very small grains, including
spatial variations, as well as their possible alignment.
\acknowledgements
We thank Tom Herbig, Ang\'elica de Oliveira-Costa,
Lyman Page, Alex Refregier, David Spergel,
Max Tegmark, and
David Wilkinson for helpful discussions.
We thank
Robert Lupton for the availability of the SM package.
B.T.D. acknowledges the support
of NSF grants AST-9319283 and AST-9619429, and
A.L. the support of NASA grant NAG5-2858.
\section{Introduction}
When studying interval maps from the point of view of ergodic theory, the first task is to find a suitable invariant measure $\mu$, preferably one that is absolutely continuous with respect to the Lebesgue measure. If the map is piecewise linear it is easy to obtain such a measure if
the map admits a Markov partition (see \cite{FB81}). If no Markov partition exists, the question becomes more delicate. A relatively simple expression for the invariant measure, however, can still be found if the system satisfies the so-called {\em matching condition}. Matching occurs when the orbits of the left and right limits of the discontinuity (critical) points of the map meet after a finite number of iterations and, at that time, their (one-sided) derivatives are also equal. Often results that hold for Markov maps remain valid when the system has matching. For instance, matching causes the invariant density to be piecewise smooth or constant. For the family
$T_\alpha : {\mathbb S}^1 \to {\mathbb S}^1$,
\begin{equation}\label{eq:T}
T_\alpha: x \mapsto \beta x + \alpha \pmod 1, \qquad \alpha \in [0,1), \, \beta \text{ fixed},
\end{equation}
formula (1.4) in \cite{FL96} states that
the $T_\alpha$-invariant density is
$$
h(x) := \frac{d\mu(x)}{dx} =
\sum_{T^n_\alpha(0^-) < x} \beta^{-n} - \sum_{T^n_\alpha(0^+) < x} \beta^{-n}.
$$
Matching then immediately implies that $h$ is piecewise constant, but this result holds in much greater generality, see \cite{BCMP}. A further example can be found in \cite{KS12}, where a certain construction involving generalised $\beta$-transformations is proved to give a multiple tiling in case the generalised $\beta$-transformation has a Markov partition or matching.
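As a quick numerical companion, the invariant density of $T_\alpha$ can also be approximated directly by histogramming a long orbit, which gives an independent check of expressions such as the formula above (up to normalization). A minimal Python sketch with arbitrary illustrative parameters:
\begin{verbatim}
import numpy as np

beta, alpha = 1.8, 0.3                 # arbitrary illustrative parameters
T = lambda x: (beta * x + alpha) % 1.0

# Histogram of a long (typical) orbit approximates the density of the
# unique absolutely continuous invariant measure.
x, orbit = np.random.random(), []
for _ in range(10**6):
    x = T(x)
    orbit.append(x)

hist, edges = np.histogram(orbit, bins=50, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
\end{verbatim}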
\vskip .2cm
Especially in the case of $\alpha$-continued fraction maps, the concept of matching has proven to be very
useful. In \cite{NN08,KSS12,CT12,CT13} matching was used to study the entropy as a function of $\alpha$.
In \cite{DKS09} the authors used matching to give a description of the invariant density for the
$\alpha$-Rosen continued fractions. The article \cite{BORG13} investigated the entropy and invariant
measure for a certain two-parameter family of
piecewise linear interval maps and conjectured a relation between these and matching. In several parametrised families
where matching was studied, it turned out that matching occurs prevalently, i.e., for (in some sense) typical parameters. For example, in \cite{KSS12} it is shown that the set of $\alpha$'s for which the $\alpha$-continued fraction map has matching, has full Lebesgue measure. This fact is proved in \cite{CT12} as well, where it is also shown that the complement of this set has Hausdorff dimension $1$.
\vskip .2cm
For piecewise linear transformations, prevalent matching appears to be rare: for instance it can occur in the
family \eqref{eq:T} only if the slope $\beta$ is an algebraic integer, see \eqref{q:0orbit}. It is likely that $\beta$ must
satisfy even stronger requirements for matching to hold: so far matching has only been observed for values
of the slope $\beta$ that are Pisot numbers (i.e., algebraic numbers $\beta > 1$ that have all Galois
conjugates strictly inside the unit disk) or Salem numbers (i.e., algebraic numbers $\beta > 1$ that have
all Galois conjugates in the closed unit disk with at least one on the boundary).
For instance if $\beta$ is the Salem number satisfying $\beta^4-\beta^3-\beta^2-\beta+1 = 0$ and if $\alpha \in \big(\frac1{\beta^4+1}, \frac{\beta}{\beta^4+1} \big)$, then the points 0 and 1 have the following orbits under $T_\alpha$:
\begin{equation}\label{eq:salem4}
\begin{array}{ccccccc}
0 & \to & \alpha & \to & (\beta+1)\alpha & \to & (\beta^2 + \beta + 1)\alpha,\\
1 & \to & \beta + \alpha -1 & \to & (\beta+1)\alpha + \frac1{\beta}-\frac1{\beta^2} & \to & (\beta^2 + \beta + 1)\alpha - \frac1{\beta}.
\end{array}
\end{equation}
Lemma~\ref{l:1overbeta} below now implies that there is matching after four steps. For this choice of the slope, numerical experiments suggest that matching
is prevalent,
but the underlying structure is not yet clear to us.
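For readers who wish to reproduce such computations, a minimal Python sketch that detects matching numerically is given below; it iterates the orbits of $0$ and $1$ under $T_\alpha$ and reports the first index at which they coincide (for constant slope the one-sided derivatives of $T_\alpha^n$ are both $\beta^n$, so equality of the orbits suffices). The tolerance and the choice $\alpha=0.14\in\big(\frac1{\beta^4+1},\frac{\beta}{\beta^4+1}\big)$ are illustrative.
\begin{verbatim}
import numpy as np

def matching_index(beta, alpha, n_max=50, tol=1e-9):
    """First n with T^n(0) = T^n(1) for T(x) = beta*x + alpha (mod 1), else None."""
    T = lambda x: (beta * x + alpha) % 1.0
    x, y = 0.0, 1.0
    for n in range(1, n_max + 1):
        x, y = T(x), T(y)
        d = abs(x - y)
        if min(d, 1.0 - d) < tol:          # distance on the circle
            return n
    return None

# Salem number: the largest root of z^4 - z^3 - z^2 - z + 1 = 0.
beta = float(np.roots([1, -1, -1, -1, 1]).real.max())
print(matching_index(beta, alpha=0.14))    # prints 4: matching after four steps
\end{verbatim}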
\vskip .2cm
The object of study of this paper is the two-parameter family of circle maps from \eqref{eq:T}. These are called {\em generalised} or {\em shifted $\beta$-transformations} and they are well studied in the literature. It follows immediately from the results of Li and Yorke (\cite{LY78}) that these maps have a unique absolutely continuous invariant measure, that is ergodic. Hofbauer (\cite{Hof81b}) proved that this measure is the unique measure of maximal entropy with entropy $\log \beta$. Topological entropy is studied in \cite{FP08}. In \cite{FL96,FL97} Flatto and Lagarias studied the lap-counting function of this map. The relation with normal numbers is investigated in \cite{FP09,Sch97,Sch11}. More recently, Li, Sahlsten and Samuel studied the cases in which the corresponding $\beta$-shift is of finite type (\cite{LSS16}).
\vskip .2cm
In this paper we are interested in the size of the set of $\alpha$'s for which, given a $\beta>1$, the map $T_{\alpha}: x \mapsto \beta x + \alpha \pmod 1$ does {\em not} have matching. We call this set $A_{\beta}$, i.e.,
\begin{equation}\label{q:A}
A_{\beta} = \{ \alpha \in [0,1]\, : \, T_{\alpha} \text{ does not have matching} \}.
\end{equation}
The first main result pertains to quadratic irrationals:
\begin{theorem}\label{t:quadratic}
Let $\beta$ be a quadratic algebraic number. Then $T_\alpha$ exhibits matching if and only if
$\beta$ is Pisot. In this case $\beta^2 -k\beta \pm d = 0$ for some $d \in \mathbb N$ and
$k > d\pm 1$, and $\dim_H(A_\beta) = \frac{\log d}{\log \beta}$.
\end{theorem}
This result is proved in Section~\ref{sec:nonunit}. The methods for $+d$ and $-d$ are quite similar, with the second
family being a bit more difficult than the first.
Section~\ref{s:minusd} includes the values of $\beta$ satisfying $\beta^2-k\beta - 1 = 0$, which are sometimes
called the metallic means.
In Section~\ref{sec:multinacci} we make some observations about the multinacci Pisot numbers (defined as the leading root of $\beta^k - \beta^{k-1} - \dots - \beta -1 = 0$). The second main result of this paper states that in the case $k = 3$ (the tribonacci number) the non-matching set has Hausdorff dimension strictly between 0 and 1. Results on the Hausdorff dimension or Lebesgue measure for $k \ge 4$, however, remain elusive.
\section{Numerical evidence}\label{s:numerical}
We have numerical evidence of prevalence of matching in the family \eqref{eq:T} for more values of $\beta$ than we currently have proofs for.
Values of the slope $\beta$ for which matching seems to be prevalent include many Pisot numbers, such as the multinacci numbers and the plastic constant (root of $x^3=x+1$). Let us mention that matching also occurs for some Salem numbers, including Lehmer's number (i.e., the root of the polynomial equation $x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1 = 0$, conjecturally the smallest Salem number), although it is still unclear whether matching is prevalent here. The same holds for the Salem number used in \eqref{eq:salem4}.
One can also use the numerical evidence to make a guess about the box dimension of the bifurcation set $A_{\beta}$ given in (\ref{q:A}). Indeed, if $A\subset [0,1]$ is a full measure open set, then the box dimension of $[0,1]\setminus A$ equals the abscissa of convergence $s^*$ of the series
$$ \Phi(s):=\sum_{J\in \mathcal{A}} |J|^s$$
where $ \mathcal{A}$ is the family of connected components of $A$. Let us fix a value $b>1$ and set $a_k:=\#\{J\in \mathcal{A} : -k-1\leq \log_b |J| < -k \}$; then $\Phi(s) \asymp \sum_k a_k b^{-sk}$ and hence $s^*=\limsup_{k\to \infty} \frac{1}{k}\log_b a_k$. This means that we can deduce information about the box dimension from statistics of the sizes of matching intervals.
The parameter $b$ in the above construction can be chosen freely; popular choices are $b=2$, $b=e$ and $b=10$. However, sometimes a clever choice of $b$ can be the key to make the growth rate of the sequence $\log_b a_k$ apparent from its very first elements.
Indeed, in the case of generalised $\beta$-transformations like in (\ref{eq:T}) a natural choice of the base is $b=\beta$, since it seems that if matching occurs in $k$ steps on an interval $J$, then its size satisfies $|J| \leq c \beta^{-k}$ (see Figure~\ref{k-vs-size}).
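A small Python sketch of this bookkeeping, applied to synthetic data that mimic the pattern observed for $\beta=2+\sqrt2$ (in practice one would feed in the list of numerically detected matching-interval lengths):
\begin{verbatim}
import numpy as np

def box_dim_estimate(lengths, b):
    """Bin interval lengths as a_k = #{J : -k-1 <= log_b|J| < -k} and
    estimate s* ~ max_k (1/k) log_b a_k."""
    k = np.floor(-np.log(np.asarray(lengths)) / np.log(b)).astype(int)
    ks, counts = np.unique(k, return_counts=True)
    est = max(np.log(c) / (np.log(b) * kk) for kk, c in zip(ks, counts) if kk > 0)
    return ks, counts, est

# Synthetic data: roughly 2^k intervals of size of order beta^-k, as seen
# numerically for beta = 2 + sqrt(2).  True answer: log 2 / log beta.
beta = 2.0 + np.sqrt(2.0)
lengths = [0.5 * beta**-k for k in range(1, 15) for _ in range(2**k)]
_, _, s_star = box_dim_estimate(lengths, beta)
print(s_star, np.log(2.0) / np.log(beta))
\end{verbatim}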
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{kvss-tribo.png}
\caption{This plot refers to the tribonacci value of $\beta$ and represents $-\log_\beta |J|$ (on the $y$-axis) against the matching index $k$ (on the $x$-axis): a linear dependence is quite apparent.
}
\label{k-vs-size}
\end{figure}
What actually happens in these constant slope cases is that there are many matching intervals of the very same size, and these ``modal'' sizes form a sequence $s_n \sim \beta^{-n}$.
This phenomenon is most evident when $\beta$ is a quadratic irrational; for instance in the case $\beta=2+\sqrt{2}$ all matching intervals seem to follow a very regular pattern, namely there is a decreasing sequence $s_n$ (the ``sizes'') such that:
\begin{enumerate}
\item[(i)] $-\frac{1}{n}\log_\beta s_n \to 1$ as $n\to \infty$ (note that here the log is in base $\beta=2+\sqrt{2}$);
\item[(ii)] if $J$ is a matching interval then $|J|=s_n$ for some $n$;
\item[(iii)] calling $a_n$ the cardinality of the matching intervals of size $s_n$
one gets the sequence 1, 2, 6, 12, 30, 54, 126, 240, 504, 990, 2046, 4020, 8190, $\ldots$,
which turns out to be known as A038199 on OEIS (see \cite{OEIS}).
Moreover $\frac{1}{n}\log (a_n) \to \log 2$ as $n \to \infty$.
\end{enumerate}
\begin{figure}[h]
\centering
\begin{tikzpicture}[gnuplot]
\gpsolidlines
\path (0.000,0.000) rectangle (12.700,10.160);
\gpcolor{color=gp lt color border}
\gpsetlinetype{gp lt border}
\gpsetlinewidth{1.00}
\draw[gp path] (1.748,0.616)--(1.928,0.616);
\draw[gp path] (12.147,0.616)--(11.967,0.616);
\node[gp node right] at (1.564,0.616) { 1};
\draw[gp path] (1.748,1.168)--(1.838,1.168);
\draw[gp path] (12.147,1.168)--(12.057,1.168);
\draw[gp path] (1.748,1.899)--(1.838,1.899);
\draw[gp path] (12.147,1.899)--(12.057,1.899);
\draw[gp path] (1.748,2.273)--(1.838,2.273);
\draw[gp path] (12.147,2.273)--(12.057,2.273);
\draw[gp path] (1.748,2.451)--(1.928,2.451);
\draw[gp path] (12.147,2.451)--(11.967,2.451);
\node[gp node right] at (1.564,2.451) { 10};
\draw[gp path] (1.748,3.003)--(1.838,3.003);
\draw[gp path] (12.147,3.003)--(12.057,3.003);
\draw[gp path] (1.748,3.734)--(1.838,3.734);
\draw[gp path] (12.147,3.734)--(12.057,3.734);
\draw[gp path] (1.748,4.108)--(1.838,4.108);
\draw[gp path] (12.147,4.108)--(12.057,4.108);
\draw[gp path] (1.748,4.286)--(1.928,4.286);
\draw[gp path] (12.147,4.286)--(11.967,4.286);
\node[gp node right] at (1.564,4.286) { 100};
\draw[gp path] (1.748,4.838)--(1.838,4.838);
\draw[gp path] (12.147,4.838)--(12.057,4.838);
\draw[gp path] (1.748,5.569)--(1.838,5.569);
\draw[gp path] (12.147,5.569)--(12.057,5.569);
\draw[gp path] (1.748,5.943)--(1.838,5.943);
\draw[gp path] (12.147,5.943)--(12.057,5.943);
\draw[gp path] (1.748,6.121)--(1.928,6.121);
\draw[gp path] (12.147,6.121)--(11.967,6.121);
\node[gp node right] at (1.564,6.121) { 1000};
\draw[gp path] (1.748,6.673)--(1.838,6.673);
\draw[gp path] (12.147,6.673)--(12.057,6.673);
\draw[gp path] (1.748,7.404)--(1.838,7.404);
\draw[gp path] (12.147,7.404)--(12.057,7.404);
\draw[gp path] (1.748,7.778)--(1.838,7.778);
\draw[gp path] (12.147,7.778)--(12.057,7.778);
\draw[gp path] (1.748,7.956)--(1.928,7.956);
\draw[gp path] (12.147,7.956)--(11.967,7.956);
\node[gp node right] at (1.564,7.956) { 10000};
\draw[gp path] (1.748,8.508)--(1.838,8.508);
\draw[gp path] (12.147,8.508)--(12.057,8.508);
\draw[gp path] (1.748,9.239)--(1.838,9.239);
\draw[gp path] (12.147,9.239)--(12.057,9.239);
\draw[gp path] (1.748,9.613)--(1.838,9.613);
\draw[gp path] (12.147,9.613)--(12.057,9.613);
\draw[gp path] (1.748,9.791)--(1.928,9.791);
\draw[gp path] (12.147,9.791)--(11.967,9.791);
\node[gp node right] at (1.564,9.791) { 100000};
\draw[gp path] (1.748,0.616)--(1.748,0.796);
\draw[gp path] (1.748,9.791)--(1.748,9.611);
\node[gp node center] at (1.748,0.308) {-5};
\draw[gp path] (2.693,0.616)--(2.693,0.796);
\draw[gp path] (2.693,9.791)--(2.693,9.611);
\node[gp node center] at (2.693,0.308) { 0};
\draw[gp path] (3.639,0.616)--(3.639,0.796);
\draw[gp path] (3.639,9.791)--(3.639,9.611);
\node[gp node center] at (3.639,0.308) { 5};
\draw[gp path] (4.584,0.616)--(4.584,0.796);
\draw[gp path] (4.584,9.791)--(4.584,9.611);
\node[gp node center] at (4.584,0.308) { 10};
\draw[gp path] (5.529,0.616)--(5.529,0.796);
\draw[gp path] (5.529,9.791)--(5.529,9.611);
\node[gp node center] at (5.529,0.308) { 15};
\draw[gp path] (6.475,0.616)--(6.475,0.796);
\draw[gp path] (6.475,9.791)--(6.475,9.611);
\node[gp node center] at (6.475,0.308) { 20};
\draw[gp path] (7.420,0.616)--(7.420,0.796);
\draw[gp path] (7.420,9.791)--(7.420,9.611);
\node[gp node center] at (7.420,0.308) { 25};
\draw[gp path] (8.366,0.616)--(8.366,0.796);
\draw[gp path] (8.366,9.791)--(8.366,9.611);
\node[gp node center] at (8.366,0.308) { 30};
\draw[gp path] (9.311,0.616)--(9.311,0.796);
\draw[gp path] (9.311,9.791)--(9.311,9.611);
\node[gp node center] at (9.311,0.308) { 35};
\draw[gp path] (10.256,0.616)--(10.256,0.796);
\draw[gp path] (10.256,9.791)--(10.256,9.611);
\node[gp node center] at (10.256,0.308) { 40};
\draw[gp path] (11.202,0.616)--(11.202,0.796);
\draw[gp path] (11.202,9.791)--(11.202,9.611);
\node[gp node center] at (11.202,0.308) { 45};
\draw[gp path] (12.147,0.616)--(12.147,0.796);
\draw[gp path] (12.147,9.791)--(12.147,9.611);
\node[gp node center] at (12.147,0.308) { 50};
\draw[gp path] (1.748,9.791)--(1.748,0.616)--(12.147,0.616)--(12.147,9.791)--cycle;
\gpcolor{color=gp lt color 1}
\gpsetlinetype{gp lt plot 1}
\draw[gp path] (3.058,0.616)--(3.058,1.168);
\draw[gp path] (3.257,0.616)--(3.257,2.044);
\draw[gp path] (3.449,0.616)--(3.449,2.596);
\draw[gp path] (3.638,0.616)--(3.638,3.327);
\draw[gp path] (3.828,0.616)--(3.828,3.795);
\draw[gp path] (4.017,0.616)--(4.017,4.470);
\draw[gp path] (4.206,0.616)--(4.206,4.984);
\draw[gp path] (4.395,0.616)--(4.395,5.575);
\draw[gp path] (4.584,0.616)--(4.584,6.113);
\draw[gp path] (4.773,0.616)--(4.773,6.692);
\draw[gp path] (4.962,0.616)--(4.962,7.230);
\draw[gp path] (5.151,0.616)--(5.151,7.797);
\draw[gp path] (5.340,0.616)--(5.340,8.252);
\draw[gp path] (5.529,0.616)--(5.529,8.447);
\draw[gp path] (5.719,0.616)--(5.719,8.146);
\draw[gp path] (5.908,0.616)--(5.908,7.817);
\draw[gp path] (6.097,0.616)--(6.097,7.442);
\draw[gp path] (6.286,0.616)--(6.286,7.067);
\draw[gp path] (6.475,0.616)--(6.475,6.659);
\draw[gp path] (6.664,0.616)--(6.664,6.254);
\draw[gp path] (6.853,0.616)--(6.853,5.891);
\draw[gp path] (7.042,0.616)--(7.042,5.592);
\draw[gp path] (7.231,0.616)--(7.231,5.205);
\draw[gp path] (7.420,0.616)--(7.420,4.793);
\draw[gp path] (7.609,0.616)--(7.609,4.513);
\draw[gp path] (7.798,0.616)--(7.798,4.184);
\draw[gp path] (7.987,0.616)--(7.987,4.024);
\draw[gp path] (8.176,0.616)--(8.176,3.701);
\draw[gp path] (8.366,0.616)--(8.366,3.353);
\draw[gp path] (8.555,0.616)--(8.555,2.874);
\draw[gp path] (8.744,0.616)--(8.744,2.596);
\draw[gp path] (8.933,0.616)--(8.933,2.451);
\draw[gp path] (9.122,0.616)--(9.122,1.899);
\draw[gp path] (9.311,0.616)--(9.311,2.451);
\draw[gp path] (9.500,0.616)--(9.500,2.044);
\draw[gp path] (9.689,0.616)--(9.689,1.168);
\draw[gp path] (9.878,0.616)--(9.878,1.899);
\draw[gp path] (10.256,0.616)--(10.256,1.168);
\gpsetpointsize{4.00}
\gppoint{gp mark 10}{(2.587,0.616)}
\gppoint{gp mark 10}{(2.829,0.616)}
\gppoint{gp mark 10}{(3.058,1.168)}
\gppoint{gp mark 10}{(3.257,2.044)}
\gppoint{gp mark 10}{(3.449,2.596)}
\gppoint{gp mark 10}{(3.638,3.327)}
\gppoint{gp mark 10}{(3.828,3.795)}
\gppoint{gp mark 10}{(4.017,4.470)}
\gppoint{gp mark 10}{(4.206,4.984)}
\gppoint{gp mark 10}{(4.395,5.575)}
\gppoint{gp mark 10}{(4.584,6.113)}
\gppoint{gp mark 10}{(4.773,6.692)}
\gppoint{gp mark 10}{(4.962,7.230)}
\gppoint{gp mark 10}{(5.151,7.797)}
\gppoint{gp mark 10}{(5.340,8.252)}
\gppoint{gp mark 10}{(5.529,8.447)}
\gppoint{gp mark 10}{(5.719,8.146)}
\gppoint{gp mark 10}{(5.908,7.817)}
\gppoint{gp mark 10}{(6.097,7.442)}
\gppoint{gp mark 10}{(6.286,7.067)}
\gppoint{gp mark 10}{(6.475,6.659)}
\gppoint{gp mark 10}{(6.664,6.254)}
\gppoint{gp mark 10}{(6.853,5.891)}
\gppoint{gp mark 10}{(7.042,5.592)}
\gppoint{gp mark 10}{(7.231,5.205)}
\gppoint{gp mark 10}{(7.420,4.793)}
\gppoint{gp mark 10}{(7.609,4.513)}
\gppoint{gp mark 10}{(7.798,4.184)}
\gppoint{gp mark 10}{(7.987,4.024)}
\gppoint{gp mark 10}{(8.176,3.701)}
\gppoint{gp mark 10}{(8.366,3.353)}
\gppoint{gp mark 10}{(8.555,2.874)}
\gppoint{gp mark 10}{(8.744,2.596)}
\gppoint{gp mark 10}{(8.933,2.451)}
\gppoint{gp mark 10}{(9.122,1.899)}
\gppoint{gp mark 10}{(9.311,2.451)}
\gppoint{gp mark 10}{(9.500,2.044)}
\gppoint{gp mark 10}{(9.689,1.168)}
\gppoint{gp mark 10}{(9.878,1.899)}
\gppoint{gp mark 10}{(10.067,0.616)}
\gppoint{gp mark 10}{(10.256,1.168)}
\gppoint{gp mark 10}{(10.445,0.616)}
\gppoint{gp mark 10}{(10.823,0.616)}
\gppoint{gp mark 10}{(11.391,0.616)}
\gppoint{gp mark 10}{(11.580,0.616)}
\gpcolor{color=gp lt color border}
\node[gp node center] at (2.587,0.800) {\tiny 1};
\node[gp node center] at (2.829,0.800) {\tiny 1};
\node[gp node center] at (3.058,1.352) {\tiny 2};
\node[gp node center] at (3.257,2.228) {\tiny 6};
\node[gp node center] at (3.449,2.780) {\tiny 12};
\node[gp node center] at (3.638,3.511) {\tiny 30};
\node[gp node center] at (3.828,3.979) {\tiny 54};
\node[gp node center] at (4.017,4.654) {\tiny 126};
\node[gp node center] at (4.206,5.168) {\tiny 240};
\node[gp node center] at (4.395,5.759) {\tiny 504};
\node[gp node center] at (4.584,6.297) {\tiny 990};
\node[gp node center] at (4.773,6.876) {\tiny 2046};
\node[gp node center] at (4.962,7.414) {\tiny 4020};
\node[gp node center] at (5.151,7.981) {\tiny 8190};
\node[gp node center] at (5.340,8.436) {\tiny 14496};
\node[gp node center] at (5.529,8.631) {\tiny 18514};
\node[gp node center] at (5.719,8.330) {\tiny 12698};
\node[gp node center] at (5.908,8.001) {\tiny 8395};
\node[gp node center] at (6.097,7.626) {\tiny 5245};
\node[gp node center] at (6.286,7.251) {\tiny 3279};
\node[gp node center] at (6.475,6.843) {\tiny 1965};
\node[gp node center] at (6.664,6.438) {\tiny 1182};
\node[gp node center] at (6.853,6.075) {\tiny 749};
\node[gp node center] at (7.042,5.776) {\tiny 515};
\node[gp node center] at (7.231,5.389) {\tiny 317};
\node[gp node center] at (7.420,4.977) {\tiny 189};
\node[gp node center] at (7.609,4.697) {\tiny 133};
\node[gp node center] at (7.798,4.368) {\tiny 88};
\node[gp node center] at (7.987,4.208) {\tiny 72};
\node[gp node center] at (8.176,3.885) {\tiny 48};
\node[gp node center] at (8.366,3.537) {\tiny 31};
\node[gp node center] at (8.555,3.058) {\tiny 17};
\node[gp node center] at (8.744,2.780) {\tiny 12};
\node[gp node center] at (8.933,2.635) {\tiny 10};
\node[gp node center] at (9.122,2.083) {\tiny 5};
\node[gp node center] at (9.311,2.635) {\tiny 10};
\node[gp node center] at (9.500,2.228) {\tiny 6};
\node[gp node center] at (9.689,1.352) {\tiny 2};
\node[gp node center] at (9.878,2.083) {\tiny 5};
\node[gp node center] at (10.067,0.800) {\tiny 1};
\node[gp node center] at (10.256,1.352) {\tiny 2};
\node[gp node center] at (10.445,0.800) {\tiny 1};
\node[gp node center] at (10.823,0.800) {\tiny 1};
\node[gp node center] at (11.391,0.800) {\tiny 1};
\node[gp node center] at (11.580,0.800) {\tiny 1};
\gpsetlinetype{gp lt border}
\draw[gp path] (1.748,9.791)--(1.748,0.616)--(12.147,0.616)--(12.147,9.791)--cycle;
\gpdefrectangularnode{gp plot 1}{\pgfpoint{1.748cm}{0.616cm}}{\pgfpoint{12.147cm}{9.791cm}}
\end{tikzpicture}
\caption{This plot shows the frequencies of each size: each vertical bar is placed
on values $-\log_\beta s_n$, the height of each bar represents the frequency $ a_n$ and is displayed on a logarithmic scale. On
the top of each bar we have recorded the value of $a_n$: these values match perfectly with the sequence A038199 up to the 14th element (after this threshold the numerical data are likely to be incomplete). The clear decaying behaviour which becomes
apparent after $n=15$ might also relate to the fact that, since the box dimension of the bifurcation set is smaller than one, the probability that a point taken randomly in parameter space falls in some matching interval of size of order $\beta^{-n}$ decays exponentially as $n\to \infty$.
}
\label{frequencies}
\end{figure}
Thus one is led to conclude that the box dimension in this case is
\[ \lim_{n \to \infty}\frac{1}{n}\log_\beta( a_n ) = \log(2)/\log(2+\sqrt{2})=0.5644763825...
\]
This case is actually covered by Theorem~\ref{t:quadratic}, which shows with a different argument that for $\beta=2+\sqrt{2}$ one indeed gets that $\dim_H(A_\beta) = \frac{\log 2}{\log (2+\sqrt{2})}$.
\begin{remark}\label{rm:totient}
{\rm
For $\beta = \frac{1+\sqrt{5}}{2}$ and $s_n \approx \beta^{-n}$, we find that $a_n = \phi(n)$, where $\phi$ is Euler's totient function.
In Section~\ref{sec:nonunit} we relate this case to degree one circle maps $g_\alpha$ with plateaus,
and matching with rotation numbers $\rho = \frac{m}{n} \in \mathbb Q$. This occurs for a parameter interval of length $\approx \beta^{-n}$, and the number of integers $m < n$ such that $\frac{m}{n}$ is in lowest terms is Euler's totient function. Naturally, the numerics indicate that $\dim_B(A_\beta) = 0$.
}\end{remark}
\begin{figure}[h]
\centering
\begin{tikzpicture}[gnuplot]
\gpsolidlines
\path (0.000,0.000) rectangle (12.700,5.080);
\gpcolor{color=gp lt color border}
\gpsetlinetype{gp lt border}
\gpsetlinewidth{1.00}
\draw[gp path] (1.564,0.616)--(1.744,0.616);
\draw[gp path] (12.147,0.616)--(11.967,0.616);
\node[gp node right] at (1.380,0.616) { 1};
\draw[gp path] (1.564,0.924)--(1.654,0.924);
\draw[gp path] (12.147,0.924)--(12.057,0.924);
\draw[gp path] (1.564,1.104)--(1.654,1.104);
\draw[gp path] (12.147,1.104)--(12.057,1.104);
\draw[gp path] (1.564,1.232)--(1.654,1.232);
\draw[gp path] (12.147,1.232)--(12.057,1.232);
\draw[gp path] (1.564,1.332)--(1.654,1.332);
\draw[gp path] (12.147,1.332)--(12.057,1.332);
\draw[gp path] (1.564,1.413)--(1.654,1.413);
\draw[gp path] (12.147,1.413)--(12.057,1.413);
\draw[gp path] (1.564,1.481)--(1.654,1.481);
\draw[gp path] (12.147,1.481)--(12.057,1.481);
\draw[gp path] (1.564,1.541)--(1.654,1.541);
\draw[gp path] (12.147,1.541)--(12.057,1.541);
\draw[gp path] (1.564,1.593)--(1.654,1.593);
\draw[gp path] (12.147,1.593)--(12.057,1.593);
\draw[gp path] (1.564,1.640)--(1.744,1.640);
\draw[gp path] (12.147,1.640)--(11.967,1.640);
\node[gp node right] at (1.380,1.640) { 10};
\draw[gp path] (1.564,1.948)--(1.654,1.948);
\draw[gp path] (12.147,1.948)--(12.057,1.948);
\draw[gp path] (1.564,2.128)--(1.654,2.128);
\draw[gp path] (12.147,2.128)--(12.057,2.128);
\draw[gp path] (1.564,2.256)--(1.654,2.256);
\draw[gp path] (12.147,2.256)--(12.057,2.256);
\draw[gp path] (1.564,2.355)--(1.654,2.355);
\draw[gp path] (12.147,2.355)--(12.057,2.355);
\draw[gp path] (1.564,2.436)--(1.654,2.436);
\draw[gp path] (12.147,2.436)--(12.057,2.436);
\draw[gp path] (1.564,2.505)--(1.654,2.505);
\draw[gp path] (12.147,2.505)--(12.057,2.505);
\draw[gp path] (1.564,2.564)--(1.654,2.564);
\draw[gp path] (12.147,2.564)--(12.057,2.564);
\draw[gp path] (1.564,2.617)--(1.654,2.617);
\draw[gp path] (12.147,2.617)--(12.057,2.617);
\draw[gp path] (1.564,2.664)--(1.744,2.664);
\draw[gp path] (12.147,2.664)--(11.967,2.664);
\node[gp node right] at (1.380,2.664) { 100};
\draw[gp path] (1.564,2.972)--(1.654,2.972);
\draw[gp path] (12.147,2.972)--(12.057,2.972);
\draw[gp path] (1.564,3.152)--(1.654,3.152);
\draw[gp path] (12.147,3.152)--(12.057,3.152);
\draw[gp path] (1.564,3.280)--(1.654,3.280);
\draw[gp path] (12.147,3.280)--(12.057,3.280);
\draw[gp path] (1.564,3.379)--(1.654,3.379);
\draw[gp path] (12.147,3.379)--(12.057,3.379);
\draw[gp path] (1.564,3.460)--(1.654,3.460);
\draw[gp path] (12.147,3.460)--(12.057,3.460);
\draw[gp path] (1.564,3.529)--(1.654,3.529);
\draw[gp path] (12.147,3.529)--(12.057,3.529);
\draw[gp path] (1.564,3.588)--(1.654,3.588);
\draw[gp path] (12.147,3.588)--(12.057,3.588);
\draw[gp path] (1.564,3.640)--(1.654,3.640);
\draw[gp path] (12.147,3.640)--(12.057,3.640);
\draw[gp path] (1.564,3.687)--(1.744,3.687);
\draw[gp path] (12.147,3.687)--(11.967,3.687);
\node[gp node right] at (1.380,3.687) { 1000};
\draw[gp path] (1.564,3.995)--(1.654,3.995);
\draw[gp path] (12.147,3.995)--(12.057,3.995);
\draw[gp path] (1.564,4.176)--(1.654,4.176);
\draw[gp path] (12.147,4.176)--(12.057,4.176);
\draw[gp path] (1.564,4.304)--(1.654,4.304);
\draw[gp path] (12.147,4.304)--(12.057,4.304);
\draw[gp path] (1.564,4.403)--(1.654,4.403);
\draw[gp path] (12.147,4.403)--(12.057,4.403);
\draw[gp path] (1.564,4.484)--(1.654,4.484);
\draw[gp path] (12.147,4.484)--(12.057,4.484);
\draw[gp path] (1.564,4.552)--(1.654,4.552);
\draw[gp path] (12.147,4.552)--(12.057,4.552);
\draw[gp path] (1.564,4.612)--(1.654,4.612);
\draw[gp path] (12.147,4.612)--(12.057,4.612);
\draw[gp path] (1.564,4.664)--(1.654,4.664);
\draw[gp path] (12.147,4.664)--(12.057,4.664);
\draw[gp path] (1.564,4.711)--(1.744,4.711);
\draw[gp path] (12.147,4.711)--(11.967,4.711);
\node[gp node right] at (1.380,4.711) { 10000};
\draw[gp path] (1.564,0.616)--(1.564,0.796);
\draw[gp path] (1.564,4.711)--(1.564,4.531);
\node[gp node center] at (1.564,0.308) { 0};
\draw[gp path] (3.681,0.616)--(3.681,0.796);
\draw[gp path] (3.681,4.711)--(3.681,4.531);
\node[gp node center] at (3.681,0.308) { 10};
\draw[gp path] (5.797,0.616)--(5.797,0.796);
\draw[gp path] (5.797,4.711)--(5.797,4.531);
\node[gp node center] at (5.797,0.308) { 20};
\draw[gp path] (7.914,0.616)--(7.914,0.796);
\draw[gp path] (7.914,4.711)--(7.914,4.531);
\node[gp node center] at (7.914,0.308) { 30};
\draw[gp path] (10.030,0.616)--(10.030,0.796);
\draw[gp path] (10.030,4.711)--(10.030,4.531);
\node[gp node center] at (10.030,0.308) { 40};
\draw[gp path] (12.147,0.616)--(12.147,0.796);
\draw[gp path] (12.147,4.711)--(12.147,4.531);
\node[gp node center] at (12.147,0.308) { 50};
\draw[gp path] (1.564,4.711)--(1.564,0.616)--(12.147,0.616)--(12.147,4.711)--cycle;
\node[gp node right] at (10.679,4.377) {tribonacci};
\gpcolor{color=gp lt color 6}
\gpsetlinetype{gp lt plot 6}
\draw[gp path] (10.863,4.377)--(11.779,4.377);
\draw[gp path] (1.987,0.616)--(2.199,0.616)--(2.411,0.616)--(2.834,1.332)--(3.046,1.332)%
--(3.257,1.413)--(3.469,1.901)--(3.681,2.081)--(3.892,2.465)--(4.104,2.581)--(4.316,2.748)%
--(4.527,2.993)--(4.739,3.206)--(4.951,3.391)--(5.162,3.536)--(5.374,3.746)--(5.586,3.552)%
--(5.797,3.509)--(6.009,3.441)--(6.221,3.425)--(6.432,3.345)--(6.644,3.324)--(6.855,3.246)%
--(7.067,3.361)--(7.279,3.095)--(7.490,3.080)--(7.702,3.034)--(7.914,2.917)--(8.125,2.920)%
--(8.337,2.826)--(8.549,2.983)--(8.760,2.770)--(8.972,2.702)--(9.184,2.726)--(9.395,2.636)%
--(9.607,2.586)--(9.819,2.536)--(10.030,2.626)--(10.242,2.421)--(10.454,2.421)--(10.665,2.373)%
--(10.877,2.390)--(11.089,2.113)--(11.300,2.267)--(11.512,2.429)--(11.724,2.209)--(11.935,2.113)%
--(12.147,2.047);
\gpsetpointsize{4.00}
\gppoint{gp mark 7}{(1.987,0.616)}
\gppoint{gp mark 7}{(2.199,0.616)}
\gppoint{gp mark 7}{(2.411,0.616)}
\gppoint{gp mark 7}{(2.834,1.332)}
\gppoint{gp mark 7}{(3.046,1.332)}
\gppoint{gp mark 7}{(3.257,1.413)}
\gppoint{gp mark 7}{(3.469,1.901)}
\gppoint{gp mark 7}{(3.681,2.081)}
\gppoint{gp mark 7}{(3.892,2.465)}
\gppoint{gp mark 7}{(4.104,2.581)}
\gppoint{gp mark 7}{(4.316,2.748)}
\gppoint{gp mark 7}{(4.527,2.993)}
\gppoint{gp mark 7}{(4.739,3.206)}
\gppoint{gp mark 7}{(4.951,3.391)}
\gppoint{gp mark 7}{(5.162,3.536)}
\gppoint{gp mark 7}{(5.374,3.746)}
\gppoint{gp mark 7}{(5.586,3.552)}
\gppoint{gp mark 7}{(5.797,3.509)}
\gppoint{gp mark 7}{(6.009,3.441)}
\gppoint{gp mark 7}{(6.221,3.425)}
\gppoint{gp mark 7}{(6.432,3.345)}
\gppoint{gp mark 7}{(6.644,3.324)}
\gppoint{gp mark 7}{(6.855,3.246)}
\gppoint{gp mark 7}{(7.067,3.361)}
\gppoint{gp mark 7}{(7.279,3.095)}
\gppoint{gp mark 7}{(7.490,3.080)}
\gppoint{gp mark 7}{(7.702,3.034)}
\gppoint{gp mark 7}{(7.914,2.917)}
\gppoint{gp mark 7}{(8.125,2.920)}
\gppoint{gp mark 7}{(8.337,2.826)}
\gppoint{gp mark 7}{(8.549,2.983)}
\gppoint{gp mark 7}{(8.760,2.770)}
\gppoint{gp mark 7}{(8.972,2.702)}
\gppoint{gp mark 7}{(9.184,2.726)}
\gppoint{gp mark 7}{(9.395,2.636)}
\gppoint{gp mark 7}{(9.607,2.586)}
\gppoint{gp mark 7}{(9.819,2.536)}
\gppoint{gp mark 7}{(10.030,2.626)}
\gppoint{gp mark 7}{(10.242,2.421)}
\gppoint{gp mark 7}{(10.454,2.421)}
\gppoint{gp mark 7}{(10.665,2.373)}
\gppoint{gp mark 7}{(10.877,2.390)}
\gppoint{gp mark 7}{(11.089,2.113)}
\gppoint{gp mark 7}{(11.300,2.267)}
\gppoint{gp mark 7}{(11.512,2.429)}
\gppoint{gp mark 7}{(11.724,2.209)}
\gppoint{gp mark 7}{(11.935,2.113)}
\gppoint{gp mark 7}{(12.147,2.047)}
\gppoint{gp mark 7}{(11.321,4.377)}
\gpcolor{color=gp lt color border}
\node[gp node right] at (10.679,4.069) {tetrabonacci};
\gpcolor{color=gp lt color 2}
\gpsetlinetype{gp lt plot 2}
\draw[gp path] (10.863,4.069)--(11.779,4.069);
\draw[gp path] (2.199,0.924)--(2.411,1.332)--(2.622,1.232)--(3.046,1.232)--(3.257,2.184)%
--(3.469,2.157)--(3.681,2.197)--(3.892,2.444)--(4.104,2.706)--(4.316,2.886)--(4.527,3.002)%
--(4.739,3.271)--(4.951,3.594)--(5.162,3.789)--(5.374,3.829)--(5.586,3.944)--(5.797,4.106)%
--(6.009,4.269)--(6.221,4.061)--(6.432,4.022)--(6.644,4.001)--(6.855,3.966)--(7.067,3.888)%
--(7.279,3.843)--(7.490,3.789)--(7.702,3.716)--(7.914,3.656)--(8.125,3.609)--(8.337,3.607)%
--(8.549,3.526)--(8.760,3.429)--(8.972,3.375)--(9.184,3.349)--(9.395,3.289)--(9.607,3.238)%
--(9.819,3.318)--(10.030,3.175)--(10.242,3.120)--(10.454,3.045)--(10.665,2.985)--(10.877,2.985)%
--(11.089,2.899)--(11.300,2.878)--(11.512,2.816)--(11.724,2.756)--(11.935,2.752)--(12.147,2.626);
\gppoint{gp mark 11}{(2.199,0.924)}
\gppoint{gp mark 11}{(2.411,1.332)}
\gppoint{gp mark 11}{(2.622,1.232)}
\gppoint{gp mark 11}{(3.046,1.232)}
\gppoint{gp mark 11}{(3.257,2.184)}
\gppoint{gp mark 11}{(3.469,2.157)}
\gppoint{gp mark 11}{(3.681,2.197)}
\gppoint{gp mark 11}{(3.892,2.444)}
\gppoint{gp mark 11}{(4.104,2.706)}
\gppoint{gp mark 11}{(4.316,2.886)}
\gppoint{gp mark 11}{(4.527,3.002)}
\gppoint{gp mark 11}{(4.739,3.271)}
\gppoint{gp mark 11}{(4.951,3.594)}
\gppoint{gp mark 11}{(5.162,3.789)}
\gppoint{gp mark 11}{(5.374,3.829)}
\gppoint{gp mark 11}{(5.586,3.944)}
\gppoint{gp mark 11}{(5.797,4.106)}
\gppoint{gp mark 11}{(6.009,4.269)}
\gppoint{gp mark 11}{(6.221,4.061)}
\gppoint{gp mark 11}{(6.432,4.022)}
\gppoint{gp mark 11}{(6.644,4.001)}
\gppoint{gp mark 11}{(6.855,3.966)}
\gppoint{gp mark 11}{(7.067,3.888)}
\gppoint{gp mark 11}{(7.279,3.843)}
\gppoint{gp mark 11}{(7.490,3.789)}
\gppoint{gp mark 11}{(7.702,3.716)}
\gppoint{gp mark 11}{(7.914,3.656)}
\gppoint{gp mark 11}{(8.125,3.609)}
\gppoint{gp mark 11}{(8.337,3.607)}
\gppoint{gp mark 11}{(8.549,3.526)}
\gppoint{gp mark 11}{(8.760,3.429)}
\gppoint{gp mark 11}{(8.972,3.375)}
\gppoint{gp mark 11}{(9.184,3.349)}
\gppoint{gp mark 11}{(9.395,3.289)}
\gppoint{gp mark 11}{(9.607,3.238)}
\gppoint{gp mark 11}{(9.819,3.318)}
\gppoint{gp mark 11}{(10.030,3.175)}
\gppoint{gp mark 11}{(10.242,3.120)}
\gppoint{gp mark 11}{(10.454,3.045)}
\gppoint{gp mark 11}{(10.665,2.985)}
\gppoint{gp mark 11}{(10.877,2.985)}
\gppoint{gp mark 11}{(11.089,2.899)}
\gppoint{gp mark 11}{(11.300,2.878)}
\gppoint{gp mark 11}{(11.512,2.816)}
\gppoint{gp mark 11}{(11.724,2.756)}
\gppoint{gp mark 11}{(11.935,2.752)}
\gppoint{gp mark 11}{(12.147,2.626)}
\gppoint{gp mark 11}{(11.321,4.069)}
\gpcolor{color=gp lt color border}
\gpsetlinetype{gp lt border}
\draw[gp path] (1.564,4.711)--(1.564,0.616)--(12.147,0.616)--(12.147,4.711)--cycle;
\gpdefrectangularnode{gp plot 1}{\pgfpoint{1.564cm}{0.616cm}}{\pgfpoint{12.147cm}{4.711cm}}
\end{tikzpicture}
\caption{ Here we show the frequencies of sizes of matching intervals in the cases when the slope $\beta$ is the tribonacci or the tetrabonacci number: the $y$-coordinate represents (on a logarithmic scale) the number $a_k$ of intervals of size of order $2^{-k}$, while the $x$-coordinate represents $k$. In both cases $a_k$ grows exponentially in the beginning; the decay visible for $k\geq 20$ is due to the fact that in this range our list of intervals of size of order $2^{-k}$ is no longer complete.}
\label{quabo-tribo}
\end{figure}
In summary, the following table gives values for the box-dimension of the bifurcation set $A_\beta$ that
we obtained numerically.
For the tribonacci number we prove in Section~\ref{sec:multinacci} that the Hausdorff dimension
of $A_\beta$ is strictly between $0$ and $1$.
\[
\begin{array}{|c|r|l|}
\hline
\beta & \text{minimal polynomial} & \dim_B(A_\beta) \\[1mm]
\hline
\text{tribonacci} & \beta^3-\beta^2-\beta-1 = 0 & 0.66... \\[1mm]
\text{tetrabonacci} & \beta^4-\beta^3-\beta^2-\beta-1 = 0 & 0.76...\\[1mm]
\text{plastic} & \beta^3-\beta-1 = 0 & 0.93... \\[1mm]
\hline
\end{array}
\]
\section{The $\beta x + \alpha \pmod 1$ transformation}
For $\beta >1$ and $\alpha \in [0,1]$, the $\beta x+ \alpha$ (mod 1)-transformation is the map on ${\mathbb S}^1 = \mathbb R \slash \mathbb Z$ given by $x \mapsto \beta x + \alpha$ (mod 1). In what follows we will always assume that $\beta$ is given and we consider the family of maps $\{ T_{\alpha}: {\mathbb S}^1 \to {\mathbb S}^1 \}_{\alpha \in [0,1]}$ defined by $T_{\alpha} (x) = \beta x + \alpha$ (mod 1). The {\em critical orbits} of the map $T_{\alpha}$ are the orbits of $0^+ = \lim_{x \downarrow 0} x$ and $0^- = \lim_{x \uparrow 1}x$, i.e., the sets $\{ T_{\alpha}^n (0^+) \}_{n \ge 0}$ and $\{ T_{\alpha}^n (0^-) \}_{n \ge 0}$. For each combination of $\beta$ and $\alpha$, there is a largest integer $k\ge 0$, such that $\frac{k-\alpha}{\beta} < 1$.
This means that for each $n \ge 1$ there are integers $a_i, b_i \in \{0, 1, \ldots, k+1\}$ such that
\begin{equation}\label{q:0orbit}
\begin{array}{rcl}
T_{\alpha}^n (0^+) &=& (\beta^{n-1} + \cdots + 1)\alpha - a_1\beta^{n-2} - \cdots - a_{n-2}\beta-a_{n-1}, \\
T_{\alpha}^n(0^-) &=& (\beta^{n-1}+\cdots + 1)\alpha + \beta^n-b_1\beta^{n-1} - \cdots - b_{n-1}\beta - b_n.
\end{array}
\end{equation}
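For instance, for $n=1$ and $n=2$ these formulas read $T_{\alpha}(0^+) = \alpha$, $T_{\alpha}(0^-) = \beta + \alpha - b_1$ with $b_1 = \lfloor \beta + \alpha \rfloor$, and $T^2_{\alpha}(0^+) = (\beta+1)\alpha - a_1$ with $a_1 = \lfloor (\beta+1)\alpha \rfloor$; the digits $a_i, b_i$ simply record which branch of $T_{\alpha}$ is applied at each step.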
The map $T_{\alpha}$ has a {\em Markov partition} if there exists a finite number of disjoint intervals $I_j \subseteq [0,1]$, $1\le j \le n$, such that
\begin{itemize}
\item $\overline{\bigcup_{j=1}^n I_j} = [0,1]$ and
\item for each $1 \le j,k \le n$ either $I_j \subseteq T_{\alpha}(I_k)$ or $I_j \cap T_{\alpha} (I_k) = \emptyset$.
\end{itemize}
This happens if and only if the orbits of $0^+$ and $0^-$ are finite. In particular there must be integers $n > m \ge 0$ such that $T_{\alpha}^n(0^+)=T_{\alpha}^m(0^+)$, which by the above
implies that $\alpha \in \mathbb Q(\beta)$. If $\beta$ is an algebraic integer, then this set of parameters is countable. Matching occurs more frequently. We say that the map $T_{\alpha}$ has {\em matching}
if there is an $m \ge 1$ such that $T^m_{\alpha} (0^+) = T^m_{\alpha} (0^-)$. This implies the existence of an $m$ such that
\[ \beta^m - b_1\beta^{m-1} - (b_2-a_1)\beta^{m-2} - \cdots - (b_{m-1}-a_{m-2})\beta - (b_m-a_{m-1}) =0,\]
which means that $\beta$ is an algebraic integer.
\begin{remark}{\rm
When defining matching for piecewise linear maps, one usually also asks for the derivative
of $T^m_{\alpha}$ to be equal in the points $0^+$ and $0^-$. Since the maps $T_{\alpha}$
have constant slope, this condition is automatically satisfied in our case by stipulating
that the number of iterates at matching is the same for $0^+$ and $0^-$.
}\end{remark}
Figure~\ref{f:variousalpha} illustrates that having a Markov partition does not exclude having
matching and vice versa.
\begin{figure}[h]
\centering
\subfigure[Markov, no matching, $\beta^4-\beta^3-\beta^2-\beta+1=0$, $\alpha=0$]{
\begin{tikzpicture}[scale=3.2]
\draw(0,0)node[left]{\small $0$}--(.5807,0)node[below]{\small $\frac{1-\alpha}{\beta}$}--(1,0)node[below]{\small$1$}--(1,1)--(0,1)node[left]{\small $1$}--(0,0);
\draw[thick, purple!50!black](0,0)--(.5807,1) (.5807,0)--(1,.7221);
\draw[dotted](0,0)--(1,1)(.5807,0)--(.5807,1);
\draw[dashed](1,.7221)--(.7221,.7221)--(.7221,.2435)--(.2435,.2435)--(.2435,.4193)--(.4193,.4193)--(.4193,.7221)--(.7221,.7221);
\end{tikzpicture}}
\hspace{1cm}
\subfigure[Markov and matching, $\beta^2-\beta-1=0$, $\alpha = \frac1{\beta^3}$]{
\begin{tikzpicture}[scale=3.2]
\draw(0,0)node[left]{\small $0$}--(.4721,0)node[below]{\small $\frac{1-\alpha}{\beta}$}--(1,0)node[below]{\small$1$}--(1,1)--(0,1)node[left]{\small $1$}--(0,0);
\draw[thick, purple!50!black](0,.2361)--(.4721,1) (.4721,0)--(1,.8541);
\draw[dotted](0,0)--(1,1)(.4721,0)--(.4721,1);
\draw[dashed](0,.2361)--(.2361,.2361)--(.2361,.618)--(.618,.618)--(.618,.2361)--(.2361,.2361)(1,.8541)--(.8541,.8541)--(.8541,.618)--(.618,.618);
\end{tikzpicture}}
\hspace{1cm}
\subfigure[Not Markov, matching, $\beta^2-2\beta-1=0$, $\alpha = \pi-3$]{
\begin{tikzpicture}[scale=3.2]
\draw[white](1.1,0)--(1.1,1);
\draw(0,0)node[left]{\small $0$}--(.3556,0)node[below]{\small $\frac{1-\alpha}{\beta}$}--(.7698,0)node[below]{\small $\frac{2-\alpha}{\beta}$}--(1,0)node[below]{\small$1$}--(1,1)--(0,1)node[left]{\small $1$}--(0,0);
\draw[thick, purple!50!black](0,.1416)--(.3556,1) (.3556,0)--(.7698,1)(.7698,0)--(1,.558);
\draw[dotted](0,0)--(1,1)(.3556,0)--(.3556,1)(.7698,0)--(.7698,1);
\draw[dashed](0,.1416)--(.1416,.1416)--(.1416,.4834)--(.4834,.4834)--(.4834,.3087)--(.3087,.3087)--(.3087,0)(1,.5558)--(.5558,.5558)--(.5558,.4834)--(.4834,.4834);
\end{tikzpicture}}
\caption{The $\beta x + \alpha$ (mod 1)-transformation for various values of $\alpha$ and $\beta$. In (a) the points $0^+$ and $0^-$ are mapped to different periodic orbits by $T_{\alpha}$. In (b) we have $T^2_{\alpha}(0^-) = T^2_{\alpha}(0^+)$ and both points are part of the same 2-periodic cycle. In (c) we have taken $\alpha = \pi-3$ and $\beta = 1+\sqrt{2}$, the root of $\beta^2-2\beta-1=0$, and we see that $T^2_{\alpha}(0^+) = T^2_{\alpha}(0^-)$.}
\label{f:variousalpha}
\end{figure}
Write
\[ \Delta(0) = \Big[0, \frac{1-\alpha}{\beta} \Big), \ \Delta(k) = \Big[ \frac{k-\alpha}{\beta}, 1 \Big], \
\Delta(i) = \Big[ \frac{i-\alpha}{\beta}, \frac{i+1-\alpha}{\beta} \Big), \, 1 \le i \le k-1,\]
and let $m$ denote the one-dimensional Lebesgue measure. Then $m\big( \Delta(i) \big) =\frac1{\beta}$ for all $1 \le i \le k-1$ and $m \big( \Delta(0) \big),\ m\big( \Delta(k) \big) \le \frac1{\beta}$ with the property that
\[ m \big( \Delta(0) \big) + m\big( \Delta(k) \big) = 1- \frac{k-1}{\beta}.\]
We define the cylinder sets for $T_{\alpha}$ as follows. For each $n \ge 1$ and $e_1 \cdots e_n \in \{0,1, \ldots, k\}^n$, write
\[ \Delta_{\alpha}(e_1 \cdots e_n) = \Delta(e_1 \cdots e_n) = \Delta(e_1) \cap T^{-1}_{\alpha} \Delta(e_2) \cap \cdots \cap T^{-(n-1)}_{\alpha} \Delta(e_n),\]
whenever this is non-empty. We have the following result.
\begin{lemma}\label{l:1overbeta}
Let $\beta >1$ and $\alpha \in [0,1]$ be given. Then
$|T^n_{\alpha} (0^+) - T^n_{\alpha}(0^-) | = \frac{j}{\beta}$ for some $j \in \mathbb N$
if and only if matching occurs at iterate $n+1$.
\end{lemma}
\begin{proof}
For every $y \in [0,1)$, the preimage set $T_\alpha^{-1}(y)$ consists
of points in which every pair is $j/\beta$ apart for some $j \in \mathbb N$.
Hence it is necessary and sufficient that
$|T^n_{\alpha} (0^+) - T^n_{\alpha}(0^-) | = j/\beta$ for matching
to occur at iterate $n+1$.
\end{proof}
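Concretely, for $y \in [0,1)$ the preimage set is $T_\alpha^{-1}(y) = \big\{ \frac{y+i-\alpha}{\beta} : 0 \le i \le k \big\} \cap [0,1)$, and consecutive elements of this set differ by exactly $\frac1{\beta}$.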
\begin{remark}\label{rem:pw_affine}{\rm In what follows, to determine the value of $A_{\beta}$ we consider functions $w: [0,1] \to [0,1], \alpha \mapsto w(\alpha)$ on the parameter space of $\alpha$'s. The map $T_\alpha$ and its iterates are piecewise affine, also as function of $\alpha$. Using the chain rule repeatedly, we get
\begin{eqnarray*}
\frac{d}{d\alpha} T^n_\alpha \big( w(\alpha) \big) &=&
\frac{\partial}{\partial \alpha} T_\alpha \big(T^{n-1}_\alpha (w(\alpha))\big) +
\frac{\partial}{\partial x} T_\alpha \big(T^{n-1}_\alpha (w(\alpha))\big)
\frac{d}{d\alpha} T^{n-1}_\alpha \big(w(\alpha )\big) \\
&=& 1 + \beta \frac{d}{d\alpha} T^{n-1}_\alpha \big(w(\alpha) \big) \\
&\vdots & \nonumber \\
&=& 1+\beta + \beta^2 + \dots + \beta^{n-1} + \beta^n \frac{d}{d\alpha} w(\alpha)
= \frac{\beta^n-1}{\beta-1} + \beta^n \frac{d}{d\alpha} w(\alpha).
\end{eqnarray*}
In particular, if $w(\alpha)$ is $n$-periodic, then
$\frac{\beta^n-1}{\beta-1} + \beta^n \frac{d}{d\alpha} w(\alpha)
= \frac{d}{d\alpha} w(\alpha)$, so
$\frac{d}{d\alpha} w(\alpha) = - \frac{1}{\beta-1}$, independently of the period $n$.
Similarly, if $T_\alpha^n \big(w(\alpha) \big)$ is independent of $\alpha$, then
$\frac{d}{d\alpha} w(\alpha) = - \frac{1}{\beta-1}(1-\frac{1}{\beta^n})$, whereas if
$T_\alpha^n \big(w(\alpha)\big) \in \partial \Delta(1)$, then
$\frac{d}{d\alpha} w(\alpha) = - \frac{1}{\beta-1}(1-\frac{1}{\beta^{n+1}})$.
Finally, if $w(\alpha) \equiv 0^+$ is constant,
then $\frac{d}{d\alpha} T^n_\alpha ( 0^+) = \frac{\beta^n-1}{\beta-1}$.
}\end{remark}
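For example, a fixed point of $T_\alpha$ on branch $j$ is $w(\alpha) = \frac{j-\alpha}{\beta-1}$, and indeed $\frac{d}{d\alpha} w(\alpha) = -\frac{1}{\beta-1}$, in agreement with the computation for periodic $w(\alpha)$ above.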
\section{Quadratic Pisot Numbers}\label{sec:nonunit}
Solving $T^j(0^-)-T^j(0^+)=0$ using equation \eqref{q:0orbit}, we observe that
matching can only occur if $\beta$ is an algebraic integer.
In this section we look at quadratic Pisot integers; these are the leading roots
of the equations
\begin{equation}\label{eq:quap}
\beta^2-k\beta\pm d = 0, \qquad k,d \in \mathbb N,\ k > d\pm 1.
\end{equation}
The condition $k > d\pm1$ ensures that the algebraic conjugate of $\beta$ lies in
$(0,1)$ and $(-1,0)$, respectively.
If this inequality fails, then no matching occurs:
\begin{prop}
If $\beta>1$ is an irrational quadratic integer, but not Pisot, then there is no matching.
\end{prop}
\begin{proof}
Let $\beta^2 \pm k\beta \pm d=0$, $k \geq 0$, $d \geq 1$, be the characteristic equation of $\beta$.
Since $\beta$ is a quadratic irrational, we have
$T_\alpha^j(0^-)-T_\alpha^j(0^+)= n\beta +m$ for some integers $n,m$
(which depend on $j$ and $\alpha$).
If there were matching for $T_\alpha$ then, by Lemma~\ref{l:1overbeta}, there exists $\ell \in \mathbb Z$ with
$|\ell|<\beta $ such that $n\beta+m=\ell/\beta$ which amounts to
$n\beta^2+m\beta-\ell=0$.
However, since $\beta \notin \mathbb Q$, this last equation must be an integer multiple
of the characteristic equation of $\beta$, therefore
$|\ell|=|n|d$ and thus $d<\beta$.
Note that the linear term of the characteristic equation cannot be $+k\beta$ ($k\geq0$).
Indeed if this were the case then the characteristic equation would lead to $\beta+k=\pm \frac{d}{\beta}$, which contradicts the fact that $\beta>1$.
Now if the characteristic equation is $\beta^2 - k\beta - d=0$, then $\beta -k=\frac{d}{\beta} \in (0,1)$. Hence
$$
\beta = \frac{k}{2} + \sqrt{\Big(\frac{k}{2}\Big)^2 + d} < k+1
$$
which reduces to $k>d-1$ (i.e. $\beta$ is Pisot).
If on the other hand the characteristic equation is $\beta^2 - k\beta + d=0$,
then $\beta -k=-\frac{d}{\beta} \in (-1,0)$. Hence
$$
\beta = \frac{k}{2} + \sqrt{\Big(\frac{k}{2}\Big)^2 - d} > k-1
$$
which reduces to $k>d+1$ (i.e., $\beta$ is Pisot).
This proves that matching for some $\alpha$ forces the slope $\beta$ to be Pisot.
\end{proof}
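For instance, the proposition applies to the root $\beta = \frac{1+\sqrt{13}}{2} \approx 2.30$ of $\beta^2 - \beta - 3 = 0$: its algebraic conjugate $\frac{1-\sqrt{13}}{2} \approx -1.30$ lies outside $(-1,1)$, so $\beta$ is not Pisot and $T_\alpha$ has no matching for any $\alpha \in [0,1]$.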
\subsection{Case $+d$}\label{s:plusd}
In the first part of this section, let $\beta = \beta(k,d) = \frac{k}{2} + \sqrt{(\frac{k}{2})^2-d}$
denote the leading root
of $\beta^2-k\beta+d = 0$ for positive integers $k > d+1$.
\begin{theorem}
The bifurcation set $A_\beta$ has Hausdorff dimension $\dim_H(A_\beta) = \frac{\log d}{\log \beta}$.
\end{theorem}
\begin{proof}
Clearly $k-1 < \beta < k$, so $T_\alpha$ has either $k$ or $k+1$ branches depending on
whether $\alpha < k-\beta$ or $\alpha \geq k-\beta$. More precisely,
\[ T_\alpha(0^-) - T_\alpha(0^+) = \begin{cases}
\beta - (k-1) & \text{ if } \alpha \in [0,k-\beta) \quad \text{($k$ branches);}\\
\beta - k = -\frac{d}{\beta}& \text{ if } \alpha \in [k-\beta,1) \quad \text{($k+1$ branches).}
\end{cases}\]
In the latter case, we have matching in the next iterate due
to Lemma~\ref{l:1overbeta}.
\vskip .2cm
Let $\gamma := \beta - (k-1)$ and note that $\frac{k-1-d}{\beta} < \gamma < \frac{k-d}{\beta}$,
and $\beta\gamma - (k-1-d) = \gamma$, $\gamma-1 = -\frac{d}{\beta}$. It follows that if $x \in \Delta(i) \cap [0,1-\gamma)$, then
\[ T_\alpha(x+\gamma) = \begin{cases}
T_\alpha(x) + \beta \gamma -(k-d) = T_\alpha(x) - \frac{d}{\beta}
& \text{ if } x+\gamma \in \Delta(i+k-d) ;\\
T_\alpha(x) + \beta \gamma -(k-1-d) = T_\alpha(x) + \gamma
& \text{ if } x+\gamma \in \Delta(i+k-1-d).
\end{cases}\]
In the first case we have matching in the next step, and in
the second case, the difference $\gamma$ remains unchanged. The transition
graph for the differences $T^j(0^-) - T^j(0^+)$ is shown in Figure~\ref{fig:plusd}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=2]
\node[rectangle, draw=black] at (0,0) {1};
\node[rectangle, draw=black] at (1.3,0) {$\gamma$};
\node[rectangle, draw=black] at (2.7,0) {$-d/\beta$};
\node[rectangle, draw=black] at (4.6,0) {\small matching};
\draw[->] (0.3,0)--(1,0);
\draw[->] (1.5,0)--(2.3,0);
\draw[->] (3.2,0)--(4,0);
\draw[->] (1.5,0.1) .. controls (1.7,.4) and (1.4,0.5) .. (1.3,0.2);
\node at (1.67,.3) {\small $d$};
\end{tikzpicture}
\caption{The transition graph for the root of $\beta^2 - k\beta + d=0$.
The number $d$ stands for the $d$ possible $i \in \{ 0, \dots, d-1\}$ such that
$T_\alpha^j(0^-) \in \Delta(i)$ and $T_\alpha^j(0^+) = T_\alpha^j(0^-) + \gamma \in \Delta(i+k-1-d)$.
(In fact, $i = d$ is also possible, but $\Delta(0)$ and $\Delta(d)$ together form
a single branch in the circle map $g_\alpha$ below.)}
\label{fig:plusd}
\end{center}
\end{figure}
Define for $i = 0,\dots, d-1$ the ``forbidden regions''
\[
V_i := \{ x \in \Delta(i) : x+\gamma \in \Delta(i+k-d) \}
= \Big[ \frac{i+k-d-\alpha}{\beta} - \gamma\ , \ \frac{i+1-\alpha}{\beta}\Big).
\]
Note that $T_\alpha(\frac{i+k-d-\alpha}{\beta} - \gamma) = k-\beta$ and
$T_\alpha(k-\beta) = \alpha$. Define $g_\alpha:[0,k-\beta] \to [0,k-\beta]$ as
\[ g_\alpha(x) := \min\{ T_\alpha(x), k-\beta \} = \begin{cases}
k-\beta & \text { if } x \in V := \bigcup_{i=0}^{d-1} V_i; \\
T_\alpha(x) & \text{ otherwise.}
\end{cases}\]
\begin{figure}[h]
\centering
\subfigure[$T_\alpha$]{
\begin{tikzpicture}[scale=3.2]
\filldraw[yellow!30] (.092,0) rectangle (.163,1);
\filldraw[yellow!30] (.325,0) rectangle (.395,1);
\filldraw[yellow!30] (.557,0) rectangle (.628,1);
\draw(0,0)node[left]{\small $0$}--(.12,0)node[below]{\small $V_0$}--(.36,0)node[below]{\small $V_1$}--(.585,0)node[below]{\small $V_2$}--(1,0)node[below]{\small$1$}--(1,1)--(0,1)node[left]{\small $1$}--(0,.697)node[left]{\small $k-\beta$}--(0,.3)node[left]{\small $\alpha$}--(0,0);
\draw[thick, purple!50!black](0,.3)--(.163,1)(.163,0)--(.395,1)(.395,0)--(.628,1)(.628,0)--(.86,1) (.86,0)--(1,.603);
\draw[dotted](0,0)--(1,1)(.163,0)--(.163,1)(.395,0)--(.395,1)(.628,0)--(.628,1)(.86,0)--(.86,1);
\draw[dashed](0,.697)--(.697,.697)--(.697,0);
\draw[dotted](.092,0)--(.092,1)(.557,0)--(.557,1)(.325,0)--(.325,1);
\end{tikzpicture}}
\hspace{1cm}
\subfigure[$g_\alpha$]{
\begin{tikzpicture}[scale=3.2]
\draw(0,0)node[left]{\small $0$}--(.12,0)node[below]{\small $V_0$}--(.36,0)node[below]{\small $V_1$}--(.585,0)node[below]{\small $V_2$}--(.697,0)--(.697,.697)--(0,.697)node[left]{\small $k-\beta$}--(0,.3)node[left]{\small $\alpha$}--(0,0);
\draw[very thick, purple!50!black](0,.3)--(.092,.697)--(.163,.697)(.163,0)--(.325,.697)--(.395,.697)(.395,0)--(.557,.697)--(.628,.697)(.628,0)--(.697,.3);
\draw[dotted](0,0)--(.697,.697)(0,.3)--(.697,.3)(.163,0)--(.163,.697)(.395,0)--(.395,.697)(.628,0)--(.628,.697)(.092,0)--(.092,.697)(.557,0)--(.557,.697)(.325,0)--(.325,.697);
\end{tikzpicture}}
\caption{The maps $T_\alpha$ and $g_\alpha$ for $\beta$ satisfying $\beta^2-5\beta+3=0$ and $\alpha=0.3$.}
\end{figure}
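To make the preceding formulas concrete: in the example of this figure, $\beta = \frac{5+\sqrt{13}}{2} \approx 4.303$ (so $k=5$, $d=3$) and $\alpha = 0.3$, and a direct computation gives $k-\beta = \frac{d}{\beta} \approx 0.697$, $\gamma \approx 0.303$ and $V_0 = \big[\frac{2-\alpha}{\beta}-\gamma, \frac{1-\alpha}{\beta}\big) \approx [0.092, 0.163)$, the leftmost plateau of $g_\alpha$.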
After identifying $0 \sim k-\beta$ we obtain a circle $S$
of length $m(S) = k-\beta$, and $g_\alpha:S \to S$ becomes a non-decreasing degree $d$
circle endomorphism with $d$ plateaus $V_0, \dots, V_{d-1}$, and slope $\beta$ elsewhere. Hence,
if $T_\alpha$ has no matching, then
\[
X_\alpha = \{ x \in S : g_\alpha^n(x) \notin \bigcup_{i=0}^{d-1} V_i \text{ for all } n \in \mathbb N\}
\]
is a $T_\alpha$-invariant set, and all invariant probability measures on it have the same
Lyapunov exponent $\int \log T'_\alpha d\mu = \log \beta$. Using the dimension formula
$\dim_H(\mu) = h(\mu)/\int \log T'_\alpha d\mu$ (see \cite[Proposition 4]{L81}
and \cite{Y82}),
and maximizing the entropy over all such measures, we find $\dim_H(X_\alpha) = \frac{\log d}{\log \beta}$.
In fact, $X_\alpha$ can be covered by $a_n = O(d^n)$ intervals of length $\le \beta^{-n}$.
If $T_{\alpha}$ does not have matching, then for each $n \ge 1$ there is a
maximal interval $J=J(\alpha)$, such that $T_{\alpha}(0^+) \in J$ and
$\frac{d}{dx}g_{\alpha}^n(x)= \beta^n$ on $J$. Hence $m(J) \le \beta^{-n}$ and moreover,
for each point $w \in \partial J$ there is an $m \le n$ and a point $z \in \partial V$,
such that $T_{\alpha}^m (w) = z$.
Note that the points $w \in \partial J$ and $z \in \partial V$ depend on $\alpha$ in the following manner.
Given a non-matching parameter, let $U$ be a neighbourhood of it on which the function
$w : \alpha \mapsto w(\alpha)$ is continuous and such that there is an
$m \in \mathbb N$ with $T_\alpha^m\big(w(\alpha)\big) =: z(\alpha) \in \partial V$
for all $\alpha \in U$.
The definition of $V = V(\alpha)$ gives that
$\frac{d}{d\alpha}\partial V(\alpha) = -\frac1{\beta}$. Using Remark~\ref{rem:pw_affine} we find
\[
\frac{d}{d\alpha} z(\alpha) = \frac{d}{d\alpha} T^m_\alpha \big(w(\alpha) \big)
= \frac{\beta^m-1}{\beta-1} + \beta^m \frac{d}{d\alpha} w(\alpha) = -\frac1{\beta}.
\]
This implies that $\frac{d}{d\alpha} w(\alpha) = -\frac{1}{\beta-1}(1-\frac{1}{\beta^{m+1}}) < 0$
for all $m \geq 0$ and $\beta > 1$.
Hence, $J(\alpha)$ is an interval of length $\leq \beta^{-n}$ and $\partial J(\alpha)$
moves to the left as $\alpha$ increases. At the same time, $T_\alpha(0^+) = \alpha$ moves
to the right with speed $1$ as $\alpha$ increases.
Therefore $U$ is an interval with $\frac1C m(J) \leq m(U) \leq C m(J)$, where $C > 0$
depends on $\beta$ but not on $\alpha$ or $U$.
This proves that the upper box dimension satisfies $\overline{\dim}_B(A_\beta) \le \frac{\log d}{\log \beta}$,
and in particular $\dim_H(A_\beta) = 0$ for $d = 1$.
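For instance, for $d=1$ and $k=3$, i.e.\ $\beta = \frac{3+\sqrt5}{2} \approx 2.618$, this upper bound already gives $\dim_H(A_\beta) = 0$.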
For the lower bound estimate of $\dim_H(A_\beta)$ and $d \geq 2$,
we introduce symbolic dynamics, assigning the labels $i = 0, \dots, d-2$ to
the intervals
$$
Z_i := [\ (i+k-d-\alpha)/\beta\ ,\ (i+1+k-d-\alpha)/\beta\ ),
$$
that is, $V_i$ together with the component of $S \setminus V$ directly
to the right of it, and the label $d-1$ to the remaining
interval: $Z_{d-1} = S \setminus \cup_{i=0}^{d-2} Z_i$.
Therefore we have $g_\alpha(Z_i) = g_\alpha(Z_i\setminus V_i) = S$.
Let $\Sigma = \{ 0, \dots, d-1\}^{\mathbb N_0}$ with metric
$d_\beta(x,y) = \beta^{1-m}$ for $m = \inf\{ i \geq 0 : x_i \neq y_i\}$.
Then each $n$-cylinder has diameter $\beta^{-n}$,
the Hausdorff dimension $\dim_H(\Sigma) = \frac{\log d}{\log \beta}$,
and the usual coding map $\pi_\alpha : S \to \Sigma$, when restricted to $X_\alpha$,
is injective.
\begin{lemma}
Given an $n$-cylinder $[e_0,\dots, e_{n-1}] \subset \Sigma$, there is a set
$C_{e_0\dots e_{n-1}} \subset S$ consisting of at most $n$ half-open intervals of
combined length $\beta^{-n} m(S)$ such that
$\pi_\alpha(C_{e_0\dots e_{n-1}}) = [e_0,\dots, e_{n-1}]$ and
$g_\alpha^n:C_{e_0\dots e_{n-1}} \to S$ is onto with slope $\beta^n$.
\end{lemma}
\begin{proof}
The proof is by induction. For $e_0 \in \{0, \dots, d-1\}$, let $C_{e_0} := Z_{e_0} \setminus V_{e_0}$ be
the domain of $S\setminus V$ with label $e_0$.
This interval has length $m(S)/\beta$, is half-open and clearly $g_\alpha:C_{e_0} \to S$
is onto with slope $\beta$.
Assume now by induction that for the $n$-cylinder $[e_0,\dots, e_{n-1}]$, the set $C_{e_0\dots e_{n-1}}$ is
constructed, with $\leq n$ half-open components $C^j_{e_0\dots e_{n-1}}$ so that $\sum_j m(C^j_{e_0\dots e_{n-1}})
= m(S) \beta^{-n}$. In particular, $g_\alpha^n(C^j_{e_0\dots e_{n-1}})$ are pairwise disjoint, since any
overlap would result in $\sum_j m(C^j_{e_0\dots e_{n-1}}) < m(S) \beta^{-n}$.
Let $e_n \in \{0, \dots, d-1\}$ be arbitrary and let
$C^j_{e_0\dots e_{n-1}e_n}$ be the subset of $C^j_{e_0\dots e_{n-1}}$ that
$g_\alpha^n$ maps into $Z_{e_n} \setminus V_{e_n}$.
Then $C^j_{e_0\dots e_{n-1}e_n}$ consists of one or two half-open intervals, depending on whether
$g_\alpha^n(C^j_{e_0\dots e_{n-1}e_n}) \owns g_\alpha(V) = k-\beta \sim 0$ or not.
But since $g_\alpha^n(C^j_{e_0\dots e_{n-1}})$ are pairwise disjoint, only
one of the $C^j_{e_0\dots e_{n-1}e_n}$ can have two components,
and therefore $C_{e_0\dots e_{n-1}e_n} := \cup_j C^j_{e_0\dots e_{n-1}e_n}$
has at most $n+1$ components.
Furthermore $g_\alpha^{n+1}(C_{e_0\dots e_{n-1}e_n}) = S$ and the slope
is $\beta^{n+1}$.
(See also \cite[Lemma 4.1]{BS16} for this result in a simpler context.)
\end{proof}
It follows that $\pi_\alpha$ is a Lipschitz map such that
for each $n$ and each $n$-cylinder $[e_0,\dots, e_{n-1}]$, the set $\pi_\alpha^{-1}([e_0,\dots, e_{n-1}]) \cap X_\alpha$
is contained in one or two intervals of combined length $m(S) \beta^{1-n}$.
This suffices to conclude (cf.\ \cite[Lemma 5.3]{BS16}) that $X_\alpha$
and $\pi_\alpha(X_\alpha)$ have the same Hausdorff dimension
$\frac{\log d}{\log \beta}$.
Let $G_n:[0,k-\beta) \to S$, $\alpha \mapsto g_\alpha^n(0)$. Then the $U$ from above is a maximal interval
on which $G_n$ is continuous, and such that all $\alpha \in U$ have the same coding for the iterates
$G_m(\alpha)$, $1 \leq m \leq n$, with respect to the labelling of the $Z_i = Z_i(\alpha)$.
This allows us to define a coding map $\pi:A_\beta \to \Sigma$.
As is the case for $\pi_\alpha$, given any $n$-cylinder $[e_0,\dots e_{n-1}]$,
the preimage $\pi^{-1}([e_0,\dots e_{n-1}])$ consists of at most $n$ intervals, say $U^j$,
with $G_n(\cup_j U^j) = S$,
so the combined length satisfies $\frac1C \beta^{-n} \leq m(\cup U^j) \leq C \beta^{-n}$.
This gives a one-to-one correspondence between the components $J(\alpha)$ and
intervals $U$, and hence $A_\beta$ can be covered by $a_n$ such intervals.
More importantly, $\pi$ is Lipschitz, and its inverse on each $n$-cylinder has (at most $n$)
uniformly Lipschitz branches.
It follows that $\dim_H(A_\beta) = \dim_H(\pi(A_\beta))$ (cf.\ \cite[Lemma 5.3]{BS16}), and
since $\Sigma \setminus \pi(A_\beta)$ is countable, also
$\dim_H(A_\beta) = \dim_H(\Sigma) = \frac{\log d}{\log \beta}$.
\end{proof}
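As an illustration, for $k=4$ and $d=2$, i.e.\ $\beta = 2+\sqrt2$, the theorem gives $\dim_H(A_\beta) = \frac{\log 2}{\log(2+\sqrt2)} \approx 0.56$.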
\begin{remark}
Observe that if $g_\alpha^n(0^+) \in \bigcup_{i=0}^{d-1} V_i$, then $0^+$ is $(n+1)$-periodic
under $g_\alpha$. Let $a_n$ be the number of periodic points of prime period $n+1$ under the $d$-fold doubling map.
Then there are $a_n$ parameter intervals on which matching occurs after
$n+1$ iterates, and these have length $\sim \beta^{-n}$.
If $d = 1$, then $a_n = \phi(n)$ is Euler's totient function, see Remark~\ref{rm:totient},
and for $d = 2$ and $k=4$, so that $\beta = 2 + \sqrt{2}$, the sequence $(a_n)_{n \in \mathbb N}$ is
exactly the sequence A038199 on the OEIS \cite{OEIS}; see
the observation in Section~\ref{s:numerical} for $\beta = 2 + \sqrt{2}$, which in fact
holds for any other quadratic integer with $d = 2$. Indeed, there is one
fixed sequence $(a_n)_{n\in\mathbb N}$ for each value of $d \in \mathbb N$.
\end{remark}
\subsection{Case $-d$}\label{s:minusd}
Now we deal with the case $\beta = \beta(k,d) = \frac{k}{2} + \sqrt{(\frac{k}{2})^2+d}$,
which is the leading root
of $\beta^2-k\beta-d = 0$ for positive integers $k > d-1$.
\begin{theorem}
The bifurcation set $A_\beta$ has Hausdorff dimension $\dim_H(A_\beta) = \frac{\log d}{\log \beta}$.
\end{theorem}
\begin{proof}
Since $k < \beta < k+1$, $T_\alpha$ has either $k+1$ or $k+2$ branches depending on
whether $\alpha < k+1-\beta$ or $\alpha \geq k+1-\beta$. More precisely,
\[ T_\alpha(0^-) - T_\alpha(0^+) = \begin{cases}
\beta - k = \frac{d}{\beta} & \text{ if } \alpha \in [0,k+1-\beta) \quad \text{($k+1$ branches);}\\
\beta - (k+1) = \frac{d}{\beta} - 1& \text{ if } \alpha \in [k+1-\beta,1) \quad \text{($k+2$ branches).}
\end{cases}\]
In the first case, we have matching in the next iterate due to Lemma~\ref{l:1overbeta}.
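For instance, for $\beta^2 - 3\beta - 2 = 0$, i.e.\ $\beta = \frac{3+\sqrt{17}}{2} \approx 3.56$, $k=3$ and $d=2$, we have $\frac{d}{\beta} = \beta - 3 \approx 0.56$, so every $\alpha \in [0, 4-\beta) \approx [0, 0.44)$ gives matching at the second iterate.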
\vskip .2cm
Let $\gamma := (k+1)-\beta = 1-\frac{d}{\beta}$ and note that
$1-\gamma = \frac{d}{\beta} \in \Delta(d)$,
and $\beta\gamma = \beta - d$.
It follows that if $x \in \Delta(i) \cap [0,1-\gamma)$, then
\begin{eqnarray*}
T_\alpha(x+\gamma) &=& T_\alpha(x) + \beta -d \bmod 1 \\
&=&
\begin{cases}
T_\alpha(x) + \beta-k = T_\alpha(x) + \frac{d}{\beta} & \text{ if } x+\gamma \in \Delta(i+k-d);\\
T_\alpha(x) + \beta - (k+1) = T_\alpha(x) - \gamma
& \text{ if } x+\gamma \in \Delta(i+k+1-d).
\end{cases}
\end{eqnarray*}
In the first case we have matching in the next step, and in
the second case the difference $\gamma$ switches to $-\gamma$.
Similarly, if $x \in \Delta(i) \cap [\gamma,1)$, then
\[
T_\alpha(x-\gamma) = T_\alpha(x) - \beta + d \bmod 1 =
\begin{cases}
T_\alpha(x) + \gamma & \text{ if } x-\gamma \in \Delta(i-(k+1)+d);\\
T_\alpha(x) - \frac{d}{\beta} & \text{ if } x-\gamma \in \Delta(i-k+d),
\end{cases}
\]
and in the second case, we have matching in the next iterate. The transition graph for the differences $T^j(0^-) - T^j(0^+)$ is given in Figure~\ref{fig:minusd}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=2]
\node[rectangle, draw=black] at (1.3,0) {$\gamma$};
\node[rectangle, draw=black] at (1.3,-1) {$d/\beta$};
\node[rectangle, draw=black] at (2.7,0) {\small matching};
\draw[->] (0.2,-.2)--(1,-.8);
\draw[->] (1.6,-.75)--(2.2,-.2);
\draw[->] (1.25,0.75)--(1.25,0.25);
\draw[->] (1.35,0.25)--(1.35,0.75);
\draw[->] (2.7,.8)--(2.7,.2);
\node[rectangle, draw=black] at (0,0) {1};
\node[rectangle, draw=black] at (1.3,1) {$-\gamma$};
\node[rectangle, draw=black] at (2.7,1) {$-d/\beta$};
\draw[->] (0.2,.2)--(1,.8);
\draw[->] (1.7,1)--(2.3,1);
\draw[->] (1.3,-0.2)--(1.3,-0.75);
\node at (1.45,.6) {\small $d$};
\end{tikzpicture}
\caption{The transition graph for the root of $\beta^2 - k\beta - d=0$.
The vertical arrows stand for the $d$ possible $i$ such that
$T_\alpha^j(0^-) \in \Delta(i)$ and $T_\alpha^j(0^+) = T_\alpha^j(0^-) \pm \gamma \in [0,1]$.}
\label{fig:minusd}
\end{center}
\end{figure}
As long as there is no matching, $T_\alpha$ switches the order of $T_\alpha^j(0^-)$
and $T_\alpha^j(0^+)$, and therefore we consider the second iterate with ``forbidden regions''
as the composition of two degree $d$ maps with plateaus.
For the first iterate, define for $i = 0,\dots, d-1$ the ``forbidden regions''
\[
V_i := \{ x \in \Delta(i) : x+\gamma \in \Delta(i+k-d) \}
= \Big[ \frac{i+1-\alpha}{\beta} \ , \ \frac{i+1+k-d-\alpha}{\beta} - \gamma\ \Big).
\]
Note that $T_\alpha(1 - \gamma) = \alpha = T_\alpha(0^+)$ and
$T_\alpha(\frac{i+1+k-d-\alpha}{\beta} - \gamma) = \gamma$.
Define $f^1_\alpha:[0,1-\gamma] \to [\gamma,1]$ as
\[ f^1_\alpha(x) = \begin{cases}
\gamma & \text { if } x \in V := \bigcup_{i=0}^{d-1} V_i; \\
T_\alpha(x) & \text{ otherwise.}
\end{cases}
\]
For the second iterate, the ``forbidden regions'' are
\[
W_i := \{ x \in \Delta(i) : x - \gamma \in \Delta(i-(k-d)) \}
= \Big[ \frac{i-(k-d)-\alpha}{\beta} + \gamma \ , \ \frac{i+1-\alpha}{\beta} \Big)
\]
for $i = k-d+1 ,\dots, k$.
Note that $T_\alpha(\gamma) = \beta+\alpha - (k+1) = T_\alpha(0^-)$ and
$T_\alpha(\frac{i-(k-d)-\alpha}{\beta} + \gamma) = 1-\gamma$.
Define $f^2_\alpha:[\gamma,1] \to [0,1-\gamma]$ as
\[ f^2_\alpha(x) = \begin{cases}
1-\gamma & \text { if } x \in W := \bigcup_{i=k-d+1}^{k} W_i; \\
T_\alpha(x) & \text{ otherwise.}
\end{cases}
\]
The composition
$g_\alpha := f^2_\alpha \circ f^1_\alpha : [0,1-\gamma] \to [0, 1-\gamma]$,
once we identify $0 \sim 1-\gamma$, becomes a non-decreasing degree $d^2$ circle endomorphism
with $d+d^2$ plateaus, and slope $\beta^2$ elsewhere.
\begin{figure}[h]
\centering
\subfigure[$T_\alpha$]{
\begin{tikzpicture}[scale=3.2]
\filldraw[yellow!30] (.171,0) rectangle (.226,1);
\filldraw[yellow!30] (.435,0) rectangle (.49,1);
\filldraw[yellow!30] (.699,0) rectangle (.754,1);
\draw(0,0)node[left]{\small $0$}--(.2,0)node[below]{\small $V_0$}--(.46,0)node[below]{\small $V_1$}--(.73,0)node[below]{\small $V_2$}--(1,0)node[below]{\small$1$}--(1,1)--(0,1)node[left]{\small $1$}--(0,.35)node[left]{\small $\alpha$}--(0,.209)node[left]{\small $\gamma$}--(0,0);
\draw[thick, purple!50!black](0,.35)--(.171,1)(.171,0)--(.435,1)(.435,0)--(.699,1)(.699,0)--(.963,1) (.963,0)--(1,.141);
\draw[dotted](0,0)--(1,1)(.171,0)--(.171,1)(.435,0)--(.435,1)(.699,0)--(.699,1)(.963,0)--(.963,1);
\draw[dashed](0,.209)--(.791,.209)--(.791,1);
\draw[dotted](.226,0)--(.226,1)(.49,0)--(.49,1)(.754,0)--(.754,1);
\end{tikzpicture}}
\hspace{2cm}
\subfigure[$T_\alpha$]{
\begin{tikzpicture}[scale=3.2]
\filldraw[yellow!30] (.908,0) rectangle (.963,1);
\filldraw[yellow!30] (.435,1) rectangle (.38,0);
\filldraw[yellow!30] (.699,0) rectangle (.644,1);
\draw(0,0)node[left]{\small $0$}--(.209,0)node[below]{\small $\gamma$}--(.405,0)node[below]{\small $W_1$}--(.67,0)node[below]{\small $W_2$}--(.93,0)node[below]{\small $W_3$}--(1,0)--(1,1)--(0,1)node[left]{\small $1$}--(0,.791)node[left]{\small $1-\gamma$}--(0,.35)node[left]{\small $\alpha$}--(0,0);
\draw[thick, purple!50!black](0,.35)--(.171,1)(.171,0)--(.435,1)(.435,0)--(.699,1)(.699,0)--(.963,1) (.963,0)--(1,.141);
\draw[dotted](0,0)--(1,1)(.171,0)--(.171,1)(.435,0)--(.435,1)(.699,0)--(.699,1)(.963,0)--(.963,1);
\draw[dashed](.209,0)--(.209,.791)--(1,.791);
\draw[dotted](.38,0)--(.38,1)(.644,0)--(.644,1)(.908,0)--(.908,1);
\end{tikzpicture}}
\hspace{1cm}
\subfigure[$f^1_\alpha$]{
\begin{tikzpicture}[scale=3.2]
\draw(0,.209)node[below]{\small $0$}--(.2,.209)node[below]{\small $V_0$}--(.46,.209)node[below]{\small $V_1$}--(.73,.209)node[below]{\small $V_2$}--(.791,.209)--(.791,1)--(0,1)node[left]{\small 1}--(0,.35)node[left]{\small $\alpha$}--(0,.209)node[left]{\small $\gamma$};
\draw[very thick, purple!50!black](0,.35)--(.171,1)(.171,.209)--(.226,.209)--(.435,1)(.435,.209)--(.49,.209)--(.699,1)(.699,.209)--(.754,.209)--(.791,.35);
\draw[dotted](0,.35)--(.791,.35)(.171,.209)--(.171,1)(.435,.209)--(.435,1)(.699,.209)--(.699,1)(.226,.209)--(.226,1)(.49,.209)--(.49,1)(.754,.209)--(.754,1);
\node at (.92,.125) {\small $1-\gamma$};
\draw[white] (1.2,.209)--(1.2,1);
\end{tikzpicture}}
\hspace{.9cm}
\subfigure[$f^2_\alpha$]{
\begin{tikzpicture}[scale=3.2]
\draw(.209,0)node[below]{\small $\gamma$}--(.405,0)node[below]{\small $W_1$}--(.67,0)node[below]{\small $W_2$}--(.93,0)node[below]{\small $W_3$}--(1,0)--(1,.791)--(.209,.791)node[left]{\small $1-\gamma$}--(.209,.141)node[left]{\small $\beta+ \alpha-(k+1)$}--(.209,0)node[left]{\small $0$};
\draw[very thick, purple!50!black](.209,.141)--(.38,.791)--(.435,.791)(.435,0)--(.644,.791)--(.699,.791)(.699,0)--(.908,.791)--(.963,.791)(.963,0)--(1,.141);
\draw[dotted](.209,.141)--(1,.141)(.38,0)--(.38,.791)(.435,0)--(.435,.791)(.699,0)--(.699,.791)(.644,0)--(.644,.791)(.908,0)--(.908,.791)(.963,0)--(.963,.791);
\draw[white] (1.4,0)--(1.4,.791);
\end{tikzpicture}}
\caption{The maps $T_\alpha$, $f_\alpha^1$ and $f_\alpha^2$ for $\beta$ satisfying $\beta^2-3\beta-3=0$ and $\alpha=0.35$.}
\end{figure}
The argument in the previous case gives again that
\[
X_\alpha = \{ x \in [0,1-\gamma] : g_\alpha^n(x) \notin \text{ plateaus for all } n \in \mathbb N\}
\]
has Hausdorff dimension $\frac{\log d^2}{\log \beta^2} = \frac{\log d}{\log \beta}$,
and also that
$A_{\beta} = \{ \alpha \in [k+1-\beta, 1) \, : \, g_\alpha^n(0^+) \notin \text{ plateaus for all } n \in \mathbb N \}$
has $\dim_H(A_\beta) = \frac{\log d}{\log \beta}$.
\end{proof}
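For example, for $k=3$ and $d=2$, i.e.\ $\beta = \frac{3+\sqrt{17}}{2} \approx 3.56$, this gives $\dim_H(A_\beta) = \frac{\log 2}{\log \beta} \approx 0.55$, whereas for $d=1$ (e.g.\ $\beta = 1+\sqrt2$) the bifurcation set has Hausdorff dimension $0$.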
\section{Multinacci numbers}\label{sec:multinacci}
The last family of Pisot numbers that we consider are the multinacci numbers. Let $\beta$ be the Pisot number that satisfies $\beta^k - \beta^{k-1} - \dots -\beta-1=0$. These numbers increase and tend to $2$ as $k \to \infty$. The map $T_{\alpha}$ has either two or three branches.
\begin{prop}\label{prop:two}
If $T_\alpha$ has two branches, i.e., $\alpha \in \big[0,\frac1{\beta^k}\big)$, then there is matching after $k$ steps.
\end{prop}
\begin{proof}
The map $T_\alpha$ has two branches if and only if
\[ \frac{2-\alpha}{\beta} \ge 1 \quad \Leftrightarrow \quad \alpha \le 2-\beta = 1-\frac1{\beta}-\frac1{\beta^2} - \cdots - \frac1{\beta^{k-1}}=\frac1{\beta^k}.\]
In case $\alpha \le \frac1{\beta^k}$ we have $T_{\alpha}(0^-) = \beta + \alpha -1 = \alpha + \frac1{\beta} + \frac1{\beta^2} + \cdots + \frac1{\beta^{k-1}}$. Since $T_{\alpha}(0^+) = \alpha$,
this means that $T_{\alpha}(0^+) \in \Delta(0)$ and $T_{\alpha}(0^-) \in \Delta(1)$. Hence,
\[ T^2_{\alpha} (0^-) = \beta \alpha + 1 + \frac1{\beta} + \cdots + \frac1{\beta^{k-2}} + \alpha-1 = T^2_{\alpha} (0^+) + \frac1{\beta} + \cdots + \frac1{\beta^{k-2}}.\]
Continuing in the same way, we get that $T_{\alpha}^j(0^+) \in \Delta(0)$ and $T_{\alpha}^j(0^-) \in \Delta(1)$ for all $0 \le j \le k-2$ and that
\begin{eqnarray*}
T^{j+1}_{\alpha} (0^-) &=& \beta(\beta^{j-1} + \cdots + \beta+1) \alpha + 1 + \frac1{\beta} + \cdots + \frac1{\beta^{k-(j+1)}} + \alpha-1 \\
&=& T^{j+1}_{\alpha} (0^+) + \frac1{\beta} + \cdots + \frac1{\beta^{k-(j+1)}}.
\end{eqnarray*}
So,
\[ T^{k-1}_{\alpha} (0^-) = \beta(\beta^{k-3} + \cdots + \beta+1) \alpha + 1 + \frac1{\beta} + \alpha-1 = T^{k-1}_{\alpha} (0^+) + \frac1{\beta}.\]
Lemma~\ref{l:1overbeta} tells us that $T_{\alpha}$ has matching after $k$ steps.
\end{proof}
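For the tribonacci number ($k=3$) and $\alpha \le \frac1{\beta^3}$, the above reads: $T_\alpha(0^-) = \alpha + \frac1{\beta} + \frac1{\beta^2}$, $T^2_\alpha(0^-) = T^2_\alpha(0^+) + \frac1{\beta}$, and Lemma~\ref{l:1overbeta} then gives matching at the third iterate.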
For the remainder of this section, we assume that $T_\alpha$ has three branches, so $\alpha > \frac{1}{\beta^k}$.
\begin{lemma}\label{lem:codes}
For every $j \ge 0$ we have
\[ |T^j_{\alpha}(0^+) - T^j_{\alpha}(0^-)| \in \Big\{ \frac{e_1}{\beta} + \frac{e_2}{\beta^2} + \cdots + \frac{e_k}{\beta^k} \, : \, e_1, \dots, e_k \in \{0,1\} \Big\}.\]
\end{lemma}
\begin{proof}
The proof goes by induction. Let $D_j = |T_\alpha^j(0^+)-T_\alpha^j(0^-)|$. Clearly
$D_0 = 1 = \sum_{n=1}^k \frac{1}{\beta^n}$.
Suppose now that $D_j = \sum_{n=1}^k \frac{e_n}{\beta^n}$ for $e_n \in \{0,1\}$.
There are three cases:
\begin{enumerate}
\item $T_\alpha^j(0^+)$ and $T_\alpha^j(0^-)$ belong to the same $\Delta(i)$, so $D_j < \frac1{\beta}$. This implies that $e_1=0$ and thus $D_{j+1} =\beta D_j = \sum_{n=2}^k \frac{e_n}{\beta^{n-1}}$ has the required form.
\item $T_\alpha^j(0^+)$ and $T_\alpha^j(0^-)$ belong to adjacent $\Delta(i)$'s. There are two cases:\\
If $e_1 = 0$, then $D_{j+1} = 1-\beta D_j = \sum_{n=2}^k \frac{1-e_n}{\beta^{n-1}} + \frac{1}{\beta^k}$ has the required form.\\
If $e_1 = 1$, then $D_{j+1} = \beta D_j-1 = \sum_{n=2}^k \frac{e_n}{\beta^{n-1}}$ has the required form.
\item $T_\alpha^j(0^+) \in \Delta(0)$ and $T_\alpha^j(0^-)\in \Delta(2)$ or vice versa. Then $D_j > \frac{1}{\beta}$, and since $\sum_{n=2}^k \frac{1}{\beta^n} = 1-\frac1{\beta} < \frac1{\beta}$ we must have $e_1 = 1$.
But then $D_{j+1} = 2-\beta D_j = 1-\sum_{n=2}^k \frac{e_n}{\beta^{n-1}}
= \sum_{n=2}^k \frac{1-e_n}{\beta^{n-1}} + \frac{1}{\beta^k}$ has the required form.
\end{enumerate}
This concludes the induction and the proof.
\end{proof}
\begin{remark}\label{rem:codes}
Note also that in cases (1) and (2) with $e_1 = 1$ in the above proof, $T_\alpha^j(0^+)-T_\alpha^j(0^-)$ and $T_\alpha^{j+1}(0^+)-T_\alpha^{j+1}(0^-)$ have the same sign, whereas in case (2) with $e_1 = 0$ and case (3), $T_\alpha^j(0^+)-T_\alpha^j(0^-)$ and $T_\alpha^{j+1}(0^+)-T_\alpha^{j+1}(0^-)$ have opposite signs. This knowledge will be used in the proof of Theorem~\ref{thm:tribonacci2}.
\end{remark}
Now assume that $\beta$ is the tribonacci number, i.e., the Pisot number with minimal polynomial $\beta^3-\beta^2-\beta-1$. Figure~\ref{fig:tribonacci} illustrates Lemma~\ref{lem:codes} and Remark~\ref{rem:codes}. Given $\alpha$ and initial difference $|0^--0^+| = 1=\frac1{\beta}+ \frac1{\beta^2}+\frac1{\beta^3}$ (coded as $111$), the path through the diagram is uniquely determined by the orbit of $0^+$. If $100$ is reached, there is matching in the next step.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=2]
\node[rectangle, draw=black] at (-.75,0) {111};
\node[rectangle, draw=black] at (.5,0) {001};
\node[rectangle, draw=black] at (2,0) {010};
\node[rectangle, draw=black] at (2,1) {101};
\node[rectangle, draw=black] at (2,-1) {011};
\node[rectangle, draw=black] at (3.5,-1) {110};
\node[rectangle, draw=red] at (3.5,0) {100};
\node[rectangle, draw=black] at (4.75,0) {\small matching};
\draw[->] (-.45,0)--(.2,0);
\draw[->] (.8,0)--(1.7,0);
\draw[->] (2.3,0)--(3.2,0);
\draw[->] (2,.8)--(2,.2);
\draw[->] (2,-.2)--(2,-.8);
\draw[->] (2.3,-1.05)--(3.2,-1.05);
\draw[->] (3.2,-.95)--(2.3,-.95);
\draw[->] (1.8,-.8)--(.8,-.2);
\draw[->] (.8,.2)--(1.8,.8);
\draw[->] (3.8,0)--(4.25,0);
\draw[->] (3.5,-.8)--(3.5,-.2);
\draw[->] (2.1,0.8) .. controls (2.4,.2) and (2.9,1.3) .. (2.25,1);
\node at (1,.6) {\small $1$};
\node at (1,-.6) {\small $1$};
\node at (1.3,-.15) {\small $0$};
\node at (2.7,-.15) {\small $0$};
\node at (2.7,.85) {\small $2$};
\node at (3.65,-.5) {\small $1$};
\node at (2.2,-.5) {\small $1$};
\node at (2.2,.4) {\small $1$};
\node at (2.75,-.8) {\small $2$};
\node at (2.75,-1.2) {\small $0$};
\end{tikzpicture}
\caption{The transition graph for the tribonacci number $\beta$
(root of $\beta^3 = \beta^2 + \beta + 1$).
The small numbers near the arrows indicate the difference in branch
between $T^n_\alpha(0^+)$ and $T^n_\alpha(0^-)$ when this arrow is taken.}
\label{fig:tribonacci}
\end{center}
\end{figure}
We will prove that the bifurcation set has $0<\dim_H(A_\beta) < 1$,
but first we indicate another matching interval.
\begin{prop}\label{p:notdense}
Let $\beta$ be the tribonacci number, so $\beta^3 = \beta^2 +\beta + 1$. If $\alpha \in \big[ \frac1{\beta}, \frac1{\beta^2}+\frac2{\beta^3}\big]$, then there is matching after four steps.
\end{prop}
\begin{proof}
Let $p(\alpha) = \frac{1-\alpha}{\beta-1}$ be the fixed point of the map $T_{\alpha}$. Note that $T_{\alpha}(0^+)= \alpha > p(\alpha)$ if and only if $ \alpha > \frac1{\beta}$ and that $T_{\alpha}(0^-)= \beta + \alpha -2 < p(\alpha)$ if and only if $\alpha < \frac1{\beta^2} + \frac2{\beta^3}$. So if $\alpha \in \big[ \frac1{\beta}, \frac1{\beta^2}+\frac2{\beta^3}\big]$, then the absolutely continuous invariant measure for $T_{\alpha}$ is not fully supported. One can check by direct computation that $T_{ \alpha}(0^+), T_{\alpha}(0^-), T^2_{\alpha}(0^+), T^2_{\alpha}(0^-) \in \Delta(1)$, which gives the following orbits:
\[ \begin{array}{ccccccc}
0^+ & \to &\alpha & \to & (\beta+1)\alpha -1 & \to & \beta^3 \alpha -\beta -1,\\
0^- & \to & \beta + \alpha -2 & \to & (\beta+1) \alpha -1-\frac1{\beta^2} & \to & \beta^3\alpha - \beta -1 -\frac1{\beta}.
\end{array}\]
Hence, there is matching after four steps. Also for $\alpha \in \big\{ \frac1{\beta}, \frac1{\beta^2}+\frac2{\beta^3}\big\}$ one can easily show that there is matching in four steps, since either $0^+$ or $0^-$ is mapped to the fixed point directly and the other one is mapped to 0 after three steps.
\end{proof}
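Numerically, for the tribonacci number $\beta \approx 1.839$, Proposition~\ref{prop:two} gives matching on $[0,\frac1{\beta^3}] \approx [0, 0.161]$ and Proposition~\ref{p:notdense} gives matching on $[\frac1{\beta}, \frac1{\beta^2}+\frac2{\beta^3}] \approx [0.544, 0.617]$.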
\begin{theorem}\label{thm:tribonacci2}
If $\beta$ is the tribonacci number,
then the bifurcation set $A_\beta$ has Hausdorff dimension strictly
between $0$ and $1$.
\end{theorem}
\begin{proof}
We know from Propositions~\ref{prop:two} and \ref{p:notdense} that there is matching if $\alpha \in [0,\frac{1}{\beta^3}]$ or $\alpha \in [\frac{1}{\beta}, \frac{1}{\beta^2} + \frac2{\beta^3}]$. Therefore it suffices to consider $\alpha \in A := [\frac1{\beta^3}, \frac1{\beta}] \cup [\frac{1}{\beta^2} + \frac2{\beta^3}, 1]$.
Figure~\ref{fig:tribonacci} enables us to represent the phase space
containing the orbits $\{ T_\alpha^n(0^+), T_\alpha^n(0^-) \}_{n \in \mathbb N}$
as $\cup_{e \in \mathcal F} I_e$, where the {\em fiber}
$\mathcal F = \{ \pm 001, \pm 010, \pm 011, \pm 101, \pm 110 \}$ consists of ten
points, and
$$
I_e = \begin{cases}
\, \big[0, 1-\frac{e_1}{\beta} - \frac{e_2}{\beta^2} - \frac{e_3}{\beta^3}\big] \times \{ e \}
&\text{ if } e = +e_1e_2e_3, \\[1mm]
\, \big[\frac{e_1}{\beta} + \frac{e_2}{\beta^2} + \frac{e_3}{\beta^3}, 1\big] \times \{ e \} &\text{ if } e = -e_1e_2e_3.
\end{cases}
$$
(Taking these subintervals of $[0,1]$ allows both
$T_\alpha^n(0^+)$ and $T_\alpha^n(0^-) = T_\alpha^n(0^+) \pm(\frac{e_1}{\beta} + \frac{e_2}{\beta^2} + \frac{e_3}{\beta^3})$
to belong to $[0,1]$ together.)
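For instance, $I_{+101} = \big[0, 1-\frac1{\beta}-\frac1{\beta^3}\big] \times \{+101\} = \big[0, \frac1{\beta^2}\big] \times \{+101\}$, since $1 = \frac1{\beta}+\frac1{\beta^2}+\frac1{\beta^3}$ for the tribonacci number.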
The dynamics on this space is a skew-product
$$
S_\alpha(x, \pm e_1e_2e_3) = \Big( T_\alpha(x), \phi_\alpha(\pm e_1e_2e_3) \Big),
\qquad \phi_\alpha: \begin{array}{llccc}
e & & 0 & 1 & 2 \\
\hline
\pm001 & \mapsto & \pm010 & \mp101 & \\
\pm010 & \mapsto & \pm100 & \mp011 & \\
\pm011 & \mapsto & \pm110 & \mp001 & \\
\pm101 & \mapsto & & \pm010 & \mp101 \\
\pm110 & \mapsto & & \pm100 & \mp011
\end{array}
$$
where the fiber map $\phi_\alpha$ is given as in Lemma~\ref{lem:codes}
and Remark~\ref{rem:codes}.
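For example, starting from $+011$, i.e.\ $T^n_\alpha(0^-)-T^n_\alpha(0^+) = \frac1{\beta^2}+\frac1{\beta^3}$, a branch difference $0$ leads to $\beta\big(\frac1{\beta^2}+\frac1{\beta^3}\big) = \frac1{\beta}+\frac1{\beta^2}$, coded $+110$, whereas a branch difference $1$ leads to $1-\frac1{\beta}-\frac1{\beta^2} = \frac1{\beta^3}$ with the opposite sign, coded $-001$, in accordance with the table above.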
However, as Figure~\ref{fig:skew} shows, there is a set $J_\alpha$, consisting of several intervals, on which $S_\alpha$ is not defined, because these intervals lead to matching within two steps. For $\alpha < 1-\frac{1}{\beta}$, we have
\begin{align*}
J_\alpha = & \Big( \Big[\frac{1-\alpha}{\beta} + \frac{1}{\beta^2}, \frac{2-\alpha}{\beta}\Big)
\cup \Big[\frac{1}{\beta^2}, \frac{1-\alpha}{\beta}\Big) \Big) \times \{ -010\} \\
& \bigcup \Big(\Big[0,\frac{1-\alpha}{\beta} - \frac{1}{\beta^2}\Big)
\cup \Big[\frac{1-\alpha}{\beta} , \frac{2-\alpha}{\beta} - \frac{1}{\beta^2}\Big) \Big) \times \{ +010\} \\
& \bigcup \Big[\frac{1}{\beta} + \frac{1}{\beta^2}, \frac{2-\alpha}{\beta}\Big) \times \{ -110 \}
\bigcup \Big[0, \frac{1-\alpha}{\beta} -\frac{1}{\beta^2}\Big) \times \{ +110 \},
\end{align*}
(depicted in Figure~\ref{fig:skew}), and similar values hold for $1-\frac{1}{\beta} \leq \alpha \leq 1-\frac{1}{\beta^2}$ and
$1-\frac{1}{\beta^2} < \alpha$. We can artificially extend $S_\alpha$ to $J_\alpha$ by making it affine there as well, with slope $\beta$. We denote the resulting map by $\hat S_\alpha$.
\begin{figure}[ht]
\begin{center}
\unitlength=8mm
\begin{picture}(12,10)(1.3,0)
\put(1,1){\line(1,0){4}}\put(1.5, 0.6){\tiny$-110$}
\put(4,0.98){\line(-1,0){0.4}}\put(4,0.96){\line(-1,0){0.4}}
\put(3.6, 0.5){\tiny$J_\alpha$}
\put(2,1){\line(0,1){0.1}}
\put(4,1){\line(0,1){0.1}}\put(2.7, 1.2){\tiny$\Delta(1)$}
\put(1.3,1.3){\vector(0,1){1.5}}\put(1, 2){\tiny$2$}
\put(5,2.8){\vector(3,-1){5}}\put(6.8, 1.4){\tiny$0$}
\put(10,1){\line(1,0){4}}\put(13, 0.6){\tiny$+110$}
\put(10,0.98){\line(1,0){0.4}}\put(10,0.96){\line(1,0){0.4}}
\put(10.1, 0.5){\tiny$J_\alpha$}
\put(11,1){\line(0,1){0.1}}
\put(13,1){\line(0,1){0.1}}\put(11.7, 1.2){\tiny$\Delta(1)$}
\put(10.5,1.3){\vector(0,1){1.5}}\put(10.7, 2){\tiny$2$}
\put(10,2.8){\vector(-3,-1){5}}\put(8.2, 1.4){\tiny$0$}
\put(1,3){\line(1,0){4}}\put(1.5, 2.6){\tiny$+011$}
\put(2,3){\line(0,1){0.1}}
\put(4,3){\line(0,1){0.1}}\put(2.7, 3.2){\tiny$\Delta(1)$}
\put(0.7,3.2){\vector(0,1){3.5}}\put(0.3,5.4){\tiny$1$}
\put(10,3){\line(1,0){4}}\put(13, 2.6){\tiny$-011$}
\put(11,3){\line(0,1){0.1}}
\put(13,3){\line(0,1){0.1}}\put(11.7, 3.2){\tiny$\Delta(1)$}
\put(14.2,3.2){\vector(0,1){3.5}}\put(14.4,5.4){\tiny$1$}
\put(6.5,3.5){\tiny\fbox{$\begin{array}{l}\pm100\text{ and} \\
\text{matching} \end{array}$}}
\put(4,1.3){\vector(2,1){3}}\put(5.1,2){\tiny$1$}
\put(10.1,1.3){\vector(-1,1){1.6}}\put(9.5,2){\tiny$1$}
\put(4,4.8){\vector(2,-1){2.3}}\put(5.3, 4.2){\tiny$0$}
\put(11.2,4.8){\vector(-2,-1){2.3}}\put(9.8,4.2){\tiny$0$}
\put(1,5){\line(1,0){4}}\put(1.5, 4.6){\tiny$-010$}
\put(4,4.98){\line(-1,0){1.2}}\put(4,4.96){\line(-1,0){1.2}}
\put(2,4.98){\line(-1,0){0.4}}\put(2,4.96){\line(-1,0){0.4}}
\put(3.5, 4.5){\tiny$J_\alpha$}
\put(2,5){\line(0,1){0.1}}
\put(4,5){\line(0,1){0.1}}\put(2.7, 5.2){\tiny$\Delta(1)$}
\put(1.3,4.8){\vector(0,-1){1.5}}\put(1, 4){\tiny$1$}
\put(10,5){\line(1,0){4}}\put(13, 4.6){\tiny$+010$}
\put(11,4.98){\line(1,0){1.2}}\put(11,4.96){\line(1,0){1.2}}
\put(10,4.98){\line(1,0){0.4}}\put(10,4.96){\line(1,0){0.4}}
\put(11.1, 4.5){\tiny$J_\alpha$}
\put(11,5){\line(0,1){0.1}}
\put(13,5){\line(0,1){0.1}}\put(11.7, 5.2){\tiny$\Delta(1)$}
\put(10.5,4.8){\vector(0,-1){1.5}}\put(10.7, 4){\tiny$1$}
\put(1,7){\line(1,0){4}}\put(1.5, 6.6){\tiny$-001$}
\put(2,7){\line(0,1){0.1}}
\put(4,7){\line(0,1){0.1}}\put(2.7, 7.2){\tiny$\Delta(1)$}
\put(1.3,7.2){\vector(0,1){1.5}}\put(1, 8){\tiny$1$}
\put(1.3,6.8){\vector(0,-1){1.5}}\put(1, 6){\tiny$0$}
\put(10,7){\line(1,0){4}}\put(13, 6.6){\tiny$+001$}
\put(11,7){\line(0,1){0.1}}
\put(13,7){\line(0,1){0.1}}\put(11.7, 7.2){\tiny$\Delta(1)$}
\put(10.5,7.2){\vector(0,1){1.5}}\put(10.7, 8){\tiny$1$}
\put(10.5,6.8){\vector(0,-1){1.5}}\put(10.7, 6){\tiny$0$}
\put(1,9){\line(1,0){4}}\put(1.5, 8.6){\tiny$+101$}
\put(2,9){\line(0,1){0.1}}
\put(4,9){\line(0,1){0.1}}\put(2.7, 9.2){\tiny$\Delta(1)$}
\put(6,8.9){\vector(1,0){3}}\put(7.5, 8.6){\tiny$2$}
\put(4,8.7){\vector(2,-1){6}}\put(8.5, 7.7){\tiny$1$}
\put(10,9){\line(1,0){4}}\put(13, 8.6){\tiny$-101$}
\put(11,9){\line(0,1){0.1}}
\put(13,9){\line(0,1){0.1}}\put(11.7, 9.2){\tiny$\Delta(1)$}
\put(9,9.1){\vector(-1,0){3}}\put(7.5, 9.3){\tiny$2$}
\put(11,8.7){\vector(-2,-1){6}}\put(6.5, 7.7){\tiny$1$}
\end{picture}
\caption{The central part of Figure~\ref{fig:tribonacci}
represented as an expanding map on ten intervals.
The small bold
subintervals in the intervals labelled $\pm010$ and $\pm110$
are, for $\alpha < 1-\frac{1}{\beta}$, the regions leading to matching in two steps.}
\label{fig:skew}
\end{center}
\end{figure}
{\bf Claim:} For each $\alpha \in A$ the set $\cup_{n \geq 0} S_\alpha^{-n}(J_\alpha)$ is dense in $[0,1]$.
To prove the claim, first observe that the choice of $\alpha$ ensures
that $T_\alpha$ has a fully supported invariant density.
Therefore, by the Ergodic Theorem, the orbit of Lebesgue-a.e.\ point is dense in $[0,1]$.
In particular, for $p = \frac{1-\alpha}{\beta-1}$ the fixed point,
$\cup_{n \geq 0} T_\alpha^{-n}(p)$ is dense.
This implies that $\cup_{n \geq 0} \hat S_\alpha^{-n}(\{ p \} \times \mathcal F)$
is dense in $\cup_{e \in\mathcal F} I_e$.
By inspection of Figure~\ref{fig:skew}, if only
arrows labelled $0$ and $1$ are used, then
the states $\pm 010$ cannot be avoided for more than four iterates.
Hence for every neighbourhood
$V \owns (p,e)$, $e \in \mathcal F$, there is an $n \in \mathbb N$ such that
$\hat S_\alpha^n(V) \cap J_\alpha \neq \emptyset$.
Together this proves the claim.
\\[3mm]
Let $X_\alpha := \{ x \in \cup_{e \in\mathcal F} I_e : S_\alpha^n(x) \notin J_\alpha\ \forall n \geq 0\}$.
It follows from \cite[Theorem 1 \& 2]{Rai92}
(extending results of \cite{MS80} and \cite{Urb87})
that $h_{top}(S_\alpha|X_\alpha)$ and $h_{top}(\hat S_\alpha|X_\alpha)$
depend continuously on $\alpha \in A$.
In fact, \cite[Theorem 3]{Rai92} gives that
$\dim_H(X_\alpha)$ depends
continuously on $\alpha$ as well, and therefore
they reach their infimum and supremum for some $\alpha \in A$.
{\bf The upper bound:} To show that $\sup_{\alpha \in A} \dim_H(X_\alpha) < 1$, note
that, since $\hat S_\alpha$ is uniformly expanding with slope $\beta$,
it is ergodic with respect to Lebesgue measure and has a unique measure of maximal
entropy $\mu_{\max}(\alpha)$.
This measure is fully supported, and clearly $h_{top}(\hat S_\alpha) = \log \beta$.
Since $\text{supp}(\mu_{\max}(\alpha)) \neq X_\alpha$, we have
$h_{top}(\hat S_\alpha|X_\alpha) < \log \beta$, and the above-mentioned
continuity implies that also
$\sup_{\alpha \in A} h_{top}(\hat S_\alpha|X_\alpha) < \log \beta$.
The dimension formula (\cite{Bow79}) implies that
$$
\sup_{\alpha \in A} \dim_H(X_\alpha) \leq
\sup_{\alpha \in A} \frac{h_{top}(\hat S_\alpha|X_\alpha)}
{\chi(\hat S_\alpha|X_\alpha)}
=\sup_{\alpha \in A} \frac{h_{top}(\hat S_\alpha|X_\alpha)}{\log \beta}
< 1.
$$
Here $\chi$ denotes the Lyapunov exponent, which has the common value
$\log \beta$ for every $\hat S_\alpha$-invariant measure,
because the slope is constant $\beta$.
If ${\mathcal V} = {\mathcal V}(\alpha) = \{ V_k \}_k$ is a cover of $X_\alpha$ consisting of closed disjoint intervals,
then we can shrink each $V_k$ so that $\partial V_k$ consists of
preimages $w(\alpha)$ of $\Big(\!\{\frac{1-\alpha}{\beta}, \frac{2-\alpha}{\beta}\} \times \mathcal F\Big)
\cup\, \bigcup_{e \in \mathcal F} \partial I_e \cup \partial J_\alpha$.
By Remark~\ref{rem:pw_affine}, the intervals $V_k$ move to the left with speed
$\approx \frac{1}{\beta-1}$ as $\alpha$ moves in $A$.
At the same time, $S_\alpha(0^+ , \{ -001\})$ moves with speed $1$ to the right.
Therefore, to each $V_k = V_k(\alpha) \in {\mathcal V}(\alpha)$ corresponds a parameter interval $U_k$ of
length $\leq 2(1+\frac{1}{\beta-1}) |V_k|$ such that $T_\alpha(0^+) \in V_k(\alpha)$
if and only if $\alpha \in U_k$, and
the $\gamma$-dimensional Hausdorff mass satisfies $\sum_k |U_k|^\gamma \leq
2^\gamma(1+\frac{1}{\beta-1})^\gamma \sum_k |V_k|^\gamma$.
Furthermore, $A_\beta \subset \cup_k U_k$ for each cover $\mathcal V$,
and therefore $\dim_H(A_\beta) \leq \sup_{\alpha \in A} \dim_H(X_\alpha) < 1$.
{\bf The lower bound:} To obtain a positive lower bound we find a Cantor subset $K(\alpha) \subset X_\alpha$ such that
$h_{top}(S_\alpha|K(\alpha))$ can be estimated. Let $e \in \mathcal F$ and $p_0 = p_0(\alpha) \in I_e$ be periodic under $S_\alpha$, say of period $m$. We think of $p_0$ as a lift of the fixed point $p$ of $T_\alpha$, but due to the transition between fibers, the period of $p_0$ is strictly larger than $1$. Take $\alpha_0$ such that $S^{n-1}_{\alpha_0}(\alpha_0, -001) = p_0$ for some $n \in \mathbb N$, and let $U \owns \alpha_0$ be a neighbourhood such that $g: \alpha \mapsto S_\alpha^{n-1}(\alpha,-001)$ maps $U$ to a fixed neighbourhood of $p_0$; more precisely, write $U = (\alpha_1, \alpha_2)$ and assume that there is $\eta > 0$ such that $g|U$ is monotone, $g(\alpha_1) = p_0(\alpha_1) - \eta \in I_e$ and $g(\alpha_2) = p_0(\alpha_2) + \eta \in I_e$. Since $\frac{d}{d\alpha} g(\alpha) = \frac{\beta^n-1}{\beta-1}$ and $\frac{d}{d\alpha} p_0(\alpha) = \frac{-1}{\beta}$ by Remark~\ref{rem:pw_affine}, such $\eta > 0$ and interval $U$ can be found.
Since $\cup_{j \geq 0} S_\alpha^{-j}(p_0(\alpha))$ is dense, we can find $q_0 \in I_e \setminus \{ p_0\}$ such that $S_\alpha^k(q_0) = p_0$ for some $k \geq 1$. Moreover, the preimages $q_{-j} = I_e \cap S_\alpha^{-m}(q_{1-j})$, $j \geq 1$, converge exponentially fast to $p_0$ as $j \to \infty$. Let $V_0$ be a neighbourhood of $q_0$ such that $S_\alpha^k$ maps $V_0$ diffeomorphically onto a neighbourhood $W \owns p_0$; say $W \subset B_\eta(p_0)$. Let $V_{-j} = I_e \cap S_\alpha^{-m}(V_{1-j})$ and find $j'$ so large that $V_{-j'}$ and $V_{-j'-1} \subset W$. Then there is a maximal Cantor set $K' \subset V_{-j'} \cup V_{-j'-1}$ such that $S_\alpha^{k+j'm}(V_{-j'} \cap K') = S_\alpha^{k+(j'+1)m}(V_{-j'-1} \cap K')= K'$. Next let $K = \cup_i S_\alpha^i(K')$. Then $K = K(\alpha)$ is $S_\alpha$-invariant and since $K'$ is in fact a full horseshoe of two branches with periods $k+j'm$ and $k+(j'+1)m$, $h_{top}(S_\alpha|K) \geq \frac{\log 2}{k+(j'+1)m} > 0$. All these sets vary as $\alpha$ varies in $U$, but since the slope is constant $\beta$, the dimension formula gives
$\dim_H(K(\alpha)) \geq \frac{\log 2}{(k+(j'+1)m) \log \beta}$, uniformly in $\alpha \in U$.
The set $K(\alpha)$ moves slowly as $\alpha$ moves in $A$; as in Remark~\ref{rem:pw_affine}, $\frac{d}{d\alpha} y(\alpha)$ is bounded when $y(\alpha) \in K(\alpha)$ is the continuation of $y \in K(\alpha_0)$. At the same time, $g$ is affine on $U$ and has slope $\frac{d}{d\alpha} T^{n-1}_\alpha(\alpha) \equiv \frac{\beta^n-1}{\beta-1}$. Therefore, $U$ contains a Cantor set of dimension $\ge \frac{\log 2}{(k+(j'+1)m)\log \beta}$, and this proves the lower bound.
\iffalse
Because we chose $\alpha \in A$, the fixed point $p$ has another preimage
than itself. By the symmetry $(\alpha,x) \mapsto (1-\alpha,1-x)$,
we can assume that this preimage lies in $\Delta(0)$.
For a small fixed $\eta > 0$, let us first assume that
there is $W_0 \subset \Delta(0)$ such that $T_\alpha(W_0) =
[p-\eta,p+\eta] \subset \Delta(1)$.
Define recursively $W_n = T_\alpha^{-1}(W_{n-1}) \cap \Delta(1)$.
Then for a fixed $m = m(\eta) \in \mathbb N$ we have
$W_{3m-1}, W_{3m+2} \subset [p-\eta,p+\eta] \subset \Delta(1)$,
and there is a maximal set $L \subset (W_{3m-1} \cup W_{3m+2})$
such that
$T_\alpha^{3m}(L \cap W_{3m-1}) = T_\alpha^{3m+3}(L \cap W_{3m+2}) = L$,
and
$h_{top}(T_\alpha | \cup_{n \geq 0} T_\alpha^n(L)) \geq \frac{\log 2}{3m+2} > 0$.
Next find $e \in \mathcal F$ such that
$\hat S_\alpha^{3m}((L \cap W_{3m-1}) \times \{e\})
= \hat S_\alpha^{3m+3}((L \cap W_{3m+2}) \times \{ e \} ) = L \times \{ e \}$.
Then $K_\alpha = \cup_{n \geq 0} \hat S_\alpha^n(L \times \{ e \})$
is the required subset of $X_\alpha$ with $h_{top}(S_\alpha|X_\alpha) \geq \frac{\log 2}{3m+2}$. By the
dimension formula, $\dim_H(K_\alpha) \geq \frac{\log 2}{(3m+2)\log \beta}$
for all $\alpha \in A$.
Now if $T_\alpha(\Delta(0)) \not\supset [p-\eta, p+\eta]$,
then at least we can find $W_0 \subset \Delta(0)$ such that
$T_\alpha(W_0) = [p, p+\eta]$.
Define $W_1 = T_\alpha^{-1}(W_0) \cap \Delta(2)$ and then
$W_n = T_\alpha^{-1}(W_{n-1}) \cap \Delta(1)$ recursively
for $n \geq 2$, and use further the same procedure as above.
The set $K_\alpha$ moves slowly as $\alpha$ moves in $A$;
as in Remark~\ref{rem:pw_affine}, $\frac{d}{d\alpha} y(\alpha)$ is bounded for $y \in K_{\alpha}$.
At the same time, $\alpha \mapsto S_\alpha^{n-1}(\alpha,-001)$
is piecewise affine on $A$ and has slope
$\frac{d}{d\alpha} T^{n-1}_\alpha(\alpha) \equiv \frac{\beta^n-1}{\beta-1}$.
Therefore, whenever
$\{ \alpha \in A : S^{n-1}_\alpha(\alpha, -001) \in K_\alpha \}$ is non-empty,
it contains a Cantor set of dimension $\ge \frac{\log 2}{(m+2)\log \beta}$,
and this proves the lower bound.
\fi
\end{proof}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=2]
\node[rectangle, draw=black] at (-1,1) {1111};
\node[rectangle, draw=black] at (0,0) {0001};
\node[rectangle, draw=black] at (1,0) {0100};
\node[rectangle, draw=red] at (2,0) {1000};
\node[rectangle, draw=black] at (3,0) {1100};
\node[rectangle, draw=black] at (2,1) {0111};
\node[rectangle, draw=black] at (3,1) {1110};
\node[rectangle, draw=black] at (-1,-1) {1101};
\node[rectangle, draw=black] at (0,-1) {0101};
\node[rectangle, draw=black] at (1,-1) {0010};
\node[rectangle, draw=black] at (3,-1) {0110};
\node[rectangle, draw=black] at (0,-2) {1010};
\node[rectangle, draw=black] at (1,-2) {1011};
\node[rectangle, draw=black] at (2,-2) {1001};
\node[rectangle, draw=black] at (3,-2) {0011};
\node[rectangle, draw=black] at (2,-.75) {\small matching};
\draw[->] (-1,.8)--(-.3,.2);
\draw[->] (.3,-.2)--(.7,-.8);
\node at (.45,-.2) {0};
\draw[->] (-.3,-.2)--(-1,-.8);
\node at (-.65,-.3) {1};
\draw[->] (2,-.2)--(2,-.55);
\draw[->] (-.7,-1)--(-.3,-1);
\node at (-.5,-.85) {2};
\draw[->] (-.7,-1.2)--(-.3,-1.8);
\node at (-.65,-1.7) {1};
\draw[->](0,-1.2)--(0,-1.8);
\node at (.15,-1.5) {0};
\draw[->](.3,-2)--(.7,-2);
\node at (.5,-2.15) {2};
\draw[->](1.3,-2)--(1.7,-2);
\node at (1.5,-2.15) {2};
\draw[->] (2.7,-2)--(2.3,-2);
\node at (2.5,-2.15) {1};
\draw[->](1,-.8)--(1,-.2);
\node at (1.15,-.5) {0};
\draw[->](1,-1.2)--(1,-1.8);
\node at (1.15,-1.5) {1};
\draw[->](2.9,-1.2)--(2.9,-1.8);
\node at (2.75,-1.5) {1};
\draw[->](3.1,-1.8)--(3.1,-1.2);
\node at (3.25,-1.5) {0};
\draw[->](3,-.8)--(3,-.2);
\node at (3.15,-.5) {0};
\draw[->](3,.8)--(3,.2);
\node at (3.15,.5) {1};
\draw[->](1.3,0)--(1.7,0);
\node at (1.5,-.15) {0};
\draw[->](2.7,0)--(2.3,0);
\node at (2.5,-.15) {1};
\draw[->](2.3,1)--(2.7,1);
\node at (2.5,1.15) {0};
\draw[->] (2.7,.2)--(2.3,.8);
\node at (2.6,.6) {2};
\draw[->] (1,.2)--(1.7,.8);
\node at (1.25,.6) {1};
\draw[->](2,-1.8)--(1.3,-1.2);
\node at (1.75,-1.4) {1};
\draw[->](1.7,1)--(0,1)--(0,.2);
\node at (1,1.15) {1};
\draw[->](3.3,1)--(3.5,1)--(3.5,-2)--(3.3,-2);
\node at (3.65,-.5) {2};
\draw[->](2,-2.2)--(2,-2.4)--(-1,-2.4)--(-1,-1.2);
\node at (-1.15,-1.7) {2};
\draw[->] (.1,-.8) .. controls (.4,-.2) and (-.4,-.2) .. (-.1,-.8);
\node at (0,-.5) {1};
\draw[->] (.3,-1.8)--(.7,-.2);
\node at (.5,-1.4) {1};
\draw[->](1.3,-1.8)--(2.7,-1.2);
\node at (2.3,-1.55) {1};
\end{tikzpicture}
\caption{The transition graph for the Pisot number $\beta$ that is the root of $\beta^4 = \beta^3 + \beta^2 + \beta + 1$.
The small numbers near the arrows indicate the difference in branch between $T^n_\alpha(0)$ and $T^n_\alpha(1)$ when this arrow needs to be taken.}
\label{fig:4bonacci}
\end{center}
\end{figure}
As the degree of the multinacci number becomes higher, the transition graph becomes more complicated. Figure~\ref{fig:4bonacci} shows the graph corresponding to the Pisot number with minimal polynomial $\beta^4 - \beta^3-\beta^2-\beta-1$. We conjecture that there is a.s.~matching for all multinacci numbers $\beta$, but it is not obvious how the technique from Theorem~\ref{thm:tribonacci2} can be extended to cover the general case.
\section{Final remarks}
We make one final remark about the presence of matching for Pisot and non-Pisot $\beta$'s. The techniques used in this article are based on the fact that the difference between $T^n_\alpha(0^+)$ and $T^n_\alpha(0^-)$ can only take finitely many values. Note that $|T^n_\alpha(0^+)-T^n_\alpha(0^-)| = \sum_{i=0}^n c_i \beta^i$ for some $c_i \in \{0,\pm 1, \ldots, \pm \lceil \beta \rceil \}$, see (\ref{q:0orbit}). The Garsia Separation Lemma (\cite{Gar62}) implies that for any Pisot number $\beta$, the set
\[ Y(\beta) = \big\{ \sum_{i=0}^n c_i \beta^i \, : \, c_i \in \{ 0, \pm 1, \ldots, \pm \lceil \beta \rceil\}, \, n = 0,1,2,\ldots \big\}\]
is uniformly discrete, i.e., there is an $r>0$ such that $|x-y|>r$ for each $x\neq y \in Y(\beta)$ (see for example \cite{AK13}). In particular, $Y(\beta)\cap [0,1]$ is finite. Hence, for any Pisot number $\beta$ the set of states in the corresponding transition graph will be finite. We suspect that a substantial number of actual paths through this graph will lead to the matching state, which leads us to the following conjecture.
\begin{conj}
If $\beta$ is a non-quadratic Pisot number, then $0 < \dim_H(A_\beta) <1$.
\end{conj}
Salem numbers require further techniques to decide on prevalent matching. For non-Salem, non-Pisot algebraic numbers finding any matching seems unlikely.
\medskip
{\bf Acknowledgement:}
The authors would like to thank Christiane Frougny and the referee, whose remarks spurred us on to sharper results, which led to Theorem~\ref{t:quadratic} in its present form. The second author is partially supported by the GNAMPA group of the ``Istituto Nazionale di Alta Matematica'' (INdAM). The third author is supported by the NWO Veni-grant 639.031.140.
\section{Introduction}
The emergence of new, modern protocols for the Internet promises a solution to long-standing issues that can only be solved by changing core parts of the current protocol stack.
Such new protocols and their implementations must meet the highest requirements: they will have to function reliably, at levels of maturity similar to those of the protocols they aim to replace.
This includes aspects such as reliability, security, performance and, prominently, interoperability between implementations.
Ensuring interoperability is the main reason for standardizing QUIC as a protocol, and the IETF standardization process goes to great lengths, such as requiring multiple independent implementations, to make sure this is achievable.
Thus, better methods and tools that assist with the difficult challenge of interoperability testing are highly desirable.
Automated testing techniques, such as \ac{symex}, have proven themselves to be capable of analyzing complex real world software, usually focused on finding low-level safety violations~\cite{ThreeDecadesLater2013}, and \ac{symex} has also proven its worth in the networking domain in various other ways~\cite{KleeNet2008,KleeNet2010,SOFT2012,PIC2015,SymbexNet2014,SymNet2016,SymPerf2017,VigNAT2017}.
This paper explores the potential of \ac{symex} for checking the interoperability of QUIC implementations.
It does so by presenting a \ac{symex}-based method to detect interoperability issues, and demonstrates its potential in a case study of two existing QUIC implementations, picoquic and QUANT.
We discover that, while our method is able to successfully analyze nontrivial interactions between different implementations, implementations need to disclose more protocol-level information to truly enable deep semantic interoperability testing.
\subsection{Key Contributions and Outline}
The key contributions of this paper are as follows:
\begin{itemize}
\item We describe a method that uses \acf{symex} to test QUIC implementations for interoperability, and discuss how additional information from implementations about their current protocol state could be leveraged for semantically deeper testing.
\item We then present our case study in which we symbolically test picoquic and QUANT for interoperability, and discuss the abstraction layers that are necessary to enable \ac{symex} for QUIC implementations.
\item The final key contribution is the evaluation of our implementation, testing picoquic and QUANT, in which we report on the performance of our method as well as on defects we discovered.
\end{itemize}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{se-tree.pdf}
\caption{\acf{symex} of a small example program. Constraints encountered in branching statements are recorded in the path constraints of the corresponding explored paths. By checking new branching conditions for satisfiability on each path, exactly all reachable paths through the program are explored.}
\label{fig:symex}
\end{figure*}
We begin by giving background on \ac{symex} in Sect.~\ref{sec:symex}, followed by a discussion of related work in Sect.~\ref{sec:rw}. We then present our method in Sect.~\ref{sec:method}, and describe its implementation and the setup of the case study in Sect.~\ref{sec:casestudy}. This is followed by an evaluation of our results in Sect.~\ref{sec:eval}, before we shortly discuss future work in Sect.~\ref{sec:fw} and conclude in Sect.~\ref{sec:conclusion}.
\section{Symbolic Execution (SymEx)}
\label{sec:symex}
Given a program that takes some input (e.g., command line arguments, files, network packets, etc.), \ac{symex} systematically explores the program by executing all reachable paths.
It does so by assigning symbolic values instead of concrete ones to its input, which allows the \ac{symex} engine to fork execution at a branch-statement (i.e., \texttt{if}) when both branches are feasible.
If this is the case, the condition that caused the fork (i.e., the condition inside the \texttt{if} statement) is remembered on the execution path following the \texttt{true}-branch as an additional constraint.
On the other execution path, which follows the \texttt{false}-branch, the negation of the condition is remembered as a constraint instead.
To determine the reachability given the current constraints, an SMT solver, such as Z3~\cite{Z3SMT2008}, is queried.
SMT solvers are the backbone of every \ac{symex} engine: their performance and completeness directly influence the efficiency of the symbolic analysis, and they ensure that only feasible paths are explored.
Continuing in this fashion a \ac{symex} engine will explore all reachable paths through the program.
Whenever a path terminates, either regularly or because an error was encountered, the engine will query the SMT solver using the collected path constraints to get concrete values for each symbolic input value.
These values will then be recorded in the form
of a test case, which can then be run again later to exercise the same path through the program.
If a bug was encountered, the generated test case will be able to reproduce the taken path for further debugging and introspection.
Figure~\ref{fig:symex} shows a small example program that performs operations depending on the value of a symbolic input variable \texttt{x}.
The program contains two conditional branches that have to be traversed before the \texttt{return} in line 4 is reached.
On the right, all paths explored by \ac{symex} are shown.
In the beginning, \texttt{x} is unconstrained, but, as \ac{symex} progresses, a path for each side of the first branch is explored.
For each side, a corresponding constraint (either $x<5$ or $x\geq5$) is added to the path constraints.
When the second branch is reached, only three paths need to be explored further: The constraint set $\{x<5,x\geq100\}$ is not satisfiable, and therefore this path will never be reachable during execution.
In the end, \ac{symex} will query the SMT solver for concrete values for \texttt{x} for each path to generate a suite of concrete test cases that cover all reachable paths of the program.
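To make the role of the path constraints and the SMT solver concrete, the following minimal sketch (our own illustration, independent of KLEE and of the exact program in Fig.~\ref{fig:symex}) feeds the constraint sets of the example, built from the branch conditions $x<5$ and $x\geq100$ mentioned above, into the Z3 Python bindings, prunes the infeasible combination, and derives a concrete test input for each feasible path.
\begin{verbatim}
# Minimal sketch using the Z3 Python bindings (pip install z3-solver).
from z3 import Int, Solver, sat

x = Int('x')

# Path constraint sets collected along the four candidate paths of the example.
paths = {
    "x < 5,  then x < 100":  [x < 5, x < 100],
    "x < 5,  then x >= 100": [x < 5, x >= 100],   # infeasible combination
    "x >= 5, then x < 100":  [x >= 5, x < 100],
    "x >= 5, then x >= 100": [x >= 5, x >= 100],
}

for name, constraints in paths.items():
    solver = Solver()
    solver.add(*constraints)
    if solver.check() == sat:
        # The model yields a concrete test input exercising this path.
        print(f"{name}: feasible, e.g. x = {solver.model()[x]}")
    else:
        print(f"{name}: infeasible, path pruned")
\end{verbatim}
Only three of the four constraint sets are satisfiable, mirroring the pruning described above; the three models are exactly the kind of concrete test cases a \ac{symex} engine emits.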
\section{Related Work}
\label{sec:rw}
Formal methods have long been used to analyze network protocols~\cite{FormalProtocolTesting1989,DivideAbstractModelCheck1999,SSL30Analysis1998,SPINCrypto2002,CMC2002}, often with a focus on security.
However, even if the formal analysis of a network protocol has successfully proven a property, be it related to correctness or security, it is by no means guaranteed that this property will also hold for an implementation of said protocol.
Programs have also been analyzed with formal methods, such as \ac{symex}, to test for obvious problems like memory safety and assertion violations~\cite{KLEE2008,ThreeDecadesLater2013} and for less easily checked properties, such as liveness violations~\cite{SymLive2018} and authentication bypass flaws in firmware binaries~\cite{Firmalice2015}.
One of the main problems encountered when formally analyzing real-world code is the tendency of the state space to grow infeasibly large, a problem also known as \emph{state explosion}.
Many different approaches to tame the state explosion problem inherent in \ac{symex} have been proposed in the past: state merging~\cite{EfficientStateMerging2012}, targeted search strategies~\cite{DirectedSymEx2011} and pruning of provably equivalent paths~\cite{RWset2008}, to name a few.
As the state explosion problem grows exponentially with the number of programs considered at the same time, approaches explicitly targeting distributed programs have been developed.
For example, KleeNet~\cite{KleeNet2008,KleeNet2010} exploits the independence of networked programs by delaying codependent path forks until messages are received at each node that require the fork to be actualized.
Testing protocols and programs independently is -- however worthwhile -- not enough.
To this end, approaches have been designed that test implementations for protocol compliance using many different testing and verification methodologies, ranging from fuzzing~\cite{SNOOZE2006,MutationFuzz2012} through \ac{symex}~\cite{SymbexNet2014} to model checking~\cite{ModelCheckingProtocolImplementations2004,CMC2002}.
Validating that a given implementation fulfills a specification or standard does, however, require a formalized representation to be available, which effectively constitutes another implementation of the specification.
One way to circumvent this chicken-and-egg problem is to exploit the fact that any relevant standard will have multiple implementations, which enables the substitution of compliance testing with interoperability testing.
While it is possible that neither implementation is technically compliant with the standard, it becomes more and more improbable that the standard is captured incorrectly by many different people in exactly the same manner.
Due to the inherent state-explosion problem of interoperability testing (multiple different, or even all possible programs are considered at once), multiple approaches to specialized~\cite{SOFT2012} and general~\cite{PIC2015, SymbexNet2014} interoperability testing have been proposed in the past.
\section{Method Outline}
\label{sec:method}
\ac{symex} engines such as KLEE~\cite{KLEE2008}, which our case study utilizes, usually expect their input to be a program.
However, protocol implementations are naturally libraries, and as such lack an implicit singular entry point.
Although ways to analyze libraries directly have been proposed~\cite{UCKLEE2015}, they suffer from a lack of insight into what constitutes a \emph{sensible} use of the library.
Instead, we propose to analyze programs that utilize the libraries and execute different test scenarios.
One way to choose the test scenarios for this would be to use existing applications that already implement real-world application logic.
This is currently difficult
\begin{figure}[ht]
\begin{center}
% [figure body lost in the source: plot of ${\cal F}$ versus $u_*$, showing the
%  U-shaped and disconnected branches, with markers at $u_*/u_T\simeq 1.13$ and $1.37$]
\parbox{70ex}{
\caption{${\cal F}$ as a function of $u_*$.
}
\label{Fus}}
\end{center}
\end{figure}
The behavior of ${\cal F}$ as a function of $u_*$ and of $L$ is shown in Fig.~\ref{Fus} and Fig.~\ref{finTF}, respectively.
\begin{figure}[ht]
\begin{center}
\begin{picture}(200,200)(0,0)
\put(5,0){\includegraphics[scale=0.7]{free_energy4.eps}}
\put(200,15){\makebox(0,0){$\frac{u_T}{R^2}L$}}
\put(0,173){\makebox(0,0){$\frac{R}{u_T^2}{\cal F}$}}
\put(102,170){\makebox(0,0){\footnotesize
$-4f_0^3\left(\frac{u_T}{R^2}L\right)^{-2}$}}
\put(128,162){\vector(3,-1){20}}
\put(26,130){\vector(-1,1){10}}
\put(42,124){\makebox(0,0){\footnotesize $\frac{u_*}{u_T}=1$}}
\put(162,125){\vector(0,4){25}}
\put(170,117){\makebox(0,0){\footnotesize $\frac{u_*}{u_T}\simeq 1.13$}}
\put(90,10){\vector(-1,-4){5}}
\put(110,0){\makebox(0,0){\footnotesize $\frac{u_*}{u_T}\rightarrow\infty$}}
\put(6,141){\vector(1,0){8}}
\put(-8,140){\makebox(0,0){\footnotesize $-0.5$}}
\put(175,175){\vector(0,-1){32}}
\put(173,183){\makebox(0,0){disconnected}}
\put(120,75){\vector(-1,1){10}}
\put(146,68){\makebox(0,0){U-shaped}}
\end{picture}
\parbox{70ex}{
\caption{${\cal F}$ as a function of $L$.
}
\label{finTF}}
\end{center}
\end{figure}
Fig.~\ref{figL} and Fig.~\ref{Fus} suggest that
both $L$ and ${\cal F}$ take maximum values at $u_*/u_T\sim 1.13$.
In fact, one can show a relation
\begin{eqnarray}
\frac{\partial{\cal F}(u_*)}{\partial u_*}=\frac{u_*^3}{R^3}\sqrt{1-
\frac{u_T^4}{u_*^4}}\, \frac{\partial L(u_*)}{\partial u_*}\ ,
\end{eqnarray}
which implies that $L$ and ${\cal F}$ attain their maxima at the same point.
Therefore, there is a critical value of $L$ around
\begin{eqnarray}
L_c\simeq \frac{R^2}{u_T}\times 0.308\ ,
\end{eqnarray}
at which the brane configuration jumps:
\begin{eqnarray}
\begin{array}{ccl}
L<L_c&\Rightarrow&\mbox{U-shaped solution}\,, \\
L>L_c&\Rightarrow&\mbox{disconnected solution}\,.
\end{array}
\end{eqnarray}
A plot of the minimum values of ${\cal F}$
is shown in Fig.~\ref{finTF2}.
\begin{figure}[ht]
\begin{center}
\begin{picture}(200,200)(0,0)
\put(5,0){\includegraphics[scale=0.7]{free_energy7.eps}}
\put(200,15){\makebox(0,0){$\frac{u_T}{R^2}L$}}
\put(0,173){\makebox(0,0){$\frac{R}{u_T^2}{\cal F}$}}
\put(146,15){\line(0,1){125}}
\put(146,6){\vector(0,1){8}}
\put(146,0){\makebox(0,0){\footnotesize $0.308$}}
\end{picture}
\parbox{70ex}{
\caption{Free energy as a function of $L$.
}
\label{finTF2}}
\end{center}
\end{figure}
This phenomenon is similar to the behavior of the probe D8 brane discussed in \cite{Aharony:2006da} in the context of the holographic QCD based on the D4/D8-brane system~\cite{Sakai:2004cn}.
In the phase described by the disconnected solution,
the $U(1)\times U(1)$ symmetry, which is broken to
$U(1)_{\rm diag}$ at $T=0$ as discussed in section \ref{anomsbem},
is restored. This is because the two boundaries are disconnected and $\varphi^{(+)}$ and $\varphi^{(-)}$ can be shifted independently, unlike the case of the U-shaped configuration discussed in section \ref{anomsbem}.
\newpage
\section{Summary and discussion}
\label{summary}
This work dealt with level-changing defects in YM-CS field theory,
as realized holographically within the construction of~\cite{Fujita:2009kw}.
We found explicit solutions for the probe brane profiles dual to these defects,
providing a clear geometric understanding of their behavior under level-rank
duality.
After holographic renormalization, we computed the zero-momentum
correlation functions for operators transforming trivially under the
(ultraviolet) $SO(6)$ $R$-symmetry.
Our analysis shows that the system exhibits several interesting phenomena
including anomalies and (in the limit of infinite $N$) the spontaneous breaking
of global symmetries localized on the defects.
Systems with multiple defects furthermore exhibit interesting phase
transitions in which operators localized on defect pairs become
correlated or uncorrelated, depending on the relative separations of the
defects.
In the finite temperature case, we find that this phase transition has an
interesting structure as the temperature rises above the critical temperature
for the ($k=0$) confinement-deconfinement phase transition.
As we argued in section \ref{anomsbem}, the gapless edge mode
found in the 3-dimensional $U(1)$ DBI-CS theory on the probe D7 brane with
two boundaries corresponds to the Nambu-Goldstone mode associated
with the chiral symmetry breaking (an analog of the pion) in large $N$
2-dimensional QCD with one massless flavor. This observation suggests
interesting relations between the physics of the FQHE and 2-dimensional
QCD. In fact, there is a direct correspondence between these two
seemingly unrelated theories, because both of them are
governed by $U(1)$ CS theory at low energies: the effective theory
of mesons in 2-dimensional QCD is given by 3-dimensional DBI-CS theory
on a D7 brane \cite{Yee:2011yn}, while the $U(1)$ CS theory (for the
statistical gauge field) is
an effective theory of the Laughlin states of the FQHE.
The particle that
couples to the statistical $U(1)$ gauge field
with the unit charge is the quasiparticle (or quasihole) of the FQH state,
and should correspond to the end point of a fundamental
string attached to the D7 brane;
in 2-dimensional QCD, this is interpreted as an external quark.
Since the CS level is $N$, the quasiparticle
carries an electric charge $1/N$, corresponding to
the baryon number charge of the quark.
Therefore, the electron (an object with unit electric charge) in the FQH state corresponds to the baryon in 2-dimensional QCD.
It would be interesting to investigate this correspondence in more
detail.
We offered further evidence that in the IR limit the model becomes
non-abelian CS theory with level-changing defects, and thus
resembles (the non-Abelian generalization of) the FQHE in the presence
of defects (or edges). Not only does the IR
theory exhibit a gap in the
bulk between the defects, but the Wilson loop evaluated in the bulk between
the defects exhibits the topological (perimeter law) behavior
expected of a CS theory when the CS level is non-vanishing, and
confining (area law) behavior expected of pure YM theory when the CS
level vanishes.
This suggests a number of interesting further questions.
The first regards the Hall response.
This was computed in \cite{Fujita:2009kw} in the absence of defects,
both in field theory and its holographic dual.
However, the physical Hall current should actually be carried by the
edge modes, being localized on the defects.
It would be interesting to verify that this edge current is correctly
reproduced by our system in the presence of a background electric field.
Another is how flux attachment, recently discussed in two different
holographic setups in
\cite{Jokela:2013hta,Jokela:2014wsa,Lippert:2014jma}, is realized in
the setup considered here.
Condensed matter physicists have discussed a variety of
experimental setups that
can probe the charge and statistics of the gapless quasiparticle
excitations at the edge of FQH samples.
In the simplest setup, an electric voltage applied between the two edges
of a FQH sample leads,
at zero temperature, to tunneling of quasiparticle excitations between
the edges.
Assuming that the edges are described by 1-dimensional Luttinger liquids
with Luttinger exponent $g$, the
tunneling current responds non-linearly to the applied voltage as $I_t
\sim V_t^{2g-1}$ for non-resonant, and $I_t \sim V_t^{g-1}$
for resonant, tunneling \cite{Wen:1991ty,Kan13449,Wen:1993zz}.
The temperature dependence of the tunneling conductivity is determined
by the same exponents \cite{Wen:1993zz}.\footnote{The tunneling effect arises only with the assistance of impurities or other interactions that can absorb the momentum mismatch along the edge direction, because electrons on two different edges generally carry different momenta.} It would be interesting to calculate the
tunneling current and
conductivities directly in our holographic setup. This could either be
done directly by applying an
electric field between our defects, or via the retarded correlator of
the relevant quasiparticles on the edge
\cite{Wen:1993zz}.\footnote{\label{Footnote23}According to
\cite{Wen:1993zz,Wen}, if the two edges are separated by vacuum, it is
electrons that are tunneling, and if the separation is by the FQH state,
the relevant excitations are the quasiparticles and -holes
themselves. We hence have to identify these in our model first.} There
is also a third way, employing the retarded correlator of the tunneling
operator between the edges \cite{Wen}. Of course, in
order to be consistent all these three approaches should yield the same
result. We hope to return to the calculation of the tunneling response in
the near future \cite{futurework}.
The non-trivial correlations for the dimension five operator between
distinct edges of the D7 branes found in \eqref{VEV616}
exhibit a behavior that differs between
the cases of defects separated by the YM vacuum ($k=0$) and by a
QH state ($k\ne 0$):
correlations between insertions of the dimension 5 operator at different
edges are non-trivial (to leading order in $N$ and $\lambda$) if and
only if the two edges are connected by a D7 brane in the holographic dual.
But the edges being connected by a D7 brane means that there is
a nontrivial YM-CS vacuum between them, while edges not
connected by any D7 brane are separated by the confining YM vacuum.
It will be interesting to
analyze the implications of this observation for other observables
(such as \emph{e.g.} the chiral condensate) associated to defect pairs
in our model.
Tunneling experiments can also distinguish, in the AC response,
between different non-Abelian statistics
at the same filling fraction (which in most cases determines the
Luttinger exponent $g$)~\cite{Wen:1993zz}.
Another very elegant experimental setup, the two point-contact
interferometer, was proposed in \cite{Fradkin:1997ge}. In this setup,
quasi-holes can interfere along two interfering paths of a quantum
interferometer, with quasi-holes tunneling from one path to the other at
two point contacts (similar to Josephson junctions). The setup is then
equivalent to an Aharonov-Bohm type experiment, except that the
quasiholes can not only feel the quanta of magnetic flux inside the
closed loop their path is tracing, but also the non-trivial
self-statistics they have with quasiholes inserted in the loop. By
dialing the flux quanta and the number of quasiholes in the
interferometer, one can access both the effective charge and statistics
of the quasiholes. In this way, using the two
point-contact interferometer, one can measure the VEV of closed Wilson
lines with non-Abelian statistics \cite{Fradkin:1997ge}, and ultimately
the Jones polynomial. In the holographic setup, the VEV of Wilson loops
is derived from the minimal surface of the string worldsheet ending at a
prescribed closed curve on the
boundary~\cite{Rey:1998ik,Maldacena:1998im}.\footnote{To compute such
VEV holographically, we need to specify the boundary
condition on the minimal surface at intersecting points with
D7 branes.} It would be interesting to carry out such a calculation in
our model.
We hope to return to this and other interesting aspects of the
model considered here
in the near future \cite{futurework}.
\section*{Acknowledgements}
We would like to thank Adi Armoni, Gerald V. Dunne, Ling-Yan Hung,
Kristan Jensen, Dmitri Kharzeev, Shiraz Minwalla,
Ioannis Papadimitriou, Shinsei Ryu, Kostas Skenderis, Tadashi
Takayanagi, Seiji Terashima,
and Hoo-Ung Yee for helpful discussions. The work of all authors was
supported in part by the World Premier International Research Center
Initiative (WPI), MEXT, Japan. The work of S.S. was supported in part by
JSPS KAKENHI Grant Number 24540259.
The work of R.M. was also supported in
part by the U.S. Department of Energy under Contract
No. DE-FG-88ER40388, as well as by the Alexander-von-Humboldt Foundation
through a Feodor Lynen postdoctoral fellowship.
The work of C.M.T. was supported in part by the 1000 Youth Fellowship program
and a Fudan University start-up grant.
M.F. is partially supported by the grants NSF-PHY-1521045 and NSF-PHY-1214341.
We also thank the Yukawa Institute for Theoretical Physics at Kyoto
University for hospitality. Discussions during the YITP workshop
``Developments in String Theory and Quantum Field Theory''
(YITP-W-15-12) were useful to complete this work.
\newpage
\section{Introduction}
\label{intro}
The two-parameter log-Normal distribution, with probability density function,
\begin{equation}
p\left( x\right) =\frac{1}{\sqrt{2\pi }\sigma x}\exp \left[ -\frac{\left(
\ln x-\mu \right) ^{2}}{2\sigma ^{2}}\right] ,\qquad x>0, \label{log-dist}
\end{equation}%
has played a major role in the statistical characterisation of many data sets for several decades
(empirical fitting) and has been an inspiration for theoretical studies as
well. The form of Eq.~(\ref{log-dist})
has been derived in several ways, with particular emphasis on the works of Kapteyn \cite{kapteyn}, Gibrat's law of proportionate effect~\cite{gibrat}, the Theory of Breakage introduced by Kolmogorov~\cite{kolmogorov} or, more recently, the theory of chemical reactions~\cite{fa}. Concomitantly, Eq.~(\ref{log-dist}) has been
systematically modified to cope with different sets of data. Of those generalisations, the most famous is the truncated log-Normal distribution,%
\begin{equation}
p\left( x\right) =\frac{1}{\sqrt{2\pi }\sigma \left( x-\gamma \right) }\exp %
\left[ -\frac{\left( \ln \left( x-\gamma \right) -\mu \right) ^{2}}{2\sigma
^{2}}\right] ,\qquad 0<\gamma <x, \label{log-truncated}
\end{equation}%
which has become a mathematical object of study in itself.
In this manuscript we introduce an alternative generalisation of the
log-normal distribution which we will term the $q$\emph{-log Normal}
distribution for historical reasons. This probability density function emerges from replacing the traditional algebra in the Kapteyn dynamics by a modified multiplication operation independently introduced by Borges~\cite{borges} and Nivanen~\textit{et~al.}~\cite{nivanen}. This algebra has direct consequences for the emergence of asymptotic scale-free behaviour.
Specifically, in this manuscript we survey a new family of probability
density functions based on,%
\begin{equation}
p_{q}\left( x\right) =\frac{1}{\mathcal{Z}_{q}\,x^{q}}\exp \left[ -\frac{%
\left( \ln _{q}\,x-\mu \right) ^{2}}{2\,\sigma ^{2}}\right] ,\qquad \left(
x\geq 0\right) , \label{q-log}
\end{equation}%
where $\ln_{q}(x)$ represents a generalisation of the logarithm of base $e$
with the normalisation,%
\begin{equation}
\mathcal{Z}_{q}=\left\{
\begin{array}{ccc}
\sqrt{\frac{\pi }{2}}\mathrm{erfc}\left[ -\frac{1}{\sqrt{2}\sigma }\left(
\frac{1}{1-q}+\mu \right) \right] \sigma & if & q<1 \\
& & \\
\sqrt{\frac{\pi }{2}}\mathrm{erfc}\left[ \frac{1}{\sqrt{2}\sigma }\left(
\frac{1}{1-q}+\mu \right) \right] \sigma & if & q>1.%
\end{array}%
\right. ,
\end{equation}
where $$\mathrm{erfc}(x) \equiv 1-\mathrm{erf}(x)=2\left[1-\Phi (\sqrt{2}\,x)\right],$$ with $\Phi$ denoting the standard Normal cumulative distribution function, and
which in the limit $q\rightarrow 1^{\pm }$ exactly gives the traditional
log-Normal distribution. The aim of the present work is to introduce the functional
form of the distribution,
its dynamical origins and statistical features, as well as to apply it to data of biological and financial origin. The manuscript is organised as follows. In Sec.~\ref{preliminar} we give a historical and mathematical introduction to the underlying algebra; in Sec.~\ref{multiplicative} we reinterpret the Kapteyn scenario for the emergence of the log-Normal, but using the $q$-algebra formalism, which leads to the $q<1$, $q>1$, and double $q$-log-Normal probability density functions. In Secs.~\ref{exemplo} and \ref{randomnumber} we analyse their statistical properties and generate random variables according to the distribution. Finally, in Sec.~\ref{test} we introduce some real examples for which the new distribution is shown to be a worthy candidate for modelling the data.
\section{Preliminaries: the $q$-product}
\label{preliminar}
The $q$-product, $\otimes _{q}$, has its origins in the endeavour to
extend the subject of statistical mechanics to systems exhibiting anomalous
behaviour when compared to systems described at Boltzmann-Gibbs equilibrium,
\textit{i.e.}, to deal with systems presenting long-lasting correlations,
ageing phenomena, non-exponential sensitivity to initial conditions, and
scale-invariant occupancy of the allowed phase space (for detailed
explanation of these concepts see~\cite{applications,abe}). The proposed
extension of statistical mechanics theory is grounded on the entropic
functional%
\begin{equation}
S_{q}\equiv \frac{1-\int \left[ p\left( x\right) \right] ^{q}\,dx}{q-1}%
,\qquad \left( q\in
\mathbb{R}
\right) \label{ct-88}
\end{equation}%
(in its continuous and one-dimensional version) usually called Tsallis
entropy as well~\cite{ct-88}. This entropic form recovers the celebrated
Boltzmann-Gibbs-Shannon information measure,%
\begin{equation}
S=-\int p\left( x\right) \,\ln p\left( x\right) \,dx, \label{bg}
\end{equation}%
in the limit that the entropic parameter $q$ approaches $1$. The
interpretation of Eq. (\ref{ct-88}) as a $q$ generalisation of Eq.~(\ref{bg}%
) induced the introduction of analogue functions of the exponential and the
logarithm, namely, the $q$\emph{-exponential}
\begin{equation}
\exp _{q}\left( x\right) \equiv \left[ 1+\left( 1-q\right) \,x\right] ^{%
\frac{1}{1-q}},\qquad \left( x,q\in
\mathbb{R}
\right) ,
\end{equation}%
($\exp _{q}\left( x\right) =0$ if $1+(1-q)\,x\leq 0$) and its inverse the $q$%
\emph{-logarithm}~\cite{ct-quimica},
\begin{equation}
\ln _{q}\left( x\right) \equiv \frac{x^{1-q}-1}{1-q},\qquad \left( x>0,q\in
\mathbb{R}
\right) .
\end{equation}%
A functional form that generalises
the mathematical identity,
\begin{equation}
\exp \left[ \ln \,x+\ln \,y\right] =x\times y,\qquad \left( x,y>0\right) ,
\end{equation}%
for the $q$-product is,
\begin{equation}
x\otimes _{q}y\equiv \exp _{q}\left[ \ln _{q}\,x+\ln _{q}\,y\right] .
\label{q-product}
\end{equation}
For $q\rightarrow 1$, Eq.~(\ref{q-product}) recovers the usual
property,
\begin{equation*}
\ln \left( x\times y\right) =\ln \,x+\ln \,y
\end{equation*}%
($x,y>0$), with $x\times y\equiv x\otimes _{1}y$. Its inverse operation, the
$q$-division, $x\oslash _{q}y$, satisfies the following equality $\left(
x\otimes _{q}y\right) \oslash _{q}y=x$.
Bearing in mind that the $q$-exponential is a non-negative function, the $q$%
-product must be restricted to the values of $x$ and $y$ that respect the
condition,
\begin{equation}
\left\vert x\right\vert ^{1-q}+\left\vert y\right\vert ^{1-q}-1\geq0.
\label{cond-q-prod}
\end{equation}
Moreover, we can extend the domain of the $q$-product to negative values of $%
x$ and $y$ writing it as,
\begin{equation}
x\otimes_{q}y\equiv\mathrm{\ sign}\left( x\,y\right) \exp_{q}\left[ \ln
_{q}\,\left\vert x\right\vert +\ln_{q}\,\left\vert y\right\vert \right] .
\label{q-product-new}
\end{equation}
Regarding some key properties of the $q$-product we mention:
\begin{enumerate}
\item $x\otimes_{1}y=x\ y$;
\item $x\otimes_{q}y=y\otimes_{q}x$;
\item $\left( x\otimes_{q}y\right) \otimes_{q}z=x\otimes_{q}\left(
y\otimes_{q}z\right) =\left[ x^{1-q}+y^{1-q}+z^{1-q}-2\right] ^{\frac{1}{1-q}}$;
\item $\left( x\otimes_{q}1\right) =x$;
\item $\ln_{q}\left[ x\otimes_{q}y\right] \equiv\ln_{q}\,x+\ln_{q}\,y$;
\item $\ln_{q}\left( x\,y\right) =\ln_{q}\left( x\right) +\ln_{q}\left(
y\right) +\left( 1-q\right) \ln_{q}\left( x\right) \ln_{q}\left( y\right) $;
\item $\left( x\otimes_{q}y\right) ^{-1}=x^{-1}\otimes_{2-q}y^{-1}$;
\item $\left( x\otimes _{q}0\right) =\left\{
\begin{array}{ccc}
0 & & \mathrm{if\ }\left( q\geq 1\ \mathrm{and\ }x\geq 0\right) \ \mathrm{%
or\ if\ }\left( q<1\ \mathrm{and\ }0\leq x\leq 1\right) \\
& & \\
\left( x^{1-q}-1\right) ^{\frac{1}{1-q}} & & \mathrm{otherwise}%
\end{array}%
\right. $
\end{enumerate}
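As a concrete illustration of these definitions (our own sketch, not taken from the cited works), the following Python snippet implements $\exp_{q}$, $\ln_{q}$ and the $q$-product for positive arguments and checks property 5 together with the recovery of the ordinary product in the limit $q\rightarrow 1$.
\begin{verbatim}
import math

def exp_q(x, q):
    # q-exponential: [1 + (1-q) x]^(1/(1-q)), with the cut-off exp_q(x) = 0
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def ln_q(x, q):
    # q-logarithm: (x^(1-q) - 1) / (1-q), defined for x > 0
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_product(x, y, q):
    # q-product: exp_q[ ln_q(x) + ln_q(y) ] for x, y > 0
    return exp_q(ln_q(x, q) + ln_q(y, q), q)

x, y, q = 2.0, 3.0, 0.5
# Property 5: ln_q of the q-product equals ln_q(x) + ln_q(y)
assert abs(ln_q(q_product(x, y, q), q) - (ln_q(x, q) + ln_q(y, q))) < 1e-9
# q -> 1 recovers the ordinary product x * y = 6
print(q_product(x, y, 1.0))          # 6.0
print(q_product(x, y, 1.0 - 1e-6))   # approximately 6.0
\end{verbatim}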
For particular values of $q$, \textit{e.g.}, $q=1/2$, the $q$-product
provides nonnegative values at points for which the inequality $\left\vert
x\right\vert ^{1-q}+\left\vert y\right\vert ^{1-q}-1<0$ is verified.
According to the cut-off of the $q$-exponential, a value of zero for $%
x\otimes _{q}y$ is set down in these cases. Restraining our analysis of Eq.~(%
\ref{cond-q-prod}) to the sub-space $x,y>0$, we can observe that for $%
q\rightarrow -\infty $ the region $\left\{ 0\leq x\leq 1,0\leq y\leq
1\right\} $ is not defined. As the value of $q$ increases, the forbidden
region decreases its area, and when $q=0$, we have the limiting line given
by $x+y=1$, for which $x\otimes _{0}y=0$. Only for $q=1$ does the entire set of real values of $x$ and $y$ have a defined value for the $q$-product. For $q>1$%
,\ the condition (\ref{cond-q-prod}) implies a region, $\left\vert
x\right\vert ^{1-q}+\left\vert y\right\vert ^{1-q}=1$ for which the $q$%
-product diverges. This undefined region augments its area as $q$ goes to
infinity. When $q=\infty $, the $q$-product is only defined in $\left\{
x\geq 0,0\leq y\leq 1\right\} \cup \left\{ 0\leq x\leq 1,y>1\right\} $.
Illustrative plots are presented in Fig.\ (1) of~\cite{part1}.
From the properties presented above we ascertain that the $q$-product has a neutral element and, under restrictions, opposite and inverse elements. However, the distributive property does not hold, which prevents the $q$-product from endowing the reals with a commutative ring or field structure. This does not diminish the importance of the algebraic structure: other algebras, such as the tropical algebra~\cite{tropical}, also lack some of the standard properties, and the $q$-product represents a quite rare case of a two-sided non-distributive structure~\cite{green}.
Besides its inherent elegance, this generalisation has found its own field of applicability in the definition of the $q$-Fourier transform~\cite{sabir}, which plays a key part in the non-linear, $q$-generalised Central Limit Theorem~\cite{clt}, in the definition of a modified method of characteristics that allows the full analytical solution of the porous medium equation~\cite{sudq}, and in the structure of Pascal-Leibniz triangles~\cite{thierry}.
\section{Multiplicative processes as generators of distributions}
\label{multiplicative}
Multiplicative processes, particularly stochastic multiplicative processes,
have been the source of plentiful models applied in several fields of
science and knowledge. In this context, we can name the study of fluid
turbulence \cite{turbulence}, fractals \cite{feder}, finance \cite%
{mandelbrot}, linguistics \cite{murilinho}, etc. Specifically,
multiplicative processes play a very important role in the emergence of the
log-Normal distribution as a natural and ubiquitous distribution.
With regard to the dynamical origins of the log-Normal
distribution, we have mentioned in Sec. \ref{intro} the most celebrated
examples. Now, we shall give a brief account of Kapteyn's process. To that end, let us
consider a variable $\tilde{Z}$ obtained from a multiplicative random
process,%
\begin{equation}
\tilde{Z}=\prod\limits_{i=1}^{N}\tilde{\zeta}_{i}, \label{product}
\end{equation}%
where $\tilde{\zeta}_{i}$ are nonnegative microscopic variables associated
with a distribution $f^{\prime }\left( \tilde{\zeta}\right) $. If we
consider the following change of variables $Z\equiv \ln \tilde{Z}$, then we
have,%
\begin{equation*}
Z=\sum\limits_{i=1}^{N}\zeta _{i},
\end{equation*}%
with $\zeta \equiv \ln \tilde{\zeta}$. Assume now that $\zeta $ has a distribution $f\left( \zeta \right) $ with
mean $\mu $ and variance $\sigma ^{2}$. Then, $Z$ converges to the
Gaussian distribution in the limit of $N$ going to infinity as entailed by
the Central Limit Theorem~\cite{araujo}. Explicitly, considering that the
variables $\zeta $ are independently and identically distributed, the
Fourier Transform of $p\left( Z^{\prime }\right) $ is given by,
\begin{equation}
\mathcal{F}\left[ p\left( Z^{\prime }\right) \right] \left( k\right) =\left[
\int_{-\infty }^{+\infty }e^{i\,k\,\frac{\zeta }{N}}\,f\left( \zeta \right)
\,d\zeta \right] ^{N}, \label{fourier1}
\end{equation}%
where $Z^{\prime }=N^{-1}Z$. For all $N$, the integrand can be expanded as,%
\begin{equation}
\begin{array}{c}
\mathcal{F}\left[ p\left( Z^{\prime }\right) \right] \left( k\right) =\left[
\sum\limits_{n=0}^{\infty }\frac{\left( ik\right) ^{n}}{n!}\frac{%
\left\langle \zeta ^{n}\right\rangle }{N^{n}}\right] ^{N}, \\
\\
\mathcal{F}\left[ p\left( Z^{\prime }\right) \right] \left( k\right) =\exp
\left\{ N\ln \left[ 1+ik\frac{\left\langle \zeta \right\rangle }{N}-\frac{1}{%
2}k^{2}\frac{\left\langle \zeta ^{2}\right\rangle }{N^{2}}+O\left(
N^{-3}\right) \right] \right\} ,%
\end{array}%
\end{equation}%
where $\left\langle \zeta ^{n}\right\rangle$ represents the $n$th order raw moment of $\zeta $.
Expanding the logarithm,%
\begin{equation}
\mathcal{F}\left[ P\left( Z^{\prime }\right) \right] \left( k\right) \approx
\exp \left[ ik\mu -\frac{1}{2N}k^{2}\sigma ^{2}\right] .
\end{equation}%
Applying the inverse Fourier Transform, and reverting the $Z^{\prime }$
change of variables we finally obtain,%
\begin{equation}
p\left( Z\right) =\frac{1}{\sqrt{2\,\pi \,N}\sigma }\exp \left[ -\frac{%
\left( Z-N\,\mu \right) ^{2}}{2\,\sigma ^{2}\,N}\right] .
\end{equation}%
We can define the attracting distribution in terms of the original
multiplicative random process which yields the usual log-Normal distribution
\cite{log-normal-book},%
\begin{equation}
p\left( \bar{Z}\right) =\frac{1}{\sqrt{2\,\pi \,N}\sigma \,\bar{Z}}\exp %
\left[ -\frac{\left( \ln \bar{Z}-N\,\mu \right) ^{2}}{2\,\sigma ^{2}\,N}%
\right] .
\end{equation}
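As a simple numerical illustration of this limit (our own sketch, with an arbitrarily chosen factor distribution), the snippet below multiplies $N$ i.i.d.\ positive factors as in Eq.~(\ref{product}) and checks that $\ln \tilde{Z}$ is approximately Normal with mean $N\mu $ and variance $N\sigma ^{2}$, i.e.\ that $\tilde{Z}$ is approximately log-Normal.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, samples = 200, 20_000

# Positive microscopic factors; uniform on (0.5, 1.5) purely for illustration.
zeta = rng.uniform(0.5, 1.5, size=(samples, N))
log_zeta = np.log(zeta)
mu, sigma2 = log_zeta.mean(), log_zeta.var()

logZ = log_zeta.sum(axis=1)          # ln of the product in Eq. (product)
print("mean of ln Z:", logZ.mean(), " expected ~", N * mu)
print("var  of ln Z:", logZ.var(),  " expected ~", N * sigma2)
\end{verbatim}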
Although this distribution with two parameters, $\mu $ and $\sigma $, is
able to appropriately describe a large variety of data sets, there are cases
for which the log-Normal distribution fails statistical testing~\cite{log-normal-book}.
In some of these cases, such a failure has been overcome
by introducing different statistical distributions (e.g., Weibull
distributions~\cite{weibull, weibull1, weibull2}) or by changing the 2-parameter log-Normal
distribution into a 3-parameter log-Normal distribution~\cite{yuang, finney},%
\begin{equation}
p\left( x\right) =\frac{1}{\sqrt{2\,\pi }\sigma \,\left( x-\theta \right) }%
\exp \left[ -\frac{\left( \ln \left[ x-\theta \right] -\mu \right) ^{2}}{%
2\,\sigma ^{2}}\right] ,
\end{equation}%
which is very well characterised in the current scientific literature \cite%
{cohen}.
\subsection{One side generalisations}
Moving ahead, we now present our alternative procedure to generalise the distribution in
Eq.~(\ref{log-dist}). The motivation for this proposal comes from replacing the $N$ products in Eq.~(\ref{product}) by $N$ $q$-products,%
\begin{equation}
\tilde{Z}=\underset{}{\prod\limits_{i=1}^{N}}^{(q)}\tilde{\zeta}_{i}\equiv
\tilde{\zeta}_{1}\otimes _{q}\tilde{\zeta}_{2}\otimes _{q}\ldots \otimes _{q}%
\tilde{\zeta}_{N}. \label{p-product}
\end{equation}%
Applying the $q$-logarithm we have a sum of $N$ terms. If every term is identically and independently distributed, then for variables $\zeta_{i}=\ln _{q}\tilde{\zeta}_{i}$ with finite variance we have the Gaussian as the stable distribution~\footnote{Stable in the sense that, if we consider the addition of two variables with that distribution, the outcome of the convolution of the probability density functions is a probability density function with exactly the same functional form.}, \textit{i.e.}, a Gaussian distribution in the $q$%
-logarithm variable. From this scenario we can obtain our $q$\emph{-log
Normal probability density function},%
\begin{equation}
p_{q}\left( x\right) \equiv \frac{1}{\mathcal{Z}_{q}\,x^{q}}\exp \left[ -%
\frac{\left( \ln _{q}\,x-\mu \right) ^{2}}{2\,\sigma ^{2}}\right] ,\qquad
\left( x\geq 0\right) , \label{qlog-normal}
\end{equation}%
with the normalisation,%
\begin{equation}
\mathcal{Z}_{q}\equiv \left\{
\begin{array}{ccc}
\sqrt{\frac{\pi }{2}}\mathrm{erfc}\left[ -\frac{1}{\sqrt{2}\sigma }\left(
\frac{1}{1-q}+\mu \right) \right] \sigma & if & q<1 \\
& & \\
\sqrt{\frac{\pi }{2}}\mathrm{erfc}\left[ \frac{1}{\sqrt{2}\sigma }\left(
\frac{1}{1-q}+\mu \right) \right] \sigma & if & q>1.%
\end{array}%
\right.
\end{equation}%
In the limit of $q$ equal to $1$, $\ln _{q\rightarrow 1}x=\ln x$ and $%
\mathcal{Z}_{q\rightarrow 1}=\sqrt{2\,\pi }\sigma $ and the usual log-Normal
is recovered. The cumulative distribution,
\begin{equation*}
\mathcal{P}\left( x\right) \equiv \int_{0}^{x}p\left( z\right) \,dz,
\end{equation*}%
is given by the following expressions,%
\begin{equation*}
\mathcal{P}_{q>1}\left( x\right) =\frac{1+\mathrm{erf}\left[ \frac{\ln
_{q}\left( x\right) -\mu }{\sqrt{2}\sigma }\right] }{1+\mathrm{erf}\left[
\frac{\frac{1}{q-1}-\mu }{\sqrt{2}\sigma }\right] },
\end{equation*}%
and,%
\begin{equation*}
\mathcal{P}_{q<1}\left( x\right) =\frac{\mathrm{erf}\left[ \frac{\ln _{q}\left( x\right) -\mu }{\sqrt{2}\sigma }\right] -\mathrm{erf}\left[ -\frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu \right) \right] }{\mathrm{erfc}\left[ -\frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu \right) \right] }.
\end{equation*}%
Typical plots for cases with $q=\frac{4}{5}$, $q=1$, $q=\frac{5}{4}$
are depicted in Fig.~\ref{fig-pdf}. It can be seen that for $q$ greater than one the likelihood of events around the peak, as well as of large values, is greater than for the log-Normal case, whereas the case $q<1$ favours events of small value and the intermediate regime between the peak and the tail.
\begin{figure}[tbh]
\includegraphics[width=0.45\columnwidth,angle=0]{dist.eps}
\includegraphics[width=0.45\columnwidth,angle=0]{distloglinear.eps}
\includegraphics[width=0.45\columnwidth,angle=0]{distloglog.eps}\\[0pt]
\caption{Plots of the Eq. (\protect\ref{qlog-normal}) \textit{vs} $x$ for $q=%
\frac {4}{5}$ (dotted line), $q=1$ (full line) and $q=\frac{5}{4}$ (dashed
line) in linear-linear scale (upper), log-linear (right), log-log (lower).}
\label{fig-pdf}
\end{figure}
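Before turning to the moments, a short numerical check (our own sketch, using standard scientific Python tools) of the density (\ref{qlog-normal}) and of the normalisation $\mathcal{Z}_{q}$ can be useful: the integral of $p_{q}$ over $(0,\infty )$ should return one both for $q<1$ and for $q>1$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def ln_q(x, q):
    return np.log(x) if q == 1 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def Z_q(q, mu, sigma):
    a = (1.0 / (1.0 - q) + mu) / (np.sqrt(2.0) * sigma)
    return np.sqrt(np.pi / 2.0) * sigma * (erfc(-a) if q < 1 else erfc(a))

def p_q(x, q, mu, sigma):
    return np.exp(-(ln_q(x, q) - mu)**2 / (2.0 * sigma**2)) \
           / (Z_q(q, mu, sigma) * x**q)

mu, sigma = 0.0, 1.0
for q in (0.8, 1.25):
    total, _ = quad(p_q, 0.0, np.inf, args=(q, mu, sigma), limit=200)
    print(f"q = {q}: integral of p_q over (0, inf) = {total:.6f}")   # ~ 1
\end{verbatim}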
The raw statistical moments,%
\begin{equation}
\left\langle x^{n}\right\rangle \equiv \int_{0}^{\infty }x^{n}\,p\left(
x\right) \,dx, \label{momentos}
\end{equation}%
can be analytically computed for $q<1$ giving \cite{gradshteyn},%
\begin{equation}
\left\langle x^{n}\right\rangle =\frac{\Gamma \left[ \nu \right] \exp \left[
-\frac{\gamma ^{2}}{8\,\beta }\right] D_{-\nu }\left[ \frac{\gamma }{\sqrt{%
2\,\beta }}\right] }{\sqrt{\beta ^{\nu }\,\pi }\,\sigma \left( 1-q\right)
\mathrm{erfc}\left[ -\frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu
\right) \right] }, \label{raw moment}
\end{equation}%
with%
\begin{equation}
\beta =\frac{1}{2\sigma ^{2}\left( 1-q\right) ^{2}};\quad \gamma =-\frac{%
1+\mu \,\left( 1-q\right) }{\left( 1-q\right) ^{2}\,\sigma ^{2}};\quad \nu
=1+\frac{n}{1-q},
\end{equation}%
where $D_{-a}\left[ z\right] $ is the \emph{parabolic cylinder function}
\cite{paraboliccylinderd}. Equation (\ref{raw moment}) allows us to write
the Fourier Transform or the generating function as,
\begin{equation}
\begin{array}{ccc}
\varphi \left( k\right) & = & \int p\left( x\right) e^{ikx}dx \\
& & \\
& = & \sum\limits_{n=0}^{\infty }\frac{\Gamma \left[ \nu _{n}\right] \exp %
\left[ -\frac{\gamma ^{2}}{8\,\beta }\right] D_{-\nu _{n}}\left[ \frac{%
\gamma }{\sqrt{2\,\beta }}\right] }{\sqrt{\beta ^{\nu _{n}}\,\pi }\,\sigma
\left( 1-q\right) \mathrm{erfc}\left[ -\frac{1}{\sqrt{2}\sigma }\left( \frac{%
1}{1-q}+\mu \right) \right] }(\mathrm{i}k)^{n}.%
\end{array}
\label{mom-gen}
\end{equation}
For $q>1$, the raw moments are given by an expression quite similar to the
Eq. (\ref{raw moment}) with the argument of the erfc replaced by
\begin{equation*}
\frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu \right) .
\end{equation*}%
However, the finiteness of the raw moments is not guaranteed for every $q>1$
for two reasons. First, according to the definition of $D_{-\nu }\left[ z\right] $, $%
\nu $ must be greater than $0$. Second, the core of the probability density
function (\ref{qlog-normal}),
\begin{equation*}
\exp \left[ -\frac{\left( \ln _{q}\,x-\mu \right) ^{2}}{2\,\sigma ^{2}}%
\right] ,
\end{equation*}%
does not vanish in the limit of $x$ going to infinity,%
\begin{equation}
\lim_{x\rightarrow \infty }\exp \left[ -\frac{\left( \ln _{q}\,x-\mu \right)
^{2}}{2\,\sigma ^{2}}\right] =\exp \left[ -\frac{\gamma ^{2}}{2}\right] .
\end{equation}%
This means that the limit $p\left( x\rightarrow \infty \right) =0$ is
introduced by the normalisation factor $x^{-q}$, which comes from redefining
the Normal distribution of variables,
\begin{equation}
y\equiv \ln _{q}x,
\end{equation}%
as the probability density function of variables $x$. Because of that, if the order $n$ of the moment satisfies $n \geq q-1$, then the integral (\ref{momentos}) diverges. This has severe repercussions on the adjustment procedures that can be applied.
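A quick numerical check of this criterion (our own sketch, reusing the function \texttt{p\_q} defined in the previous snippet) integrates $x^{n}p_{q}(x)$ for a value $q>1$: the result is stable for $n<q-1$ and unreliable or divergent otherwise.
\begin{verbatim}
from scipy.integrate import quad

mu, sigma, q = 0.0, 1.0, 3.5    # for q = 3.5 only n < 2.5 gives finite moments
for n in (1, 2, 3):
    val, err = quad(lambda x: x**n * p_q(x, q, mu, sigma),
                    0.0, np.inf, limit=200)
    tag = "finite" if n < q - 1 else "divergent (quad output unreliable)"
    print(f"n = {n}: numerical moment = {val:.4g}  [{tag}]")
\end{verbatim}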
\subsection{Two side generalisation}
As visible from Fig.~\ref{fig-pdf}, our generalisation modifies the tail behaviour for small and large values of the variable, depending on the value of $q$ that enters the $q$-product and describes the dependence between the variables being combined.
actually defined by a mixture of different laws of formation, some simpler
than others. Within this context, dual relations namely,%
\begin{eqnarray*}
q^{\prime } &=&2-q, \\
q^{\prime } &=&\frac{1}{2-q},
\end{eqnarray*}%
wherefrom property 7 of the $q$-product (see Sec. \ref{preliminar}) emerges,
are very inviting in the way that they represent the mapping of a certain rule
onto another which seems to be different at first but for which there is
actually a univocal transformation. Accordingly, we can imagine a scenario
in which variables follow two distinct paths either $q$-multiplying or $(2-q)
$-multiplying (corresponding to the inverse of the $q$-product) according to
some proportions $f$ and $f^{\prime }=1-f$. This proposal is in fact quite
plausible if we bear in mind a few of the many examples of mixing in dynamical
processes. From that, we establish the law,%
\begin{equation}
p_{q,2-q}\left( x\right) =f\,p_{q}\left( x\right) +\,f^{\prime
}\,p_{2-q}\left( x\right) , \label{two-branched}
\end{equation}%
for which we hold that $f=f^{\prime }=\frac{1}{2}$ is the most paradigmatic
case.
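In code, the two-branched law is a one-line mixture (again our own sketch, reusing \texttt{p\_q} from the snippet above):
\begin{verbatim}
def p_two_branched(x, q, mu, sigma, f=0.5):
    # Eq. (two-branched): mixture of the q- and (2-q)-log-Normal densities
    return f * p_q(x, q, mu, sigma) + (1.0 - f) * p_q(x, 2.0 - q, mu, sigma)
\end{verbatim}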
\subsection{Alternative interpretation}
The $q$-log-Normal distribution can introduce another clear advantage.
Namely, it provides us with a \emph{natural} and dynamical interpretation of the
truncated Normal distribution~\cite{johnsonkotz}. In other words, we can
look at the left(right) truncated Normal distribution,%
\begin{equation}
\mathcal{G}_{b}\left( y\right) =\frac{1}{\sigma}\sqrt{\frac{2}{\pi}}\,\text{\textrm{erfc}}\left[ \text{\textrm{sgn}}\left( b\right) \frac{1}{\sqrt{2}\sigma }\left( \mu -b\right) \right] ^{-1}\exp \left[ -\frac{\left( y-\mu \right)^{2}}{2\,\sigma ^{2}}\right] ,
\end{equation}%
in which the truncation factor,
\begin{equation}
b=\frac{1}{q-1}, \label{truncation}
\end{equation}%
and $y=\ln _{q}x$ are intimately related to the value of $q$ which controls
the $q$-product part of the dynamical process. In this case the Fourier Transform can be analytically
determined. For left truncations we obtain,%
\begin{equation}
\mathcal{F}\left[ \mathcal{G}_{b}\right] \left( k\right) =\frac{1+\text{%
\textrm{erf\thinspace }}\left[ -\mathrm{i}\frac{k\sigma }{\sqrt{2}}-B\right]
}{\text{\textrm{erfc}}\left[ B\right] }\exp \left[ -\frac{1}{2}k\left( 2\,%
\mathrm{i}\,\mu +k\,\sigma ^{2}\right) \right] ,
\end{equation}%
where $\mathrm{erf}(x) \equiv 2\,\Phi (\sqrt{2}\,x)-1$. For right-truncations,%
\begin{equation}
\mathcal{F}\left[ \mathcal{G}_{b}\right] \left( k\right) =\frac{\text{%
\textrm{erfc\thinspace }}\left[ -\mathrm{i}\frac{k\sigma }{\sqrt{2}}-B\right]
}{\text{\textrm{erfc}}\left[ -B\right] }\exp \left[ -\frac{1}{2}k\left( 2\,%
\mathrm{i}\,\mu +k\,\sigma ^{2}\right) \right] ,
\end{equation}%
with%
\begin{equation}
B=\frac{b-\mu }{\sqrt{2}\sigma }.
\end{equation}%
\section{Examples of cascade generators}
\label{exemplo}
In this section, we discuss the upshot of two simple cases in which the
dynamical process described in the previous section is applied. We are going
to verify that the value of $q$ influences the nature of the attractor in
probability space.
\subsection{Compact distribution $[0,b]$}
First, let us consider a compact distribution for identically and
independently distributed variables $x$ within the interval $0$ and $b$.
Following what we have described in the preceding section, we can transform
our generalised\ multiplicative process into a simple additive process of $%
y_{i}$ variables which are now distributed in conformity with the
distribution,%
\begin{equation}
p^{\prime}\left( y\right) =\frac{1}{b}\left[ 1+\left( 1-q\right) y\right] ^{%
\frac{q}{1-q}}, \label{q-uniform}
\end{equation}
with $y$ defined between $\frac{1}{q-1}$ and $\frac{b^{1-q}-1}{1-q}$ if $q<1$%
, whereas $y$ ranges over the interval between $-\infty$ and $\frac{b^{1-q}-1%
}{1-q}$ when $q>1$. Some curves for the special case $b=2$ are plotted in
Fig.~\ref{fig-flat}.
\begin{figure}[tbh]
\includegraphics[width=0.75\columnwidth,angle=0]{pdfcd.eps}
\caption{Plots of the Eq. (\protect\ref{q-uniform}) \textit{vs} $y$ for $b=2$
and the values of $q$ presented in the text. }
\label{fig-flat}
\end{figure}
If we look at the variance of this independent variable,%
\begin{equation}
\sigma _{y}^{2}=\left\langle y^{2}\right\rangle -\left\langle y\right\rangle ^{2},
\end{equation}%
which is the moment whose finiteness plays the leading role in the Central Limit Theorem, we verify that it is finite only for $q<\frac{3}{2}$, where it takes the value%
\begin{equation}
\sigma _{y}^{2}=\frac{b^{2-2\,q}}{\left( 3-2\,q\right) \left( 2-q\right) ^{2}%
}.
\end{equation}%
Hence, if $q<\frac{3}{2}$, we can apply Lyapunov's Central Limit Theorem
and our attractor in the probability space is the Gaussian distribution. On
the other hand, if $q>\frac{3}{2}$, the L\'{e}vy-Gnedenko version of the
central limit theorem \cite{levy}\ asserts that the attracting distribution
is a L\'{e}vy distribution with a tail exponent,
\begin{equation}
\alpha =\frac{1}{q-1}.
\end{equation}%
Furthermore, it is simple to verify that the interval $\left( \frac{3}{2}%
,\infty \right) $ of $q$ values\ maps onto the interval $\left( 0,2\right) $
of $\alpha $ values, which is precisely the interval of validity of the L%
\'{e}vy class of distributions that is defined by its Fourier transform,%
\begin{equation}
\mathcal{F}\left[ L_{\alpha }\right] \left( k\right) =\exp \left[
-a\,\left\vert k\right\vert ^{\alpha }\right] . \label{eq-levy}
\end{equation}%
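The variance criterion separating the two attractors can be checked numerically (our own sketch): for $x$ uniform on $[0,b]$ the variance of $y=\ln _{q}x$ agrees with the closed form above and remains finite only for $q<\frac{3}{2}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def ln_q(x, q):
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

b = 2.0
for q in (0.5, 1.2, 1.4):
    m1, _ = quad(lambda x: ln_q(x, q) / b, 0.0, b)
    m2, _ = quad(lambda x: ln_q(x, q)**2 / b, 0.0, b)
    closed = b**(2.0 - 2.0*q) / ((3.0 - 2.0*q) * (2.0 - q)**2)
    print(f"q = {q}: numerical variance = {m2 - m1**2:.5f}, "
          f"closed form = {closed:.5f}")
\end{verbatim}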
In Fig.~\ref{fig-gen} we depict some sets generated by this process for
different values of $q$.
\begin{figure}[tbh]
\includegraphics[width=0.55\columnwidth,angle=0]{generator.eps} %
\includegraphics[width=0.55\columnwidth,angle=0]{generatorloglinear.eps}
\caption{Sets of random variables generated from the process (\protect\ref%
{p-product}) with $N=100$ and $q=-\frac{1}{2}$ (green), $0$ (red), $\frac{1}{%
2}$ (blue), $1$ (black), $\frac{5}{4}$ (magenta) in linear (upper panel) and
log scales (lower panel). The generating variable is uniformly distributed
within the interval $\left[ 0,1\right] $ as is the same for all of the cases
that we present. As visible, the value of $q$ deeply affects the values of $%
X_{N}=\tilde{Z}$. }
\label{fig-gen}
\end{figure}
\subsection{$q$-log Normal distribution}
In this example, we consider the case of generalised multiplicative
processes in which the variables follow a $q$-log Normal distribution. In
agreement with what we have referred to in Sec. \ref{multiplicative}, the
outcome strongly depends on the value of $q$. Consequently, in the
associated $x$ space, if we apply the generalised process to $N$ variables $%
y=\ln _{q}\,x$ ($x\in \left[ 0,\infty \right) $) which follows a
Gaussian-like functional form \footnote{%
Strictly speaking, we cannot use the term Gaussian distribution because it
is not defined in the interval $\left( -\infty ,\infty \right) $. The
limitations in the domain do affect the Fourier transform and thus the
result of the convolution of the probability density function.} with average
$\mu $ and finite standard deviation $\sigma $, \textit{i.e.}, for all $q<1$ or $q>3$ in Eq.~(\ref{qlog-normal}), the resulting distribution in
the limit of $N$ going to infinity corresponds to the probability density
function (\ref{qlog-normal}) with $\mu \rightarrow N\,\mu $ and $\sigma
^{2}\rightarrow N\,\sigma ^{2}$. In respect of the conditions of $q$ we have
just mentioned here above, the $q$-log normal can be seen as an asymptotic
attractor, a stable attractor for $q=1$, and an unstable distribution for
the remaining cases with the resulting attracting distribution being
computed by applying the convolution operation.
\section{Random number generation and testing}
\label{randomnumber}
The generation of random numbers is by itself a subject of study and, depending on the distribution, different kinds of strategies can be used, ranging from the shrewd von Neumann-Buffon acceptance-rejection method~\cite{vonNeuman} to more sophisticated techniques~\cite{Gentle}. Since we aim to introduce a global portrait of the distribution we have not tried to develop bespoke algorithms, but applied the robust method of the Smirnov transformation (or inverse transform sampling). We start from a classic robust generator of uniform numbers, $\left\{ z\right\} $, between $-1$ and $1$, and consider the conservation of probability when $z$ is transformed into $y$, where $y$ is associated with a truncated Normal distribution with parameters $\mu $ and $\sigma $ and truncation point $b$ given by Eq.~(\ref{truncation}). For $q<1$, i.e., for $y=\ln _{q}\,x$ between $b$ and $+\infty $ we must use,%
\begin{equation}
y=\mu +\sqrt{2}\sigma \text{\thinspace \textrm{erf}\thinspace }^{-1}\left[
\infty ,\frac{1}{2}\left( z-1\right) \text{\textrm{erfc}}\left[ B\right] %
\right] , \label{generetormaior}
\end{equation}%
whereas for $q>1$, i.e., for $y=\ln _{q}\,x$ between $-\infty $ and $b$%
\begin{equation}
y=\mu +\sqrt{2}\sigma \text{\thinspace \textrm{erf}\thinspace }^{-1}\left[ 0,%
\frac{1}{2}\left\{ z\left( \text{\textrm{erf\thinspace }}\left[ B\right]
+1\right) +\text{\textrm{erf\thinspace }}\left[ B\right] -1\right\} \right] .
\label{generatormenor}
\end{equation}%
From these formulae we have defined the Kolmogorov-Smirnov distance tables
that we present for typical cases $q=4/5$ and $q=5/4$ with $\mu =0$ and $%
\sigma =1$. For each case $10^{6}$ samples have been considered.
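As a cross-check of the tables, and as an alternative to the inverse-transform formulas above, one can also sample by simple rejection: draw $y$ from a Gaussian with mean $\mu $ and standard deviation $\sigma $, keep only draws in the admissible range, and map back with the $q$-exponential. The sketch below is ours, not the generator used for the tables, and assumes the truncation point $b=1/(q-1)$, i.e., the boundary of the support of $\exp _{q}$:
\begin{verbatim}
expq(y, q) = (1 + (1 - q)*y)^(1/(1 - q))      # q-exponential, q != 1

function sample_qlognormal(mu, sigma, q, b)
    while true
        y = mu + sigma*randn()
        ok = q < 1 ? (y > b) : (y < b)        # admissible range of y = ln_q(x)
        ok && return expq(y, q)
    end
end

b  = 1/(5/4 - 1)                              # truncation point for q = 5/4
xs = [sample_qlognormal(0.0, 1.0, 5/4, b) for k in 1:10^5]
\end{verbatim}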
\begin{table}
\caption{Quantiles of the Kolmogorov-Smirnov statistics of a q-log-Normal
distribution with $q=4/5$, $\protect\mu =0$ and $\protect\sigma =1$.}
\label{tab1}
\centering
\fbox{%
\begin{tabular}{*{6}{c}}
\hline
$n$ $\diagdown $ $P$ & 0.80 & 0.85 & 0.90 & 0.95 & 0.99 \\ \hline\hline
5 & 0.442 & 0.471 & 0.504 & 0.558 & 0.663 \\
10 & 0.318 & 0.339 & 0.362 & 0.404 & 0.485 \\
15 & 0.261 & 0.277 & 0.296 & 0.333 & 0.399 \\
20 & 0.211 & 0.225 & 0.245 & 0.274 & 0.334 \\
25 & 0.192 & 0.205 & 0.222 & 0.249 & 0.302 \\
30 & 0.176 & 0.188 & 0.204 & 0.228 & 0.278 \\
35 & 0.164 & 0.175 & 0.190 & 0.212 & 0.257 \\
40 & 0.154 & 0.165 & 0.178 & 0.200 & 0.242 \\
45 & 0.146 & 0.156 & 0.169 & 0.189 & 0.230 \\
50 & 0.140 & 0.149 & 0.161 & 0.180 & 0.219 \\
60 & 0.128 & 0.139 & 0.148 & 0.165 & 0.199 \\
70 & 0.119 & 0.127 & 0.138 & 0.154 & 0.186 \\
80 & 0.112 & 0.121 & 0.1301 & 0.144 & 0.175 \\
90 & 0.106 & 0.113 & 0.122 & 0.135 & 0.164 \\
100 & 0.101 & 0.108 & 0.115 & 0.129 & 0.156 \\
$n>100$ & 1.02 $n^{-1/2}$ & 1.17 $n^{-1/2}$ & 1.30 $n^{-1/2}$ & 1.42 $%
n^{-1/2}$ & 1.56 $n^{-1/2}$ \\ \hline
\end{tabular}}%
\end{table}
\begin{table}
\caption{Quantiles of the Kolmogorov-Smirnov statistics of a q-log-Normal
distribution with $q=5/4$, $\protect\mu=0$ and $\protect\sigma = 1$.}
\label{tab2}
\centering
\fbox{%
\begin{tabular}{*{6}{c}}
\hline
$n$ $\diagdown$ $P$ & 0.80 & 0.85 & 0.90 & 0.95 & 0.99 \\ \hline\hline
5 & 0.382 & 0.413 & 0.454 & 0.513 & 0.627 \\
10 & 0.286 & 0.307 & 0.334 & 0.377 & 0.461 \\
15 & 0.246 & 0.262 & 0.282 & 0.317 & 0.384 \\
20 & 0.204 & 0.218 & 0.237 & 0.277 & 0.327 \\
25 & 0.189 & 0.202 & 0.218 & 0.246 & 0.299 \\
30 & 0.174 & 0.186 & 0.202 & 0.225 & 0.276 \\
35 & 0.161 & 0.174 & 0.188 & 0.213 & 0.256 \\
40 & 0.155 & 0.165 & 0.178 & 0.204 & 0.242 \\
45 & 0.146 & 0.156 & 0.169 & 0.189 & 0.229 \\
50 & 0.137 & 0.148 & 0.162 & 0.183 & 0.217 \\
60 & 0.128 & 0.138 & 0.148 & 0.165 & 0.201 \\
70 & 0.118 & 0.127 & 0.138 & 0.154 & 0.186 \\
80 & 0.111 & 0.121 & 0.1301 & 0.143 & 0.175 \\
90 & 0.107 & 0.113 & 0.122 & 0.135 & 0.164 \\
100 & 0.099 & 0.107 & 0.115 & 0.128 & 0.155 \\
$n>100$ & 1.01 $n ^{-1/2}$ & 1.15 $n ^{-1/2}$ & 1.28 $n ^{-1/2}$ & 1.41 $n
^{-1/2}$ & 1.57 $n ^{-1/2}$ \\ \hline
\end{tabular}}%
\end{table}
\section{Examples of applicability}
\label{test}
In the following examples, parameter estimation has been carried out using
traditional maximum log-likelihood methods. Although we have used Brent's
method~\cite{brent} to optimise the log-likelihood function, the following
set of equations can be solved instead if a differential method is preferred:
\begin{equation}
\left\{
\begin{array}{c}
\sum\limits_{i=1}^{n}\frac{d}{dq}\ln P\left( x_{i}\right) =0 \\
\sum\limits_{i=1}^{n}\frac{d}{d\mu }\ln P\left( x_{i}\right) =0 \\
\sum\limits_{i=1}^{n}\frac{d}{d\sigma }\ln P\left( x_{i}\right) =0%
\end{array}%
\right. .
\end{equation}%
The specific equations can be obtained after straightforward (and tedious)
calculus.
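In practice, one can also fit the parameters by direct numerical maximisation of the log-likelihood. The sketch below is purely illustrative: for brevity it uses the ordinary log-Normal log-density as a stand-in, and for the $q$-log-Normal one would substitute the density of Eq.~(\ref{qlog-normal}) and add $q$ to the search grid.
\begin{verbatim}
# Stand-in log-density (ordinary log-Normal); replace with Eq. (qlog-normal).
loglik(data, mu, sigma) = sum([-log(x) - log(sigma) - 0.5*log(2pi) -
                               (log(x) - mu)^2/(2sigma^2) for x in data])

# Coarse grid search for the maximum-likelihood parameters.
function fit_grid(data)
    best = (-Inf, 0.0, 1.0)
    for mu in -2.0:0.01:2.0, sigma in 0.05:0.01:3.0
        ll = loglik(data, mu, sigma)
        ll > best[1] && (best = (ll, mu, sigma))
    end
    best                       # (max log-likelihood, mu-hat, sigma-hat)
end

data = [exp(-0.45 + 0.74*randn()) for i in 1:649]     # synthetic sample
fit_grid(data)
\end{verbatim}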
\subsection{Shadow prices in metabolic networks}
The representation of metabolic networks is often related to linear
programming approaches~\cite{palsson} for which there is a dual optimisation
procedure. In other words, the maximisation of the reaction fluxes of a
metabolic network with a given stoichiometry matrix has as its dual
solution the minimisation of a certain function defined by quantities traditionally called
\textit{shadow prices}, $\Pi $, which for this case correspond to the chemical
potential~\cite{pbw}. In a previous study the shape of the distribution of the shadow
prices was analysed, and from the set of tested PDFs the log-Normal proved to
give the best description.
Our first example is composed of shadow prices of the genome-scale model for
\textit{E. coli} (iJR 904) growing on a D-glucose substrate \cite{reed,kummel,reed2}. The number of
shadow prices is $649$. Maximising the log-likelihood function, we
obtained $\mu =-0.432$, $\sigma =0.838$ and $q=1.21$, in comparison with $\mu
=-0.454$ and $\sigma =0.741$ for the log-Normal. The corresponding
Kolmogorov-Smirnov distances are $0.072$ and $0.123$, respectively, as we
depict in Fig.~\ref{Ecoli}. Other qualitatively similar results, with $q>1$, are
found for the shadow prices of models grown under aerobic conditions.
\begin{figure}[tbh]
\includegraphics[width=0.75\columnwidth,angle=0]{Ecoli.eps}
\caption{Cumulative distribution function of the shadow prices vs shadow price for
the metabolic network of \textit{E. coli} (iJR 904) growing on a D-glucose
substrate. The symbols are obtained from the data and the lines are the best
fits with the q-log-Normal and the log-Normal distributions. The values of the
parameters and errors are mentioned in the text.}
\label{Ecoli}
\end{figure}
A different kind of distribution was obtained when a metabolic network such as
that of \textit{M. barkeri} (iAF 692 model) growing in a hydrogen medium was
considered. In this case $517$ metabolites are taken into account. The
values of the best fit obtained were $\mu =-0.633$, $\sigma =1.24$ and $%
q=0.822$ in comparison with $\mu =-0.454$ and $\sigma =0.741$ for the
log-Normal.
\begin{figure}[tbh]
\includegraphics[width=0.75\columnwidth,angle=0]{Mbarkeri.eps}
\caption{Cumulative distribution function of the shadow prices vs shadow price for
the metabolic network of \textit{M. barkeri} (iAF 692 model) growing in a hydrogen
medium \cite{feist}. The symbols are obtained from the data and the lines are the best fits
with the q-log-Normal and the log-Normal distributions. The values of the
parameters and errors are mentioned in the text.}
\label{MBarkeri}
\end{figure}
These parameters yield Kolmogorov-Smirnov distances of $0.049$
and $0.080$, respectively, which represents a clear improvement. Moreover, the introduction
of the extra parameter $q$ is completely justified when we calculate the
Akaike information criterion (AIC) \cite{aic},%
\begin{equation*}
AIC=2\,k+n\ln \left[ \frac{RSS}{n}\right] ,
\end{equation*}%
where $k$ is the number of parameters, $n$ is the number of metabolites of
the metabolic network and $RSS$ is the residual sum of squares. The values
of AIC per metabolite are $-6.288$ and $-5.607$ for the $q$-log-Normal and
the log-Normal in the case of the \textit{E. coli}, respectively. For the
\textit{M. barkeri} the values are $-7.474$ and $-6.682$. This clearly
indicates that the $q$-log-Normal outperforms the log-Normal distribution,
which had previously given the best results. From a biological perspective it is even more
appealing that metabolic networks developed in an aerobic environment
present a value of $q>1$, whereas networks related to anaerobic environments
yield $q$ values smaller than $1$. Hence, we can infer that the value of $q$ can
possibly be used as a signature of aerobic or anaerobic growth.
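For completeness, the per-metabolite criterion used above reduces to a one-line computation (our reading of the quoted values as $\mathrm{AIC}/n$):
\begin{verbatim}
# k = number of fitted parameters (3 for the q-log-Normal, 2 for the log-Normal)
# n = number of metabolites, rss = residual sum of squares of the fit
aic(k, n, rss)           = 2k + n*log(rss/n)
aic_per_point(k, n, rss) = aic(k, n, rss)/n
\end{verbatim}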
\subsection{Volatility in financial markets}
One of the keystone elements of mathematical finance is volatility.
Despite appearing in every theory of financial markets, volatility still
lacks a precise definition~\cite{engle}. Nonetheless, it is
customarily associated with the average of the squared fluctuations,%
\begin{equation*}
r_{\Delta }\left( t\right) \equiv \ln S\left( t+\Delta \right) -\ln S\left(
t\right) ,
\end{equation*}%
of the (log-)price (or index) $S$ over some window $T$ ($\Delta $ is the lag).
It has long been known that price fluctuations are nicely fitted by
the Student's $t$-distribution. An explanation for this relies on the local
Gaussianity of the price distributions with a time-dependent variance, as
posited by heteroskedastic processes~\cite{engle}. In that
sense, we can consider a variable $\mathcal{B}\left( t\right) =v\left(
t\right) ^{-1}$, where%
\begin{equation}
v\left( t\right) =\frac{1}{T}\sum_{i=1}^{T}r^{2}\left( t-i\right) .
\label{volatility}
\end{equation}%
Accordingly, following a Bayesian approach, the distribution of price
fluctuations reads%
\begin{equation}
p\left( r\right) =\int P\left( \mathcal{B}\right) \,p\left( r|\mathcal{B}\right)
\,d\mathcal{B}. \label{superstatistics}
\end{equation}%
Assuming the Student's $t$-distribution hypothesis for $p\left( r\right) $ and
\begin{equation}
p\left( r|\mathcal{B}\right) =\sqrt{\frac{\mathcal{B}}{2\pi }}\exp [-%
\mathcal{B}\,r^{2}], \label{localgauss}
\end{equation}%
the distribution of $\mathcal{B}$ must be,%
\begin{equation}
P_{\Gamma }\left( \mathcal{B}\right) =\frac{\theta ^{-1-\kappa }}{\Gamma %
\left[ 1+\kappa \right] }\mathcal{B}^{\kappa }\exp [-\frac{\mathcal{B}}{%
\theta }]. \label{p-gamma}
\end{equation}%
In this case, we have analysed the distribution of $\mathcal{B}$ according
to the definition given in Eq. (\ref{volatility}) using the daily
fluctuations of the SP500 index from the $3^{rd}$ January 1950 to the $3^{rd}$
April 2009, considering 5-business-day windows, with data
obtained from {\tt http://finance.yahoo.com} (see Fig.~\ref{volatilitySP}).
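The quantities entering this analysis are straightforward to compute. A minimal sketch (ours, with a synthetic price path standing in for the SP500 closing prices) is:
\begin{verbatim}
S = cumprod([1.0; [exp(0.0005 + 0.01*randn()) for t in 1:2000]])  # price path
r = [log(S[t+1]) - log(S[t]) for t in 1:length(S)-1]              # log-returns

T = 5                                                  # 5-business-day window
v = [sum([r[t-i]^2 for i in 1:T])/T for t in T+1:length(r)]   # Eq. (volatility)
B = [1/vt for vt in v]                                 # B(t) = v(t)^(-1)
\end{verbatim}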
\begin{figure}[tbh]
\includegraphics[width=0.75\columnwidth,angle=0]{volatilidade-graf.eps}
\caption{Evolution of the five-day volatility of the SP500 index as defined
in the text after normalisation by its average value. Values above the dashed
red line can be considered extreme events.}
\label{volatilitySP}
\end{figure}
Regarding $P(\mathcal{B})$, we have noticed that the distribution
is poorly described by Eq.~(\ref{p-gamma}). Instead, we verified a very
good agreement with Eq.~(\ref{two-branched}), as we show in Fig.~\ref{pdf-vol}.
The values obtained for $P_{\Gamma }\left( \mathcal{B}\right) $ are $%
\kappa =0.314$ and $\theta =1.41$, and for $p_{q,2-q}\left( \mathcal{B}%
\right) $ we have $\mu =0.391$, $\sigma =1.15$ and $q=1.22$. The corresponding
Kolmogorov-Smirnov distances are $0.0959$ and $0.0126$, the latter passing
the statistical test at the $\alpha =2\%$ level. A representation of the probability
density function adjustment is shown in Fig.~\ref{pdf-vol}. Although not
shown here, a log-Normal adjustment yielded $\mu =0.379$ and $\sigma
=1.121$ with a Kolmogorov-Smirnov distance of $0.0177$, which is $40\%$
greater than the Kolmogorov-Smirnov distance of the $q$-log-Normal. The
utilisation of two values of $q$ (although they are related to one another) can be
understood if we accept the tested hypothesis that volatility runs over two
mechanisms (short and long scale) \cite{bouchaud}. Nevertheless, it is
worth mentioning that applying Eqs.~(\ref{superstatistics})-(\ref%
{p-gamma}) with the values we have determined brings about a Student-$t$
distribution with $\nu =2.64$, which is in accordance with the value measured
for the tail exponent of that distribution (see e.g. \cite{smdqQF}).
\begin{figure}[tbh]
\includegraphics[width=0.75\columnwidth,angle=0]{pdf-vol.eps}
\caption{Probability density function of the 5-day volatility vs $\mathcal{B}$.
The symbols are obtained from the data and the lines are the best fits with
the Gamma distribution and the double-sided q-log-Normal. The values of the
parameters and errors are mentioned in the text.}
\label{pdf-vol}
\end{figure}
\section{Final remarks}
In this manuscript we have introduced a new kind of generalisation of the
log-Normal distribution which stems from a modification of the multiplicative
cascade using a new type of algebra recently introduced in a physical
context. This modification, based on the $q$-product, represents, as shown in
other cases, a way of describing a type of dependence between the variables.
Accordingly, the new distribution differs from the classical log-Normal by a
single parameter $q$, which favours the right-side tail for $q>1$, favours the
left-side tail for $q<1$, and recovers the traditional form when $q=1$. We
have made an extensive description of the distribution, namely by defining
its moments and its Fourier transform, and we have also presented a random
number generator that yields the distributions mentioned here. Using this
random number generator we have described the construction of $P$-value
tables for the Kolmogorov-Smirnov distance when the $q$-log-Normal
distribution is assumed. Moreover, we have tested the distribution against
real data of biological and financial origin. Both applications have shown its
usefulness and, curiously, all the cases we have studied present values of $q$
or $2-q$ close to $5/4$. Concerning future work, we can mention the
modification of the two-branched distribution to accommodate weights for $f$
and $f^{\prime }$ different from the equal weights we have considered here,
using different dual relations for the $q$ parameters, or even parameters that
are not related by any dual relation. This last approach corresponds to
accepting different mixtures of dynamical or structural processes. It is
obvious that such modifications augment the number of parameters, which
might nonetheless be plainly justified by the usual statistical criteria.
\bigskip
{\small SMDQ acknowledges P.B. Warren for having provided the shadow prices data,
T. Cox for several comments on the subject matter and the critical reading of the manuscript
and M.A. Naeeni for preliminary discussions.
This work benefited from financial support from the Marie Curie Fellowship programme (European
Union).}
\section{Scientific computing languages: The Julia innovation}
The original numerical computing language was Fortran, short for
``Formula Translating System'', released in 1957. Since those
early days, scientists have dreamed of writing high-level, generic
formulas and having them translated automatically into low-level,
efficient machine code, tailored to the particular data types they
need to apply the formulas to. Fortran made historic strides towards
realization of this dream, and its dominance in so many areas of
high-performance computing is a testament to its remarkable success.
The landscape of computing has changed dramatically over the years.
Modern scientific
computing environments such as Python~\cite{numpy}, R~\cite{Rlang},
Mathematica~\cite{mathematica}, Octave~\cite{Octave},
Matlab~\cite{matlab}, and SciLab~\cite{scilab}, to name a few, have grown in popularity
and fall under the general category
known as {\it dynamic languages} or {\it dynamically typed languages}.
In these programming
languages, programmers write simple, high-level code without any
mention of types like \code{int}, \code{float} or \code{double} that
pervade {\it statically typed languages} such as C and Fortran.
Many researchers today do their day-to-day
work in dynamic languages. Still, C and Fortran remain the gold
standard for performance on computationally intensive problems.
Inasmuch as the dynamic language programmer has missed out on performance,
the C and Fortran programmer has missed out on productivity.
An unfortunate outcome of the currently popular languages is that the
most challenging areas of numerical computing have benefited the least
from the increased abstraction and productivity offered by higher
level languages.
The
consequences have been more serious than many
realize.
Julia's innovation is the very combination of productivity and performance.
New users want a quick explanation as to why Julia is fast, and
whether somehow the same ``magic dust" could also be sprinkled on
their traditional scientific computing language.
Julia is fast because of careful language design and
the right combination of technologies that work
very well with each other.
This paper demonstrates some of these technologies using a number of examples.
We invite the reader to follow along at \url{http://juliabox.org} using
Jupyter notebooks \cite{jupyter,jupyternature} or by downloading Julia \url{http://julialang.org/downloads}.
\subsection{Computing transcends communities}
Numerical computing research has always lived on the boundary of computer science, engineering, mathematics, and computational sciences.
Readers might enjoy trying to label the ``Top 10 algorithms"\cite{top10} by field, and may quickly
see that advances typically transcend any one field with broader impacts to science
and technology as a whole.
Computing is more than using an overgrown calculator. It is a cross-cutting communication medium.
Research into programming languages therefore breaks us out of our research boundaries.
The first decade of the 21st century saw a boost in such
research with the DARPA-funded High Productivity Computing Systems
projects and languages such as Chapel \cite{chapelref,chapelpaper}, Fortress \cite{fortressref,fortressspec},
and X10 \cite{x10,x10paper}. Also contemporaneous has been a growing acceptance of Python.
Up to around 2009 some of us were working on Star-P, an interactive high performance computing system
for parallelizing various dynamic programming languages. Some excellent resources on Star-P that
discuss these ideas are the Star-P user Guide~\cite{starpug}, the
Star-P Getting Started guide~\cite{starpstart}, and various papers~\cite{starpright,starpstudy,Husbands98inter,Choy04star-p:high}.
Julia continues our research into parallel computing, with the most
important lesson from our Star-P experience being that one cannot
design a high performance parallel programming system without a
programming language that works well sequentially.
\subsection{Julia architecture and language design philosophy}
Many popular dynamic languages were not designed with the goal of high
performance. After all, if you wanted really good performance you
would use a static language, or so the popular wisdom would say. Only with the increased need in the
day-to-day life of scientific programmers for simultaneous
productivity and performance in a single system has the need for
high-performance dynamic languages become pressing. Unfortunately,
retrofitting an existing slow dynamic language for high performance is
almost impossible \textit{specifically} in numerical computing
ecosystems. This is because numerical computing requires
performance-critical numerical libraries, which invariably depend on
the details of the internal implementation of the high-level language,
thereby locking in those internal implementation details.
For example, you can run Python code much faster than the standard
CPython implementation using the PyPy just-in-time compiler; but PyPy
is currently incompatible with NumPy and the rest of SciPy.
Another important point is that just because a program is available in
C or Fortran, it may not run efficiently from the high level language,
nor be easy to ``glue'' in. For example, when Steven Johnson tried to
include his C \verb+erf+ function in Python, he reported that Pauli
Virtanen had to write glue
code\footnote{\url{https://github.com/scipy/scipy/commit/ed14bf0}} to
vectorize the erf function over the native structures in Python in
order to get good performance. Johnson also had to write similar glue
code for Matlab, Octave, and Scilab. The Julia effort was, by
contrast, effortless.\footnote{Steven Johnson, personal
communication. See
\url{http://ab-initio.mit.edu/wiki/index.php/Faddeeva_Package}} As
another example, \verb+randn+, Julia's normal random number generator,
was originally based on calling \verb+randmtzig+, a C implementation.
It turned out later that a pure Julia implementation of the same code
actually ran faster, and it is now the default implementation.
Such ``glue'' code can often lead to poor performance, even
when the underlying libraries being called are themselves high performance.
The best path to a fast, high-level system for scientific and
numerical computing is to make the system fast enough that all of its
libraries can be written in the high-level language in the first
place. The JUMP.jl~\cite{jump} and Convex.jl~\cite{convexjl} packages are great examples of the
success of this approach---each library is written entirely in Julia and
uses many of the Julia language features described in this paper.
{\bf The Two Language Problem:} As long as the developers' language
is harder than the users' language, numerical computing will always
be hindered. This is an essential part of the design philosophy of Julia: all basic
functionality must be possible to implement in Julia---never force the
programmer to resort to using C or Fortran.
Julia solves the two language problem.
Basic
functionality must be fast: integer arithmetic, for loops, recursion,
floating-point operations, calling C functions, manipulating C-like
structs. While these are important for more than just numerical programs,
without them you certainly cannot write fast numerical code.
``Vectorization languages'' like Python+NumPy, R, and Matlab hide
their for loops and integer operations, but they are still there, inside the C
and Fortran, lurking behind the thin veneer. Julia removes this
separation entirely, allowing high-level code to ``just write a for
loop'' if that happens to be the best way to solve a problem.
We believe that the Julia programming language fulfills much of the
Fortran dream: automatic translation of formulas into efficient
executable code. It allows programmers to write clear, high-level,
generic and abstract code that closely resembles mathematical
formulas, as they have grown accustomed to in dynamic systems, yet
produces fast, low-level machine code that has traditionally only been
generated by static languages.
Julia's ability to combine these levels of performance and
productivity in a single language stems from the choice of a number of
features that work well with each other:
\begin{enumerate}
\item An expressive type system, allowing optional type
annotations (Section \ref{sec:types});
\item Multiple dispatch using these types to select implementations (Section \ref{sec:select});
\item Metaprogramming for code generation (Section \ref{subsection:macros});
\item A dataflow type inference algorithm allowing types of
most expressions to be inferred \cite{jeff:thesis,Bezanson:2012jf};
\item Aggressive code specialization against run-time types \cite{jeff:thesis,Bezanson:2012jf};
\item Just-In-Time (JIT) compilation \cite{jeff:thesis,Bezanson:2012jf} using the LLVM compiler framework~\cite{LLVM}, which is also used
by a number of other compilers such as Clang~\cite{clang} and
Apple's Swift~\cite{swift}; and
\item Julia's carefully written libraries that leverage the language
design, i.e., points 1 through 6 above (Section \ref{sec:lang}).
\end{enumerate}
Points 1, 2, and 3 above are especially relevant
for the human, and are the focus of this paper. For details about language
implementation and internals, such as points
4, 5, and 6, we direct the reader to our earlier
work~\cite{jeff:thesis,Bezanson:2012jf}. Point 7 brings everything together to build high performance computational libraries in Julia.
Although a sophisticated type system is made available to the
programmer, it remains unobtrusive in the sense that one is never
required to specify types, nor are type annotations necessary for
performance. Type information flows naturally through the program due to dataflow type inference.
In what follows, we describe the benefits of Julia's language design
for numerical computing, allowing programmers to more readily express
themselves and obtain performance at the same time.
\section{A taste of Julia}
\subsection{A brief tour}
\hspace*{.13in}
\begin{jinput}
A = rand(3,3) + eye(3) \sh {\# Familiar Syntax} \\
inv(A)
\end{jinput}
\begin{joutput}
3x3 Array\{Float64,2\}:\\
\hspace*{.02em} 0.698106 -0.393074 -0.0480912 \\
-0.223584 \hspace*{.02em} 0.819635 -0.124946 \\
-0.344861 \hspace*{.02em} 0.134927 \hspace*{.02em} 0.601952
\end{joutput}
The output from the Julia prompt says that $A$ is a two dimensional
matrix of size $3 \times 3$, and contains double precision floating
point numbers.
Indexing of arrays is performed with brackets, and
is 1-based. It is also possible to compute an entire array
expression and then index into it, without assigning the expression to
a variable:
\begin{jinput}
x = A[1,2] \\
y = (A+2I)[3,3] \sh{ \# The [3,3] entry of A+2I}
\end{jinput}
\begin{joutput}
2.601952
\end{joutput}
In Julia, \verb+I+ is a built-in representation of the identity
matrix,
without explicitly forming the identity matrix as is commonly done
using commands such as ``eye". (``eye",
a homonym of ``I", is used in such languages as Matlab, Octave,
Go's matrix library, Python's Numpy, and Scilab.)
Julia has symmetric tridiagonal matrices as a special type. For
example, we may define Gil Strang's favorite matrix (the second order
difference matrix) in a way that uses only $O(n)$ memory.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{cupcakes1.jpg}
\caption{Gil Strang's favorite matrix is {\tt strang(n) =
SymTridiagonal(2*ones(n),-ones(n-1)) } \newline Julia only stores
the diagonal and off-diagonal. (Picture taken in Gil Strang's
classroom.) }
\end{figure}
\begin{jinput}
strang(n) = SymTridiagonal(2*ones(n),-ones(n-1)) \\
strang(7)
\end{jinput}\begin{joutput}
7x7 SymTridiagonal\{Float64\}: \vspace{-.05in}
\begin{verbatim}
2.0 -1.0 0.0 0.0 0.0 0.0 0.0
-1.0 2.0 -1.0 0.0 0.0 0.0 0.0
0.0 -1.0 2.0 -1.0 0.0 0.0 0.0
0.0 0.0 -1.0 2.0 -1.0 0.0 0.0
0.0 0.0 0.0 -1.0 2.0 -1.0 0.0
0.0 0.0 0.0 0.0 -1.0 2.0 -1.0
0.0 0.0 0.0 0.0 0.0 -1.0 2.0
\end{verbatim}
\end{joutput}
A commonly used notation to express the solution $x$ to the equation $Ax=b$ is
\verb+A\b+.
If Julia knows that $A$ is a tridiagonal matrix,
it uses an efficient $O(n)$ algorithm:
\begin{jinput}
strang(8)\textbackslash ones(8)
\end{jinput}\begin{joutput}
8-element Array\{Float64,1\}: \vspace{-.1in}
\begin{verbatim}
4.0
7.0
9.0
10.0
10.0
9.0
7.0
4.0
\end{verbatim}
\end{joutput}
Note the \verb+Array{ElementType,dims}+ syntax.
In the above example, the elements are 64 bit floats or \verb+Float64+'s.
The \verb+1+ indicates it is a one dimensional vector.
Consider the sorting of complex numbers. Sometimes it is handy
to have a sort that generalizes the real sort. This can be done by
sorting first by the real part and, where there are ties, by the imaginary
part. Other times it is handy to use the polar representation, which sorts
by radius then angle.
By default, complex numbers are incomparable in Julia.
If a numerical computing language ``hard-wires" its sort to be one or the other,
it misses an opportunity. A sorting algorithm need not depend on details
of what is being compared or how it is being compared. One can abstract away these
details thereby reusing a sorting algorithm for many different situations. One can specialize
later. Thus alphabetizing strings, sorting
real numbers, or sorting complex numbers in two or more ways all run with the same code.
In Julia, one can turn a complex number \verb+w+ into an ordered pair
of real numbers (a tuple of length 2) such as the Cartesian form \verb+(real(w),imag(w))+
or the polar form \verb+(abs(w),angle(w))+. Tuples are then compared lexicographically
in Julia. The sort command takes an optional ``less-than" operator, \verb+lt+, which is used to compare
elements when sorting. Note the compact function definition syntax available
in Julia and used in the example below, which is of the form \verb+f(x,y,...) = <expression>+.
\begin{jinput}
\sh{\# Cartesian comparison sort of complex numbers} \\
complex\_compare1(w,z) = (real(w),imag(w)) < (real(z),imag(z)) \\
sort([-2,2,-1,im,1], lt = complex\_compare1 )
\end{jinput}\begin{joutput}
5-element Array\{Complex\{Int64\},1\}: \\
-2+0im \\
-1+0im \\
\hspace*{.02em} 0+1im \\
\hspace*{.02em} 1+0im \\
\hspace*{.02em} 2+0im
\end{joutput}
\begin{jinput}
\sh{\# Polar comparison sort of complex numbers} \\
complex\_compare2(w,z) = (abs(w),angle(w)) < (abs(z),angle(z)) \\
sort([-2,2,-1,im,1], lt = complex\_compare2)
\end{jinput}\begin{joutput}
5-element Array\{Complex\{Int64\},1\}: \\
\hspace*{.02em} 1+0im \\
\hspace*{.02em} 0+1im \\
-1+0im \\
\hspace*{.02em} 2+0im \\
-2+0im
\end{joutput}
\begin{comment}
Julia also provides an interface to the QR factorization,
known as a
``factorization object." The result of the
QR factorization from LAPACK\cite{lapack} is more compact and efficient
than the alternative \verb+qr+ command which is also available.
\end{comment}
\begin{comment}
\begin{jinput}
X=qrfact(rand(4,2))
\end{jinput}
\begin{joutput}
QRCompactWY\{Float64\} ( 4x2 Array\{Float64,2\}: \\
\hspace*{.1em} -1.4584 \hspace*{.5em} -1.38826 \\
\> 0.414351 \hspace*{.1em} 0.813698 \\
\> 0.420327 \hspace*{.1em} 0.830722 \\
\> 0.415317 -0.0241482,2x2 Array\{Float64,2\}: \\
1.31505 \hspace*{1.5em} -1.17218 \\
6.93424e-310 1.18295)
\end{joutput}
In the example above $X$ is not a matrix, it is an efficient QR
Factorization object. We can build the components
upon request. Here is \verb+R+
\begin{jinput}
X[:R]
\end{jinput}
\begin{joutput}
2x2 Array\{Float64,2\}: \\
-1.4584 -1.38826 \\
\hspace*{.02em} 0.0 \, 0.813698
\end{joutput}
One can use the factorization object to find the least squares
solution. The backslash operator for QR factorization objects has been
overloaded in Julia:
\begin{jinput}
b=rand(4) {\sh { \# One dimensional vector} }
\end{jinput}
\begin{joutput}
4-element Array\{Float64,1\}: \\
\> 0.299695 \\
\> 0.976353 \\
\> 0.688573 \\
\> 0.653433
\end{joutput}
\begin{jinput}
X\ / b {\sh{ \# Least squares solution is efficient due to packed format} }
\end{jinput}
\begin{joutput}
2-element Array \{Float64,1\}: \\
\> 0.985617 \\
\hspace*{.12em} -0.0529438
\end{joutput}
\end{comment}
To be sure, experienced computer scientists tend to suspect there is nothing new under the sun.
The C function \verb+qsort()+ takes a \verb+compar+ function. Nothing really new there.
Python also has custom sorting with a key. Matlab's sort is more basic.
The real contribution of Julia, as will be fleshed out further in this paper, is that
its design allows custom sorting to be both high performance and
flexible, comparable with the implementations in other dynamic
languages that are often written in C.
\begin{jinput}
Pkg.add("PyPlot") \sh{\# Download the PyPlot package} \\
using PyPlot \sh {\# load the functionality into Julia} \\
\\
for i=1:5 \\
\, y=cumsum(randn(500)) \\
\, plot(y) \\
end \\ \\
\includegraphics[width=3in]{myfig2.pdf}
\end{jinput}
\vspace{.2in}
The next example that we have chosen for the introductory taste of
Julia is a quick plot of Brownian motion, in two ways.
The first such example uses the Python Matplotlib package for graphics, which is popular among users coming from Python or Matlab.
The second example uses Gadfly.jl, another very popular package for plotting.
Gadfly was built by Daniel Jones completely in Julia and was influenced by the well-admired Grammar of Graphics
(see \cite{gg1} and \cite{gg2}).\footnote{See the tutorial at \code{http://gadflyjl.org}}
Many Julia users find Gadfly more flexible and
prefer its aesthetics. Julia plots can also be manipulated
interactively with sliders and buttons using Julia's Interact.jl package\footnote{https://github.com/JuliaLang/Interact.jl}.
The Interact.jl package page contains many examples of interactive visualizations\footnote{\code{https://github.com/JuliaLang/Interact.jl/issues/36}}.
\begin{jinput}
Pkg.add("Gadfly") \sh{\# Download the Gadfly package} \\
using Gadfly \sh {\# load the functionality into Julia} \\
\begin{verbatim}
n = 500
p = [layer(x=1:n, y=cumsum(randn(n)), color=[i], Geom.line)
for i in ["First","Second","Third"]]
labels=(Guide.xlabel("Time"),Guide.ylabel("Value"),
Guide.Title("Brownian Motion Trials"),Guide.colorkey("Trial"))
plot(p...,labels...)
\end{verbatim}
\end{jinput}
\vspace{.1in}
\includegraphics[width=4in]{gadflyplot.pdf}
The ellipsis on the last line above is known as the \verb+splat+ operator.
The elements of the vector \verb+p+ and the tuple \verb+labels+ are
inserted individually as arguments to the \verb+plot+ function.
\subsection{An invaluable tool for numerical integrity}
One popular feature of Julia is that it gives the user the ability to
``kick the tires" of a numerical computation. We thank Velvel Kahan
for the sage advice\footnote{Personal communication, January 2013, in the Kahan
home, Berkeley, California} concerning the importance of this feature.
The idea is simple: a good engineer tests his or her code for numerical stability.
In Julia this can be done by changing IEEE rounding modes.
There are five modes to choose from, yet most engineers silently use only the
default \verb+RoundNearest+ mode available in many numerical computing systems.
If a difference is detected, one can also run the computation in higher precision.
Kahan writes:
\begin{quotation}
Can the effects of roundoff upon a floating-point computation be assessed without submitting it to
a mathematically rigorous and (if feasible at all) time-consuming error-analysis? In general, No.
$\ldots$
Though far from foolproof, rounding every inexact arithmetic operation (but not
constants) in the same direction for each of two or three directions besides the
default To Nearest is very likely to confirm accidentally exposed hypersensitivity
to roundoff. When feasible, this scheme offers the best {\it Benefit/Cost }ratio.
\cite{kahan:mindless}
\end{quotation}
As an example, we round a 15x15 Hilbert-like matrix to double precision, and take the [1,1] entry of the inverse
computed in various roundoff modes. The radically different answers dramatically indicate
the numerical sensitivity to roundoff.
We even noticed that slight changes to LAPACK give radically different answers. Very likely
you will see different numbers when you run this code due to the very high sensitivity to roundoff errors.
\begin{jinput}
h(n)=[1/(i+j+1) for i=1:n,j=1:n]
\end{jinput}
\begin{joutput}
h (generic function with 1 method)
\end{joutput}
\begin{jinput}
H=h(15); \\
with\_rounding(Float64,RoundNearest) do \\
\, inv(H)[1,1] \\
end
\end{jinput}
\begin{joutput}
154410.55589294434
\end{joutput}
\begin{jinput}
with\_rounding(Float64,RoundUp) do \\
\, inv(H)[1,1] \\
end
\end{jinput}
\begin{joutput}
-49499.606132507324
\end{joutput}
\begin{jinput}
with\_rounding(Float64,RoundDown) do \\
\, inv(H)[1,1] \\
end
\end{jinput}
\begin{joutput}
-841819.4371948242
\end{joutput}
With 300 bits of precision, we obtain \\
\begin{jinput}
with\_bigfloat\_precision(300) do \\
\, inv(big(H))[1,1] \\
end
\end{jinput}
\begin{joutput}
-2.09397179250746270128280174214489516162708857703714959763232689047153\\ 50765882491054998376252e+03
\end{joutput}
Note this is the [1,1] entry of the inverse of the rounded Hilbert-like
matrix, not the inverse of the exact Hilbert-like matrix. Also, the \verb+Float64+ results
are sensitive to the BLAS\cite{blas} and LAPACK\cite{lapack} used, and may differ on
different machines with different versions of Julia.
For extended precision, Julia uses the MPFR library\cite{MPFR}.
\subsection{The Julia community}
Julia has been in development since 2009. A public release was
announced in February of 2012. It is an active open source project
with over 350 contributors and is available under the MIT
License~\cite{mitlicense} for open source software. Over 1.3 million
unique visitors have visited the Julia website since then, and Julia
has now been adopted as a teaching tool in dozens of universities
around the world\footnote{\url{http://julialang.org/community}}. While it was nurtured at the
Massachusetts Institute of Technology, it is really the contributions
from experts around the world that make it a joy to use for numerical computing. It is also
recognized as a general purpose computing language unlike
traditional numerical computing systems, allowing it to be used
not only to prototype numerical algorithms, but also to deploy those
algorithms, and even serve results to the
rest of the world. A great example of this is Shashi Gowda's Escher.jl
package\footnote{https://github.com/shashi/Escher.jl}, which makes it
possible for Julia programmers to build beautiful interactive websites
in Julia, and serve up the results of a Julia computation from the
web server, without any knowledge of HTML or javascript. Another such example is the Sudoku as a service\footnote{http://iaindunning.com/2013/sudoku-as-a-service.html}, by Iain Dunning,
where a Sudoku puzzle is solved using the
optimization capabilities of the JUMP.jl Julia package \cite{jump} and made
available as a web service. This is exactly why Julia is being increasingly deployed in production environments in businesses, as seen in various talks at JuliaCon\footnote{\url{http://www.juliacon.org}}. These use cases utilize not just Julia's capabilities for mathematical computation, but also its ability to build web APIs, access databases, and much more. Perhaps most significantly, a rapidly
growing ecosystem of over 600 open source, composable packages\footnote{\url{http://pkg.julialang.org}}, which includes a mix of libraries for mathematical as well as general purpose computing, is leading to the adoption of Julia in research, business, and government.
\section{Writing programs with and without types}
\label{sec:types}
\subsection{The balance between human and the computer}
\label{sec:humancomputer}
Graydon Hoare, author of the Rust programming
language~\cite{rust}, in an essay on ``Interactive Scientific
Computing''~\cite{hoareessay} defined programming languages
succinctly:
\begin{quote}
Programming languages are mediating devices, interfaces that try to
strike a balance between human needs and computer needs. Implicit in
that is the assumption that human and computer needs are equally
important, or need mediating.
\end{quote}
A program consists of data and operations on data. Data is not just
the input file, but everything that is held---an array, a list, a
graph, a constant---during the life of the program. The more the
computer knows about this data, the better it is at executing
operations on that data. Types are exactly this metadata. Describing
this metadata, the types, takes real effort for the human. Statically
typed languages such as C and Fortran are at one extreme, where all
types must be defined and are statically checked during the
compilation phase. The result is excellent performance. Dynamically
typed languages dispense with type definitions, which leads to greater
productivity, but lower performance as the compiler and the runtime
cannot benefit from the type information that is essential to produce
fast code. Can we strike a balance between the human's preference to
avoid types and the computer's need to know?
\subsection{Julia's recognizable types}
Many users of Julia may never need to know about types for performance.
Julia's type inference system often does the work, giving performance
without type declarations.
Julia's design allows for the gradual learning of concepts, where users
start in a manner that is familiar to them and over time, learn to
structure programs in the ``Julian way'' --- a term that captures
well-structured readable high performance Julia code. Julia users
coming from other numerical computing environments have a notion that
data may be represented as matrices that may be dense, sparse,
symmetric, triangular, or of some other kind. They may also, though
not always, know that elements in these data structures may be single
precision floating point numbers, double precision, or integers of a
specific width. In more general cases, the elements within data
structures may be other data structures. We introduce Julia's type
system using matrices and their number types:
\begin{jinput}
rand(1,2,1)
\end{jinput}
\begin{joutput}
1x2x1 Array\{Float64,3\}: \\
{}[ :, :, 1{}] = \\
\> 0.789166 0.652002
\end{joutput}
\begin{jinput}
{}[1 2; 3 4{}]
\end{jinput}
\begin{joutput}
2x2 Array\{Int64,2\}: \\
\> 1 2 \\
\> 3 4
\end{joutput}
\begin{jinput}
{}[true; false{}]
\end{jinput}
\begin{joutput}
2-element Array\{Bool,1\}: \\
\> true \\
\> false
\end{joutput}
We see a pattern in the examples above. \noindent
\verb+Array{T,ndims}+ is the general form of the type of a dense
array with \verb+ndims+ dimensions, whose elements themselves have
a specific type \verb+T+, which is double precision floating
point in the first example, a 64-bit signed integer in the second, and
a boolean in the third example.
Therefore \verb+Array{T,1}+ is a 1-d vector (first class
objects in Julia) with element type \verb+T+ and \verb+Array{T,2}+ is
the type for 2-d matrices.
It is useful to think of arrays as a
generic N-d object that may contain elements of any type
\verb+T+. Thus \verb+T+ is a type parameter for an array that can take
on many different values. Similarly, the dimensionality of the array
\verb+ndims+ is also a parameter for the array type. This generality
makes it possible to create arrays of arrays. For example, using
Julia's array comprehension syntax, we create a 2-element vector
containing $2\times2$ identity matrices.
\begin{jinput}
a = {}[eye(2) for i=1:2{}]
\end{jinput}
\begin{joutput}
2-element Array\{Array\{Float64,2\},1\}:
\end{joutput}
\noindent
\subsection{User's own types are first class too}
\label{sec:firstclass}
Many dynamic languages for numerical computing have traditionally had
an asymmetry, where built-in types have much higher
performance than any user-defined types. This is not the case with
Julia, where there is no meaningful distinction between user-defined
and ``built-in'' types.
We have mentioned so far a few number types and two matrix types,
\verb+Array{T,2}+ the dense array, with element type \verb+T+ and
\verb+SymTridiagonal{T}+, the symmetric tridiagonal with element type \verb+T+. There
are also other matrix types, for other structures including
SparseMatrixCSC (Compressed Sparse Columns), Hermitian, Triangular, Bidiagonal, and Diagonal. Julia's sparse matrix type
has an added flexibility that it can go beyond storing just numbers as
nonzeros, and instead store any other Julia type as well. The indices
in SparseMatrixCSC can also be represented as integers of any width
(16-bit, 32-bit or 64-bit). All these different matrix types, although
available as built-in types to a user downloading Julia, are
implemented completely in Julia, and are in no way any more or less
special than any other types one may define in their own program.
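As a small illustration of our own (not an example from the text), a sparse matrix can be built from (row, column, value) triplets; both the stored element type and the integer index width appear as type parameters of the resulting SparseMatrixCSC:
\begin{jinput}
\begin{verbatim}
S = sparse([1, 2, 4], [2, 3, 4], [1.0, 2.0, 3.0], 4, 4)
typeof(S)       # SparseMatrixCSC{Float64,Int64}
\end{verbatim}
\end{jinput}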
For demonstration, we create a symmetric arrow matrix type that
contains a diagonal and the first row \verb+A[1,2:n]+.
\begin{jinput}
\sh{ \# Type Parameter Example (Parameter T)} \\
\sh{\# Define a Symmetric Arrow Matrix Type with elements of type T} \\
type SymArrow\{T\} \\
\, dv::Vector\{T\} \sh{ \# diagonal }\\
\, ev::Vector\{T\} \sh{ \# 1st row{}[2:n{}] }\\
end \\ \\
\sh{ \# Create your first Symmetric Arrow Matrix} \\
S = SymArrow([1,2,3,4,5],[6,7,8,9])
\end{jinput}\begin{joutput}
\begin{verbatim}
SymArrow{Int64}([1,2,3,4,5],[6,7,8,9])
\end{verbatim}
\end{joutput}
The parameter in the array refers to the type of each element of the array.
Code can and should be written independently of the type of each element.
Later in Section \ref{sec:arrow}, we develop the symmetric arrow example much further.
The \verb+SymArrow+
matrix type contains two vectors, one each for the diagonal and the
first row, and these vectors contain elements of type \verb+T+. In the type definition,
the type \verb+SymArrow+ is parametrized by the type of the
storage element \verb+T+. By doing so, we have created a generic type,
which refers to a universe of all arrow matrices containing elements
of all types. The matrix \verb+S+ is an example where \verb+T+ is \verb+Int64+.
When we write functions in Section~\ref{sec:arrow}
that operate on arrow matrices, those functions themselves will be
generic and applicable to the entire universe of arrow matrices we
have defined here.
Julia's type system allows for abstract types, concrete ``bits''
types, composite types, and immutable composite types. All of these
can have parameters and users may even write programs using unions of
these different types. We refer the reader to the types chapter of the Julia
manual\footnote{See the chapter on types in the Julia manual:
\url{http://docs.julialang.org/en/latest/manual/types/}} for a full
description of Julia's type system.
\subsection{Vectorization: Key Strengths and Serious Weaknesses}
Users of traditional high level computing languages know that vectorization
improves performance. Do most users know exactly why vectorization
is so useful? It is precisely because, by vectorizing, the user has promised
the computer that the type of an entire vector of data matches the very first element.
This is an example where users are willing to provide type information to the
computer without even knowing that this is what they are doing.
Hence, it is an example of a strategy that balances the computer's needs with the human's.
From the computer's viewpoint, vectorization means that
operations on data happen largely in sections of the code where types
are known to the runtime system. When the runtime is operating on
arrays, it has no idea about the data contained in an array until it
encounters the array. Once encountered, the type of the data within
the array is known, and this knowledge is used to execute an
appropriate high performance kernel. Of course what really occurs at
runtime is that the system figures out the type, and gets to reuse
that information through the length of the array. As long as the
array is not too small, all the extra work in gathering type
information and acting upon it at runtime is amortized over the entire
operation.
The downside of this approach is that the user can achieve
high performance only with built-in types, and user defined types end
up being dramatically slower. The restructuring for vectorization is
often unnatural, and at times not possible. We illustrate this with an
example of the cumulative sum computation. Note that due to the size
of the problem, the computation is memory bound, and one does not observe
the complex-arithmetic case to be twice as slow as the real case,
even though it performs twice as many floating point operations.
\begin{jinput}
\sh {\# Sum prefix (cumsum) on vector w with elements of type T} \\
function prefix\{T\}(w::Vector\{T\})\\
\, for i=2:size(w,1)\\
\,\, w[i]+=w[i-1]\\
\, end\\
\, w\\
end
\end{jinput}
We execute this code on a vector of double precision numbers and
double-precision complex numbers and observe something
that may seem remarkable: similar running times.
\begin{jinput}
x = ones(1\_000\_000)\\
@time prefix(x) \\ \\
y = ones(1\_000\_000) + im*ones(1\_000\_000)\\
@time prefix(y);
\end{jinput}\begin{joutput}
elapsed time: 0.003243692 seconds (80 bytes allocated) \\
elapsed time: 0.003290693 seconds (80 bytes allocated)
\end{joutput}
\noindent
This simple example is difficult to vectorize, and hence is often
provided as a \verb+built-in+ function in many numerical computing
systems. In Julia, the implementation is very similar to the snippet
of code above, and runs at speeds similar to C. While Julia users can
write vectorized programs as in any other dynamic language,
vectorization is not a prerequisite for performance. This is because
Julia strikes a different balance between the human and the computer
when it comes to specifying types. Julia allows optional type
annotations, which are essential when writing libraries that utilize
multiple dispatch, but not for end-user programs that are exploring
algorithms or a dataset.
Generally, in Julia, type annotations are not for performance. They are purely for
code selection. (See Section \ref{sec:select}).
If the
programmer annotates their program with types, the Julia compiler will
use that information. But for the most part,
user code often includes minimal or no type annotations, and the Julia
compiler automatically infers the types.
\subsection{Type inference rescues ``for loops" and so much more}
\label{sec:inference}
A key component of Julia's ability to combine performance with
productivity in a single language is its implementation of
dataflow type
inference~\cite{graphfree},\cite{kaplanullman},\cite{Bezanson:2012jf}.
Unlike type inference algorithms for static languages, this algorithm
is tailored to the way dynamic languages work: the typing of code is
determined by the flow of data through it. The algorithm works by
walking through a program, starting with the types of its input
values, and ``abstractly interpreting'' it: instead of applying the
code to values, it applies the code to types, following all branches
concurrently and tracking all possible states the program could be in,
including all the types each expression could assume.
The dataflow type inference algorithm allows programs to be
automatically annotated with type bounds without forcing the
programmer to explicitly specify types. Yet, in dynamic languages it
is possible to write programs which inherently cannot be concretely
typed. In such cases, dataflow type inference provides what bounds it
can, but these may be trivial and useless---i.e. they may not narrow
down the set of possible types for an expression at all. However, the
design of Julia's programming model and standard library are such that
a majority of expressions in typical programs \textit{can} be
concretely typed. Moreover, there is a positive correlation between
the ability to concretely type code and that code being
performance-critical.
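One simple way to watch the inference at work is the standard \verb+code_typed+ function, which returns the type-annotated intermediate representation for given argument types (a small sketch of our own):
\begin{jinput}
\begin{verbatim}
f(x) = 2x + 1
code_typed(f, (Float64,))   # every intermediate expression inferred as Float64
code_typed(f, (Int,))       # the same source, now inferred as Int64 throughout
\end{verbatim}
\end{jinput}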
A lesson of traditional numerical computing languages is that one must learn to
vectorize to get performance. The mantra is that ``for loops" are bad and
vectorization is good. Indeed, one can find the mantra on p.~72 of the
``1998 Getting Started with Matlab manual'' (and other editions):
\begin{quotation}
Experienced Matlab users like to say ``Life is too short to spend writing for loops."
\end{quotation}
It is not that ``for loops'' are inherently slow by themselves. The
slowness comes from the fact that in the case of most dynamic
languages, the system does not have access to the types of the
variables within a loop. Since programs often spend much of their time
doing repeated computations, the slowness of a particular operation
due to lack of type information is magnified inside a loop. This leads
to users often talking about ``slow for loops'' or ``loop overhead''.
In statically typed languages, full type information
is always available at compile time, allowing
compilation of a loop into a few machine instructions. This is
not the case in most dynamic languages, where the types are discovered
at run time, and the cost of determining the types and selecting the
right operation can run into hundreds or thousands of
instructions.
Julia has a transparent performance model. For example, a
\verb+Vector{Float64}+, as in our example here, always has the same
in-memory representation as it would in C or Fortran; one can take a
pointer to the first array element and pass it to a C library function
using \verb+ccall+ and it will just work. The programmer knows exactly how
the data is represented and can reason about it. They know that a
\verb+Vector{Float64}+ does not require any additional heap
allocation besides the \verb+Float64+ values and that
arithmetic operations on these values will be machine arithmetic
operations. In the case of say, \verb+Complex128+, Julia stores
complex numbers in the same way as C or Fortran. Thus complex arrays
are actually arrays of complex values, where the real and imaginary
values are stored consecutively. Some systems have taken the path of
storing the real and imaginary parts separately, which leads to some
convenience for the user, at the cost of performance and
interoperability. With the \verb+immutable+ keyword, a programmer can
also define immutable data types, and enjoy the same benefits of
performance for composite types as for the more primitive number types
(bits types). This approach is being used to define many interesting
data structures such as small arrays of fixed sizes, which can have
much higher performance than the more general array data structure.
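As a small sketch of our own (not an example from the text), an immutable composite type with bits-type fields behaves like a C struct, and a vector of such values is stored inline:
\begin{jinput}
\begin{verbatim}
import Base: +

immutable Vec2
    x::Float64
    y::Float64
end

+(a::Vec2, b::Vec2) = Vec2(a.x + b.x, a.y + b.y)

v = [Vec2(rand(), rand()) for i in 1:1000]  # contiguous storage, no per-element heap allocation
\end{verbatim}
\end{jinput}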
The transparency of the C data and performance models has been one of
the major reasons for C's long-lived success. One of the design goals
of Julia is to have similarly transparent data and performance
models. With a sophisticated type system and type inference, Julia
achieves both.
\section{Code selection: Run the right code at the right time}
\label{sec:select}
Code selection or code specialization from one point of view is the
opposite of code reuse enabled by abstraction. Ironically, viewed
another way, it enables abstraction. Julia allows users to overload
function names, and select code based on argument types. This can
happen at the highest and lowest levels of the software stack. Code
specialization lets us optimize for the details of the case at hand.
Code abstraction lets calling codes, probably those not yet even
written or perhaps not even imagined, work all the way through on
structures that may not have been envisioned by the original
programmer.
We see this as the ultimate realization of the famous 1908 quip that
\begin{quote}
Mathematics is the art of giving the same name to different things.
\end{quote}
by noted mathematician Henri Poincar\'{e}.\footnote{ A few versions of
this quote are relevant to Julia's power of abstractions and
numerical computing. They are worth pondering:
\begin{quote}
It is the harmony of the different parts, their symmetry, and their
happy adjustment; it is, in a word, all that introduces order, all
that gives them unity, that enables us to obtain a clear
comprehension of the whole as well as of the parts. Elegance may
result from the feeling of surprise caused by the unlooked-for
occurrence of objects not habitually associated. In this, again, it
is fruitful, since it discloses thus relations that were until then
unrecognized. {\bf Mathematics is the art of giving the same names to
different things.}
\end{quote}
http://www.nieuwarchief.nl/serie5/pdf/naw5-2012-13-3-154.pdf.
and
\begin{quote}
One example has just shown us the importance of terms in mathematics;
but I could quote many others. It is hardly possible to believe what
economy of thought, as Mach used to say, can be effected by a
well-chosen term. I think I have already said somewhere that {\bf
mathematics is the art of giving the same name to different
things}. It is enough that these things, though differing in
matter, should be similar in form, to permit of their being, so to
speak, run in the same mould. When language has been well chosen, one
is astonished to find that all demonstrations made for a known object
apply immediately to many new objects: nothing requires to be
changed, not even the terms, since the names have become the same.
\end{quote}
{\tt
http://www-history.mcs.st-andrews.ac.uk/Extras/Poincare\_Future.html
}
}
In the upcoming sections we provide examples of how ``plus'' can apply to
many different objects: floating point numbers, integers, and sparse and
dense matrices. Another
example is the use of the same name, ``det'', for determinant, covering the
very different algorithms that apply to very different matrix
structures. The use of overloading not only for single argument
functions, but also for multiple argument functions, is already a powerful
abstraction.
\subsection{Multiple Dispatch}
\label{sec:dispatch}
Multiple dispatch is the selection of a function implementation
based on the types of each argument of the function.
It is not only a nice notation to remove a long list of ``case'' statements,
but it is part of the reason for Julia's speed.
It is expressed in Julia by annotating the type of a function
argument in a function definition with the following syntax: \verb+argument::Type+.
Mathematical notations that are often used in
print are difficult to employ in programs. For example, we can
teach the computer some natural ways to multiply numbers and
functions. Suppose that $a$ and $t$ are scalars, and $f$ and $g$ are
functions, and we wish to define
\begin{enumerate}
\item { \bf Number x Function \ = scale output:} $a*g$ is the function that takes $x$ to $a*g(x)$
\vspace{-.05in}
\item {\bf Function x Number \ = scale argument :} $f*t$ is the function that takes $x$ to $f(tx)$ and
\vspace{-.05in}
\item {\bf Function x Function = composition of functions:} $f*g$ is the function that takes $x$ to $f(g(x))$.
\end{enumerate}
If you are a mathematician who does not program, you would not see the
fuss. If you thought how you might implement this in your favorite
computer language, you might immediately see the benefit. In Julia,
multiple dispatch makes all three uses of \verb+*+ easy to express:
\begin{jinput}
*(a::Number, g::Function)= x->a*g(x) \> \sh{ \# Scale output } \\
*(f::Function,t::Number) = x->f(t*x) \> \sh{ \# Scale argument } \\
*(f::Function,g::Function)= x->f(g(x)) \sh{ \# Function composition}
\end{jinput}
Here, multiplication is dispatched by the type of its first and second
arguments. It goes the usual way if both are numbers, but there are
three new ways if one, the other, or both are functions.
\begin{figure}
\centering
\includegraphics[width=3in]{gauss.jpg}
\caption{\label{fig:gauss} Gauss quote hanging from the ceiling of the long-standing Mathematica exhibit at the Boston Museum of Science.
}
\end{figure}
These definitions exist as part of a larger system of generic definitions,
which can be reused by later definitions.
Consider the case of the mathematician Gauss' preference for
$\sin^2 \phi$ to refer to $\sin(\sin(\phi))$ and not
$\sin(\phi)^2$ (writing that ``$\sin^2(\phi)$ is odious to me, even
though Laplace made use of it''; see Figure~\ref{fig:gauss}).
By defining \verb+*(f::Function,g::Function)= x->f(g(x))+,
{\tt (f\^{}2)(x)} automatically computes $f(f(x))$ as Gauss
wanted. This is a consequence of a generic definition that evaluates
\verb+x^2+ as \verb+x*x+ no matter how \verb+x*x+ is defined.
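As a brief usage sketch of the definitions above (our \verb+double+ function is a hypothetical example, written in the Julia syntax of the era described here; exact printed results may differ):
\begin{jinput}
\sh{\# A hedged usage sketch, assuming the three * methods above are defined}
\begin{verbatim}
double(x) = 2x
h = double * 3        # scale the argument: h(x) computes double(3x)
h(5)                  # 30
(double * double)(5)  # composition: double(double(5)) == 20
(double^2)(5)         # also 20, since x^2 falls back to x*x
\end{verbatim}
\end{jinput}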
This paradigm is a natural fit for numerical computing, since so
many important operations involve interactions among multiple
values or entities. Binary arithmetic operators are obvious examples,
but many other uses abound. The fact that the compiler can pick the
sharpest matching definition of a function based on its input types
helps achieve higher performance, by keeping the code execution paths
tight and minimal.
We have not seen this in the literature but it seems worthwhile to point out four possibilities:
\begin{enumerate}
\item Static single dispatch (not done)
\item Static multiple dispatch (frequent in static languages, e.g. C++ overloading)
\item Dynamic single dispatch (Matlab's object oriented system might fall in this category though it has its own special characteristics)
\item Dynamic multiple dispatch (usually just called multiple dispatch).
\end{enumerate}
In Section \ref{sec:traditional} we discuss the comparison with
traditional object oriented approaches. Class-based object oriented
programming could reasonably be called dynamic single dispatch, and
overloading could reasonably be called static multiple dispatch.
Julia's dynamic multiple dispatch
approach is more flexible and adaptable while still retaining
powerful performance capabilities. Julia programmers often find that
dynamic multiple dispatch makes it easier to structure their programs
in ways that are closer to the underlying science.
\subsection{Code selection from bits to matrices}
Julia uses the same mechanism for code selection at all levels, from
the top to the bottom.
\begin{center}
\begin{tabular}{|c|c|c|} \hline
f & Function & Operand Types \\\hline
Low Level ``+'' & Add Numbers & \{Float, Int\} \\
High Level ``+'' & Add Matrices & \{Dense Matrix, Sparse Matrix\} \\
``*'' & Scale or Compose & \{Function, Number\} \\ \hline
\end{tabular}
\end{center}
\subsubsection{Summing Numbers: Floats and Ints}
\label{sec:bits}
We begin at the lowest level. Mathematically, integers are thought of as being special real numbers, but
on a computer, an Int and a Float have two very different representations.
Ignoring for a moment that there are even many choices of Int and Float representations,
if we add two numbers,
code selection based on numerical representation is taking place at a very low level.
Most users are blissfully unaware of this code selection, because it is hidden somewhere that is usually
off-limits to the user.
Nonetheless, one can follow the evolution of the high level code all the way down to the assembler level which ultimately would
reveal an ADD instruction for integer addition, and, for example, the AVX\footnote{AVX: \color{red}A\color{black}dvanced \color{red}V\color{black}ector e\color{red}X\color{black}tension to the x86 instruction set} instruction VADDSD\footnote{VADDSD: \color{red}{V}\color{black}ector \color{red}ADD S\color{black}calar \color{red}D\color{black}ouble-precision} for floating point addition in the language
of x86 assembly level instructions. The point being that these are ultimately two different algorithms being called, one for a pair of Ints and one for a pair of Floats.
Figure \ref{fig:nativeadd} takes a close look at what a computer
must do to perform \verb-x+y- depending on whether (x,y) is (Int,Int), (Float,Float), or (Int,Float) respectively.
In the first case, an integer add is called, while in the second case a float add is called. In the last case, a promotion of the int to float is called through the x86 instruction VCVTSI2SD\footnote{VCVTSI2SD: \color{red}{V}\color{black}ector
\color{red}{C}\color{black}{on}\color{red}{V}\color{black}{er}\color{red}{T}\color{black}{ Doubleword} (\color{red}S\color{black}calar) \color{red}{I}\color{black}{nteger to}\color{red}{(2) S}\color{black}{calar} \color{red}{D}\color{black}ouble Precision Floating-Point Value}, and then the float add follows.
It is instructive to build a Julia simulator in Julia itself.
Let us define the aforementioned assembler instructions using Julia.
\begin{jinput}
\sh {\# Simulate the assembly level add, vaddsd, and vcvtsi2sd commands}
\begin{verbatim}
add(x::Int ,y::Int) = x+y
vaddsd(x::Float64,y::Float64) = x+y
vcvtsi2sd(x::Int) = float(x)
\end{verbatim}
\end{jinput}
\begin{jinput}
\sh{\# Simulate Julia's definition of + using $\oplus$} \\
\sh{\# To type $\oplus$, type as in TeX, \textbackslash oplus and hit the <tab> key} \\
$\oplus$\verb+(x::Int, y::Int) = add(x,y)+ \\
$\oplus$(x::Float64,y::Float64) = vaddsd(x,y) \\
$\oplus$\verb+(x::Int, y::Float64) = vaddsd(vcvtsi2sd(x),y)+ \\
$\oplus$\verb+(x::Float64,y::Int) = y+ $\oplus$ x\\
\end{jinput}
\begin{jinput}
methods($\oplus$)
\end{jinput}\begin{joutput}
4 methods for generic function $\oplus$:\\
$\oplus$ (x::Int64,y::Int64) at In[23]:3 \\
$\oplus$ (x::Float64,y::Float64) at In[23]:4 \\
$\oplus$ (x::Int64,y::Float64) at In[23]:5\\
$\oplus$ (x::Float64,y::Int64) at In[23]:6
\end{joutput}
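As a hedged usage sketch of the simulated $\oplus$ (results shown as comments; the actual REPL display may differ):
\begin{jinput}
1 $\oplus$ 2 \> \sh{\# add(1,2), an integer add: 3} \\
1 $\oplus$ 2.5 \> \sh{\# vcvtsi2sd(1) then vaddsd: 3.5} \\
2.5 $\oplus$ 1 \> \sh{\# reuses the (Int, Float64) method: 3.5}
\end{jinput}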
\begin{figure}
\caption{\label{fig:nativeadd} While assembly code may seem intimidating, Julia disassembles readily. Armed with the {\tt code\_native} command in Julia and perhaps a good list of
assembler commands such as may be found on {\tt http://docs.oracle.com/cd/E36784\_01/pdf/E36859.pdf}
or
{\tt http://en.wikipedia.org/wiki/X86\_instruction\_listings}
one can really learn to
see the details of
code selection in action at the lowest levels.
More importantly one can begin to understand that Julia is fast
because the assembly code produced is so tight.}
\hspace*{.14in}
\begin{jinput}
f(a,b) = a + b
\end{jinput}\begin{joutput}
f (generic function with 1 method)
\end{joutput}
\hspace*{.14in}
\begin{jinput}
\sh{\# Ints add with the x86 \underline{add} instruction}
\vspace{-.06in}
\begin{verbatim}
@code_native f(2,3) \end{verbatim}
\end{jinput}\begin{joutput}
push RBP \\
mov RBP, RSP \\
\sh{add} RDI, RSI \\
mov RAX, RDI \\
pop RBP \\
ret
\end{joutput}
\hspace*{.14in}
\begin{jinput}
\sh{\# Floats add, for example, with the x86 \underline{vaddsd} instruction}
\vspace{-.06in}
\begin{verbatim}
@code_native f(1.0,3.0)
\end{verbatim}
\end{jinput}\begin{joutput}
push RBP \\
mov RBP, RSP\\
\sh{vaddsd} XMM0, XMM0, XMM1\\
pop RBP\\
ret
\end{joutput}
\hspace*{.14in}
\begin{jinput}
\sh{\# Int + Float requires a convert to scalar double precision, hence \\ \# the x86 \underline{vcvtsi2sd} instruction}
\vspace{-.06in}
\begin{verbatim}
@code_native f(1.0,3)
\end{verbatim}
\end{jinput}\begin{joutput}
push RBP \\
mov RBP, RSP \\
\sh{vcvtsi2sd} XMM1, XMM0, RDI \\
\sh{vaddsd} XMM0, XMM1, XMM0 \\
pop RBP \\
ret
\end{joutput}
\end{figure}
\subsubsection{Summing Matrices: Dense and Sparse}
\label{sec:summing}
We now move to a much higher level: matrix addition.
The versatile ``+" symbol lets us add matrices.
Mathematically, sparse matrices are thought of as being special matrices
with enough zero entries.
On a computer, dense matrices are (usually) contiguous blocks of data with a few parameters attached,
while sparse matrices (which may be stored in many ways) require storage of index information one way or another.
If we add two matrices, code selection must take place depending on whether the summands are (dense,dense),
(dense,sparse), (sparse,dense) or (sparse,sparse).
While this is at a much higher level, the basic pattern is unmistakably the same as that
of Section \ref{sec:bits}.
We show how to use a dense algorithm in the implementation
of $\oplus$ when either $A$ or $B$ (or both) are dense. A sparse algorithm
is used when both $A$ and $B$ are sparse.
\begin{jinput}
\sh{\# Dense + Dense} \\
$\oplus$(A::Matrix, B::Matrix) =\\
\hspace*{0.3in} [A[i,j]+B[i,j] for i in 1:size(A,1),j in 1:size(A,2)] \\
\sh{\# Dense + Sparse} \\
$\oplus$(A::Matrix, B::AbstractSparseMatrix) = A $\oplus$ full(B) \\
\sh{\# Sparse + Dense} \\
$\oplus$(A::AbstractSparseMatrix,B::Matrix) \ = B $\oplus$ A \sh{\# Use Dense + Sparse} \\
\sh{\# Sparse + Sparse is best written using the long form function definition:}
function $\oplus$(A::AbstractSparseMatrix, B::AbstractSparseMatrix)
\vspace{-0.06in}
\begin{verbatim}
C=copy(A)
(i,j)=findn(B)
for k=1:length(i)
C[i[k],j[k]]+=B[i[k],j[k]]
end
return C
end
\end{verbatim}
\end{jinput}
We now have eight methods for the function $\oplus$, four for the low level sum,
and four more for the high level sum.
\begin{jinput}
methods($\oplus$)
\end{jinput}\begin{joutput}
8 methods for generic function $\oplus$:\\
$\oplus$ (x::Int64,y::Int64) at In[23]:3 \\
$\oplus$ (x::Float64,y::Float64) at In[23]:4 \\
$\oplus$ (x::Int64,y::Float64) at In[23]:5\\
$\oplus$ (x::Float64,y::Int64) at In[23]:6 \\
$\oplus$ (A::Array\{T,2\},B::Array\{T,2\}) at In[29]:1 \\
$\oplus$ (A::Array\{T,2\},B::AbstractSparseArray\{Tv,Ti,2\}) at In[29]:1 \\
$\oplus$ (A::AbstractSparseArray\{Tv,Ti,2\},B::Array\{T,2\}) at In[29]:1 \\
$\oplus$ (A::AbstractSparseArray\{Tv,Ti,2\},B::AbstractSparseArray\{Tv,Ti,2\}) at In[29]:2
\end{joutput}
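As a hedged usage sketch (using the era's \verb+speye+ to build a $2 \times 2$ sparse identity; the exact display may differ), the high level methods compose as expected:
\begin{jinput}
\sh{\# Dense + Sparse dispatches to the A $\oplus$ full(B) method above} \\
\verb+ones(2,2)+ $\oplus$ \verb+speye(2)+ \> \sh{\# a 2x2 dense matrix: [2.0 1.0; 1.0 2.0]}
\end{jinput}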
\subsection{The many levels of code selection}
In Julia as in mathematics, functions are as important as the data
they operate on, their arguments. Perhaps even more so.
We can create a new
function \verb+foo+ and give it six definitions depending on the
combination of argument types. The following example also introduces
unfamiliar readers to some terms from
programming language research.
It is not critical that these terms be understood all at once.
\begin{jinput}
\sh {\# Define a} generic function \sh{with 6 methods. Each method is itself a} \\
\sh{ \# function. In Julia generic functions are far more convenient than the } \\
\sh{ \# multitude of case statements seen in other languages.
When Julia sees} \\
\sh{\#} foo, \sh{ it decides which method to use, rather than first seeing and deciding} \\
\sh{\# based on the type.}
\begin{verbatim}
foo() = "Empty input"
foo(x::Int) = x
foo(S::String) = length(S)
foo(x::Int, S::String) = "An Int and a String"
foo(x::Float64,y::Float64) = sqrt(x^2+y^2)
foo(a::Any,b::String)= "Something more general than an Int and a String"
\end{verbatim}
\sh{\# The function name} foo \sh{is overloaded. This is an example of} polymorphism. \\
\sh{\# In the jargon of computer languages this is called} ad-hoc polymorphism. \\
\sh{\# The} multiple dynamic dispatch \sh{idea captures the notion that the generic } \\
\sh{\# function is deciphered dynamically at runtime. One of the six choices} \\
\sh{\# will be made or an error will occur.}
\end{jinput}
\begin{joutput}
foo (generic function with 6 methods)
\end{joutput}
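A few hedged example calls (results shown as comments; the exact printed form may differ):
\begin{jinput}
\begin{verbatim}
foo(1, "abc")    # "An Int and a String"
foo(1.5, "abc")  # the more general (Any, String) method applies
foo(3.0, 4.0)    # sqrt(3.0^2 + 4.0^2) == 5.0
foo()            # "Empty input"
\end{verbatim}
\end{jinput}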
Any one instance of \verb+foo+ is known as a {\it method}. The
collection of six methods is referred to as a {\bf generic function}.
The word ``polymorphism'' refers to the use of the same name
(\verb+foo+, in this example) for functions operating on different types.
Contemplating the Poincar\'{e} quote in Footnote 5, it is worth pausing to
consider everything to which you are giving the same name. In real-life
coding, one tends to reuse a name when the abstraction makes
a great deal of sense. Using ``+'' for ints, floats, dense
matrices, and sparse matrices gives the same name to different things;
the corresponding methods are grouped into one generic function.
While mathematics is the art of giving the same name to seemingly
different things, a computer has to eventually execute the right
program in the right circumstance. Julia's code selection operates at
multiple levels in order to translate a user's abstract ideas into
efficient execution. A generic function can operate on several
arguments, and the method with the most specific signature matching
the arguments is invoked. It is worth crystallizing some key aspects
of this process:
\begin{enumerate}
\item The same name can be used for different functions in different
circumstances. For example, \verb+select+ may refer to the selection
algorithm for finding the $k^{th}$ smallest element in a list, or to
select records in a database query, or simply as a user-defined
function in a user's own program. Julia's namespaces allow the
usage of the same vocabulary in different circumstances in a simple
way that makes programs easy to read.
\item A collection of functions that represent the same idea but
operate on different structures are naturally referred to by the
same name. Which method is called is based entirely on the types of
all the arguments - this is multiple dispatch. The function
\verb+det+ may be defined for all matrices at an abstract
level. However, for reasons of efficiency, Julia defines different
methods for different types of matrices, depending on whether they
are dense or sparse, or if they have a special structure such as
diagonal or tridiagonal.
\item Within functions that operate on the same structure, there may
be further differences based on the different types of data contained
within. For example, whether the input is a vector of Float64 values
or Int32 values, the norm is computed in the same exact way,
with a common body of code, but the compiler is able to generate
different executable code from the abstract specification.
\item Julia uses the same mechanism of code selection at the lowest
and highest levels - whether it is performing operations on matrices or
operations on bits. As a result, Julia is able to optimize the whole
program, picking the right method at the right time, either at
compile-time or run-time.
\end{enumerate}
\subsection{Is ``code selection" just traditional object oriented programming?}
\label{sec:traditional}
\begin{figure}
\centering
\hspace*{1in}\begin{minipage}{5in}
\begin{shaded}
\sh{/* Polymorphic Java Example. Method defined by types of two arguments. */} \\
\begin{verbatim}
public class OverloadedAddable {
  public int addthem(int i, int f) {
    return i+f;
  }
  public double addthem(int i, double f) {
    return i+f;
  }
  public double addthem(double i, int f) {
    return i+f;
  }
  public double addthem(double i, double f) {
    return i+f;
  }
}
\end{verbatim}
\end{shaded}
\end{minipage} \caption{\label{fig:java} Advantages of Julia: It is
true that the above Java code is polymorphic based on the types of
the two arguments.
(``Polymorphism'' is the use of the same name for a function
that may have different type arguments.)
However, in Java if the method {\tt addthem} is
called, the types of the arguments must be known at compile time.
This is static dispatch. Java is also encumbered by encapsulation:
in this case {\tt addthem} is encapsulated inside the {\tt
OverloadedAddable} class. While this is considered a safety
feature in Java culture, it becomes a burden for numerical
computing. }
\end{figure}
The method to be executed in Julia is not chosen by only one argument,
which is what happens in the case of single dispatch, but through
multiple dispatch that considers the types of all the arguments.
Julia is not encumbered by the encapsulation restrictions (class based
methods) of most object oriented languages. The generic functions play
a more important role than the data types. Some call these ``verb''-based
languages, as opposed to most object oriented languages, which are
``noun'' based. In numerical computing, it is the concept of ``solve
$Ax=b$'' that often feels more primary, at the highest level, rather
than whether the matrix $A$ is full, sparse, or structured. Readers
familiar with Java might think, ``So what? One can easily create
methods based on the types of the arguments.'' An example is provided
in Figure \ref{fig:java}. However, a moment's thought shows that the
following dynamic situation in Julia is impossible to express in Java:
\begin{jinput}
\sh {\# It is possible for a static compiler to know that x,y are Float} \\
x = rand(Bool) ? 1.0 : 2.0 \\
y = rand(Bool) ? 1.0 : 2.0 \\
x+y \\
\sh {\# It is impossible to know until runtime if x,y are Int or Float} \\
x = rand(Bool) ? 1 : 1.0 \\
y = rand(Bool) ? 1 : 1.0 \\
x+y
\end{jinput}
\vspace{0.1in}
Readers may be familiar with the single dispatch mechanism of Matlab. It
is unusual in that it is not completely class based: code
selection follows Matlab's own precedence rules. In Matlab the
leftmost object ordinarily has precedence, but user-defined classes take
precedence over built-in classes, and Matlab also provides a mechanism for
defining a custom precedence hierarchy.
Julia generally shuns the notion of ``built-in" vs.\ ``user-defined"
preferring to focus on the method to be performed based on the
combination of types, and obtaining high performance as a byproduct.
A high level library writer, which we do not distinguish from any
user, has to match the best algorithm for the best input structure. A
sparse matrix would match to a sparse routine, a dense matrix to a
dense routine. A low level language designer has to make sure that
integers are added with an integer adder, and floating points are
added with a float adder. Despite the very different levels, the
reader might recognize that deep down, these are both examples of code
being selected to match the structure of the problem.
Readers familiar with object-oriented paradigms such as C++ or Java
are most likely familiar with the approach of encapsulating methods
inside classes.
Julia's more general multiple dispatch mechanism (also known as
generic functions, or multi-methods) is a paradigm where methods are
defined on combinations of data types (classes). Julia has proven that
this approach is remarkably well suited for numerical
computing.
A class based language might express the sum of a sparse matrix with a
full matrix as follows: {\tt A\_sparse\_matrix.plus(A\_full\_matrix)}.
Similarly it might express indexing as \newline {\tt
A\_sparse\_matrix.sub(A\_full\_matrix)} . If a tridiagonal were
added to the system, one has to find the method {\tt plus} or {\tt
sub} which is encapsulated in the sparse matrix class, modify it and
test it. Similarly, one has to modify every full matrix method, etc.
We believe that class-based methods, which can be taken quite far, are
not sufficiently powerful to express the full gamut of abstractions in
scientific computing. Further, the burdens of encapsulation create a
wall around objects and methods that are counterproductive for
numerical computing. \newline \hspace*{.2in} The generic function
idea captures the notion that a method for a general operation on
pairs of matrices may exist (e.g. ``+'') but if a more specific
operation is possible (e.g. ``+'' on sparse matrices, or ``+'' on a
special matrix structure like Bidiagonal), then the more specific
operation is used. We also mention indexing as another example: why
should the indexee take precedence over the index?
\subsection{Quantifying the use of multiple dispatch}
In~\cite{juliaarray} we performed an analysis to substantiate the
claim that multiple dispatch, an esoteric idea for numerical computing from computer
languages, finds its killer application in scientific computing.
We wanted to answer for ourselves the question of whether there
was really anything different about how \text{Julia}\ uses multiple
dispatch.
Table \ref{dispatchratios} gives an answer in terms of Dispatch ratio (DR),
Choice ratio (CR), and Degree of specialization (DoS).
While multiple dispatch is an idea that has been circulating for some time,
its application to numerical computing appears to have significantly favorable
characteristics compared to previous applications.
To quantify how heavily a language feature is used,
we use the following metrics for evaluating the extent of multiple
dispatch \cite{Muschevici:2008}:
\begin{enumerate}
\item Dispatch ratio (DR): The average number of methods in a generic
function.
\item Choice ratio (CR): For each method, the total number of methods
over all generic functions it belongs to, averaged over all
methods. This is essentially the sum of the squares of the number of
methods in each generic function, divided by the total number of
methods. The intent of this statistic is to give more weight to
functions with a large number of methods.
\item Degree of specialization (DoS): The average number of
type-specialized arguments per method.
\end{enumerate}
Table~\ref{dispatchratios} shows the mean of each metric over the
entire Julia \code{Base} library, showing a high degree of multiple
dispatch compared with corpora in other languages
\cite{Muschevici:2008}. Compared to most multiple dispatch systems,
\text{Julia}\ functions tend to have a large number of definitions. To see
why this might be, it helps to compare results from a biased sample of
common operators. These functions are the most obvious candidates for
multiple dispatch, and as a result their statistics climb
dramatically. \text{Julia}\ is focused on numerical computing, and so is
likely to have a large proportion of functions with this character.
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|}\hline
Language & DR & CR & DoS \\
\hline \hline
Gwydion & 1.74 & 18.27 & 2.14 \\
\hline
OpenDylan & 2.51 & 43.84 & 1.23 \\
\hline
CMUCL & 2.03 & 6.34 & 1.17 \\
\hline
SBCL & 2.37 & 26.57 & 1.11 \\
\hline
McCLIM & 2.32 & 15.43 & 1.17 \\
\hline
Vortex & 2.33 & 63.30 & 1.06 \\
\hline
Whirlwind & 2.07 & 31.65 & 0.71 \\
\hline
NiceC & 1.36 & 3.46 & 0.33 \\
\hline
LocStack & 1.50 & 8.92 & 1.02 \\
\hline
\text{Julia}\ & 5.86 & 51.44 & 1.54 \\
\hline
\text{Julia}\ operators & 28.13 & 78.06 & 2.01 \\
\hline
\end{tabular}
\end{center}
\caption{\label{dispatchratios}
A comparison of \text{Julia}\ (1208 functions exported from the \code{Base} library)
to other languages with multiple dispatch.
The ``\text{Julia}\ operators'' row describes 47 functions with special syntax
(binary operators, indexing, and
concatenation).
Data for other systems are from \cite{Muschevici:2008}.
The results indicate that \text{Julia}\ is using multiple dispatch far more heavily
than previous systems. }
\end{table}
\subsection{Case Study for Numerical Computing}
The complexity of linear algebra software has been nicely captured in the
context of LAPACK and ScaLAPACK by Demmel, Dongarra, et al.\
\cite{lawn181}, and is reproduced verbatim here:
\begin{verbatim}
(1) for all linear algebra problems
(linear systems, eigenproblems, ...)
(2) for all matrix types
(general, symmetric, banded, ...)
(3) for all data types
(real, complex, single, double, higher precision)
(4) for all machine architectures
and communication topologies
(5) for all programming interfaces
(6) provide the best algorithm(s) available in terms of
performance and accuracy (``algorithms" is plural
because sometimes no single one is always best)
\end{verbatim}
In the language of Computer Science, code reuse is about taking
advantage of polymorphism. In the general language of mathematics
it's about taking advantage of abstraction, or the sameness of two
things. Either way, programs are efficient, powerful, and
maintainable if programmers are given powerful mechanisms to reuse
code.
Increasingly, the applicability of linear algebra has gone well beyond
the LAPACK world of floating point numbers. These days linear algebra
is being performed on, say, high precision numbers, integers, elements
of finite fields, or rational numbers. There will always be a special
place for the BLAS, and the performance it provides for floating point
numbers. Nonetheless, linear algebra operations transcend any one
data type. One must be able to write a general implementation and as
long as the necessary operations are available, the code should just
work \cite{andreas}. That is the power of code reuse.
\subsubsection{Determinant: Simple Single Dispatch}
In traditional numerical computing there were people with special skills known as
library writers. Most users were, well, just users of libraries. In this case study,
we show how anybody can dispatch a new determinant function based solely
on the type of the argument.
For triangular and diagonal structures the obvious formulas are used.
For general matrices, the programmer will compute a QR decomposition
of the matrix and find the determinant as the product of the diagonal
elements of $R$.\footnote{LU is more efficient. We simply wanted to
illustrate that other ways are possible.} For symmetric tridiagonals the
usual three-term recurrence formula~\cite{threetermrecurrence} is used. (The first four are defined
as one line functions; the symmetric tridiagonal uses the long form.)
\begin{jinput}
\sh{\# Simple determinants defined using the short form for functions} \\
newdet(x::Number) = x \\
newdet(A::Diagonal ) = prod(diag(A)) \\
newdet(A::Triangular) = prod(diag(A)) \\
newdet(A::Matrix) = -prod(diag(qrfact(full(A))[:R]))*(-1)\^{}size(A,1) \\
\sh{\# Tridiagonal determinant defined using the long form for functions} \\
function newdet(A::SymTridiagonal) \\
\, \sh{\# Assign c and d as a pair }\\
\, c,d = 1, A[1,1] \\
\, for i=2:size(A,1) \\
\, \, {\sh{ \# temp=d, d=the expression, c=temp}} \\
\, \, c,d = d, d*A[i,i]-c*A[i,i-1]{}\^ {}{2} \\
\, end \\
\, d \\
end \\
\end{jinput}
We have illustrated a mechanism to select a determinant formula at runtime based on the type of
the input argument. If Julia knows an argument type early, it can make use of this information
for performance. If it does not, code selection can still happen, at runtime. The reason why Julia
can still perform well is that once code selection based on type occurs, Julia can return to performing well
once inside the method.
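A hedged usage sketch of \verb+newdet+ (small inputs chosen so the answers can be checked by hand; the display may differ):
\begin{jinput}
\begin{verbatim}
newdet(Diagonal([1,2,3]))              # 6
newdet(SymTridiagonal([1,2,3],[1,1]))  # 2, via the three-term recurrence
\end{verbatim}
\end{jinput}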
\subsubsection{A Symmetric Arrow Matrix Type}
\label{sec:arrow}
In the field of Matrix Computations, there are matrix structures and operations on these matrices. In Julia, these
structures exist as Julia types.
Julia has a number of predefined matrix structure types: the (dense) \verb+Matrix+, the (compressed sparse column) \verb+SparseMatrixCSC+,
\verb+Symmetric+, \verb+Hermitian+, \verb+SymTridiagonal+,
\verb+Bidiagonal+, \verb+Tridiagonal+, \verb+Diagonal+,
and \verb+Triangular+.
The operations on these matrices exist as Julia functions. Familiar examples of operations are indexing, determinant, size,
and matrix addition. Since matrix addition takes two arguments, it may be necessary to reconcile two different types when
computing the sum.
Some languages do not allow you to extend their built-in functions to
operate on new, user-defined types; Julia does, and this ability is known
as external dispatch. In the following example, we illustrate how the user can add symmetric
arrow matrices to the system, and then add a specialized \verb+det+ method to compute
the determinant of a symmetric arrow matrix efficiently.
We build on the symmetric arrow type introduced in Section \ref{sec:firstclass}.
\begin{jinput}
\sh{\# Define a Symmetric Arrow Matrix Type} \\
immutable SymArrow\{T\} <: AbstractMatrix\{T\} \\
\, dv::Vector\{T\} \sh{ \# diagonal }\\
\, ev::Vector\{T\} \sh{ \# 1st row{}[2:n{}] }\\
\> end
\end{jinput}
\begin{jinput}
\sh {\# Define its size} \\
importall Base \\
size(A::SymArrow, dim::Integer) = size(A.dv,1) \\
size(A::SymArrow)= size(A,1), size(A,1)
\end{jinput}\begin{joutput}
size (generic function with 52 methods)
\end{joutput}
\begin{jinput}
\sh {\# Index into a SymArrow} \\
function getindex(A::SymArrow,i::Integer,j::Integer) \\
\, if i==j; return A.dv[i] \\
\, \> elseif i==1; return A.ev{}[j-1{}] \\
\, \> elseif j==1; return A.ev{}[i-1{}] \\
\, \> else return zero(typeof(A.dv{}[1{}])) \\
\, end \\
end
\end{jinput}\begin{joutput}
getindex (generic function with 168 methods)
\end{joutput}
\begin{jinput}
\sh {\# Dense version of SymArrow} \\
full(A::SymArrow) =[A[i,j] for i=1:size(A,1), j=1:size(A,2)]
\end{jinput}\begin{joutput}
full (generic function with 17 methods)
\end{joutput}
\begin{jinput}
\sh {\# An example} \\
S=SymArrow({}[1,2,3,4,5{}],{}[6,7,8,9{}])
\end{jinput}\begin{joutput}
5x5 SymArrow\{Int64\}: \\
\sh{ 1 6 7 8 9} \\
\sh{ 6 2} 0 0 0 \\
\sh{7} 0 \sh{3} 0 0 \\
\sh{8} 0 0 \sh{4} 0 \\
\sh{9} 0 0 0 \sh{5}
\end{joutput}
\begin{jinput}
\sh{\# det for SymArrow (external dispatch example)} \\
function exc\_prod(v) \sh{\# prod(v)/v[i] } \\
\, [prod(v{}[{}[1:(i-1),(i+1):end{}]{}]) for i=1:size(v,1)] \\
end \\
\sh{\# det for SymArrow formula} \\
det(A::SymArrow) = prod(A.dv)-sum(A.ev.\textasciicircum 2.*exc\_prod(A.dv{}[2:end{}]))
\end{jinput}\begin{joutput}
det (generic function with 17 methods)
\end{joutput}
The above Julia code uses the special formula
$$\det(A)=\prod_{i=1}^n d_i - \sum_{i=2}^n e_i^2 \prod_{2 \le j \ne i \le n} d_j ,$$
valid for symmetric arrow matrices with diagonal $d$ and first row starting with the second entry $e$.
In some numerical computing languages,
a function might begin with a lot of argument checking to pick which algorithm to use. In Julia, one creates
a number of {\it methods.} Thus \verb+newdet+ on a diagonal is one method for \verb+newdet+, and \verb+newdet+
on a triangular matrix is a second method. \verb+det+ on a \verb+SymArrow+ is a new method for \verb+det+. (See Section 4.6.1.)
Code is selected, in advance if the compiler knows the type, otherwise the code is selected at run time. The selection
of code is known as {\it dispatch.}
We have seen a number of examples of code selection for single dispatch, i.e.,
the selection of code based on the type of a single argument.
We can now turn to a powerful feature, Julia's multiple dispatch mechanism.
Now that we have created a symmetric arrow matrix, we might want to add it
to all possible matrices of all types. However, we might notice that
a symmetric arrow plus a diagonal does not require operations on full dense matrices.
The code below starts with the most general case, and then allows for specialization
for the symmetric arrow and diagonal sum:
\begin{jinput}
\sh{\# SymArrow + Any Matrix: (Fallback: add full dense arrays )} \\
+(A::SymArrow, B::Matrix) = full(A)+B \\
+(B::Matrix, A::SymArrow) = A+B \\
\sh{\# SymArrow + Diagonal: (Special case: add diagonals, copy off-diagonal) }\\
+(A::SymArrow, B::Diagonal) = SymArrow(A.dv+B.diag,A.ev) \\
+(B::Diagonal, A::SymArrow) = A+B
\end{jinput}
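Continuing the example (a hedged sketch; printed forms may differ), adding a \verb+Diagonal+ to \verb+S+ stays in the cheap \verb+SymArrow+ representation, while adding an ordinary dense matrix falls back to the dense method:
\begin{jinput}
\begin{verbatim}
S + Diagonal(ones(Int,5))  # SymArrow([2,3,4,5,6],[6,7,8,9]); no dense work
S + eye(5)                 # falls back to full(S) plus a dense matrix
\end{verbatim}
\end{jinput}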
\begin{figure}
\centering
\includegraphics[width=6.5in]{benchmarks.pdf}
\caption{\label{fig:performance}{Performance comparison of various languages on simple micro-benchmarks. Benchmark execution time relative to C. (Smaller is better, C performance = 1.0).}}
\end{figure}
\section{Leveraging language design for high performance libraries}
\label{sec:lang}
Seemingly innocuous design choices in a language can have profound,
pervasive performance implications. These are often overlooked in
languages that were not designed from the beginning to be able to
deliver excellent performance. Other aspects of language and library
design affect the usability, composability, and power of the provided
functionality.
\subsection{Integer arithmetic}
A simple but crucial example of a performance-critical language design
choice is integer arithmetic. Julia uses machine arithmetic for
integer computations.
Consider what happens if we make the number of loop iterations fixed:
\begin{jinput}
\sh{\# 10 Iterations of f(k)=5k-1 on integers} \\
function g(k) \\
\, for i = 1:10 \\
\, \> k = f(k) \\
\, end \\
\, k \\
end
\end{jinput}\begin{joutput}
g (generic function with 2 methods)
\end{joutput}
\begin{jinput}
code\_native(g,(Int,))
\end{jinput}\begin{joutput}
Source line: 3 \\
\> push RBP \\
\> mov RBP, RSP \\
Source line: 3 \\
\> \sh{imul} RAX, RDI, 9765625 \\
\> \sh{add} RAX, -2441406 \\
Source line: 5 \\
\> pop RBP \\
\> ret \\
\end{joutput}
Because the compiler knows that integer addition and multiplication are associative and that multiplication distributes over addition, it can optimize the entire loop down to just a multiply and an add. Indeed, if $f(k)=5k-1$, the tenfold iterate is $f^{(10)}(k)=-2441406 + 9765625 k$.
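We can check the compiler's closed form by hand (a hedged sketch; \verb+f+ is restated here for completeness):
\begin{jinput}
\begin{verbatim}
f(k) = 5k - 1
g(3)                  # 26855469
9765625*3 - 2441406   # 26855469, matching the compiled closed form
\end{verbatim}
\end{jinput}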
\subsection{A powerful approach to linear algebra}
We describe how the Julia language features have been used to provide
a powerful approach to linear algebra\cite{andreas}.
\subsubsection{Matrix factorizations}
For decades, orthogonal matrices have been represented internally as
products of Householder matrices stored in terms of vectors, and displayed for humans as matrix
elements. $LU$ factorizations are often performed in place,
storing the $L$ and $U$ information together in the data locations
originally occupied by $A$. All this speaks to the fact that matrix
factorizations deserve to be first class objects in a linear algebra
system.
In Julia, thanks to the contributions of Andreas Noack Jensen \cite{andreas} and
many others, these structures are indeed first class objects. The
structure \verb+QRCompactWY+ holds a compact $Q$ and an $R$ in memory.
Similarly an \verb+LU+ holds an $L$ and $U$ in packed form in memory.
Through the magic of multiple dispatch, we can solve linear systems,
extract the pieces, and do least squares directly on these
structures.
The $QR$ example is even more fascinating. Suppose one computes $QR$ of
a $4 \times 3$ matrix. What is the size of $Q$? The right answer, of
course, is that it depends: it could be $4 \times 4$ or $4 \times 3$. The
underlying representation is the same.
In Julia one can compute \verb+Aqr = qrfact(rand(4,3))+. Then one
extracts $Q$ from the factorization with \verb+Q=Aqr[:Q]+. This $Q$ retains its clever underlying structure
and therefore is efficient and applicable when multiplying vectors
of length 4 or length 3, contrary to the rules of freshman linear
algebra, but welcome in numerical libraries for saving space and
faster computations.
\begin{jinput}
A=[1 2 3 \\
1 2 1 \\
1 0 1 \\
1 0 -1] \\
Aqr = qrfact(A); \\
Q = Aqr[:Q] \end{jinput}\begin{joutput}
\begin{verbatim}
4x4 QRCompactWYQ{Float64}:
-0.5 -0.5 -0.5
-0.5 -0.5 0.5
-0.5 0.5 -0.5
-0.5 0.5 0.5
\end{verbatim}
\end{joutput}
\begin{jinput} Q*[1,0,0,0] \end{jinput}\begin{joutput}
\begin{verbatim}
4-element Array{Float64,1}:
-0.5
-0.5
-0.5
-0.5
\end{verbatim}
\end{joutput}
\begin{jinput} Q*[1, 0, 0] \end{jinput}\begin{joutput} \begin{verbatim}
4-element Array{Float64,1}:
-0.5
-0.5
-0.5
-0.5
\end{verbatim}
\end{joutput}
\subsubsection{User-extensible wrappers for BLAS and LAPACK}
The tradition in linear algebra is to leave the coding to LAPACK
writers, and call LAPACK for speed and accuracy. This has worked
fairly well, but Julia exposes considerable opportunities for
improvement.
Firstly, all of LAPACK is available to Julia users, not just the most
common functions. All LAPACK wrappers are implemented fully in Julia
code, using
\verb+ccall+\footnote{\url{http://docs.julialang.org/en/latest/manual/calling-c-and-fortran-code/}},
which does not require a C compiler, and can be called directly from
the interactive Julia prompt. This makes it easy for users to
contribute LAPACK functionality, and that is how Julia's LAPACK
functionality has grown bit by bit. Wrappers for missing LAPACK
functionality can also be added by users in their own code.
Consider the following example that implements the Cholesky
factorization by calling LAPACK's \code{xPOTRF}. It uses Julia's
metaprogramming facilities to generate four functions, each
corresponding to the \code{xPOTRF} functions for \code{Float32}, \code{Float64}, \code{Complex64},
and \code{Complex128} types. The actual call to the Fortran functions is
wrapped in {\tt ccall}. Finally, the {\tt chol} function provides a
user-accessible way to compute the factorization. It is easy to modify
the template below for any LAPACK call.
\begin{jinput}
\sh {\# Generate calls to LAPACK's Cholesky for double, single, etc.} \\
\sh {\# xPOTRF refers to POsitive definite TRiangular Factor} \\
\sh{\# LAPACK signature: SUBROUTINE DPOTRF( UPLO, N, A, LDA, INFO ) } \\
\sh{\# LAPACK documentation:}
\vspace{-0.05in}
\begin{verbatim}
* UPLO (input) CHARACTER*1
* = 'U': Upper triangle of A is stored;
* = 'L': Lower triangle of A is stored.
* N (input) INTEGER
* The order of the matrix A. N >= 0.
* A (input/output) DOUBLE PRECISION array, dimension (LDA,N)
* On entry, the symmetric matrix A. If UPLO = 'U', the leading
* N-by-N upper triangular part of A contains the upper
* triangular part of the matrix A, and the strictly lower
* triangular part of A is not referenced. If UPLO = 'L', the
* leading N-by-N lower triangular part of A contains the lower
* triangular part of the matrix A, and the strictly upper
* triangular part of A is not referenced.
* On exit, if INFO = 0, the factor U or L from the Cholesky
* factorization A = U**T*U or A = L*L**T.
* LDA (input) INTEGER
* The leading dimension of the array A. LDA >= max(1,N).
* INFO (output) INTEGER
* = 0: successful exit
* < 0: if INFO = -i, the i-th argument had an illegal value
* > 0: if INFO = i, the leading minor of order i is not
* positive definite, and the factorization could not be
* completed.
\end{verbatim} \\
\sh {\# Generate Julia method potrf!} \\
\vspace{-.1in}
\verb+ for (potrf, elty) in + \sh{\# Run through 4 element types}
\vspace{.05in}
\begin{verbatim}
((:dpotrf_,:Float64),
(:spotrf_,:Float32),
(:zpotrf_,:Complex128),
(:cpotrf_,:Complex64))
\end{verbatim}
\vspace{-.2in}
\sh {\# Begin function potrf!}
\vspace{-.07in}
\begin{verbatim}
@eval begin
function potrf!(uplo::Char, A::StridedMatrix{$elty})
lda = max(1,stride(A,2))
lda==0 && return A, 0
info = Array(Int, 1)
\end{verbatim}
\vspace{-.2in}
\sh {\# Call to LAPACK:ccall(LAPACKroutine,Void,PointerTypes,JuliaVariables)}
\vspace{-.07in}
\begin{verbatim}
ccall(($(string(potrf)),:liblapack), Void,
(Ptr{Char}, Ptr{Int}, Ptr{$elty}, Ptr{Int}, Ptr{Int}),
\end{verbatim}
\vspace{-.07in}
\verb+ +\sh{\ \ \ \&uplo, \ \ \ \&size(A,1), \ \ \ A, \ \ \ \ \ \ \ \&lda, \ \ info})
\vspace{-.07in}
\begin{verbatim}
return A, info[1]
end
end
end
chol(A::Matrix) = potrf!('U', copy(A))
\end{verbatim}
\end{jinput}
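A hedged usage sketch of the wrapper just defined (assuming \verb+liblapack+ is available; per the LAPACK documentation above, the strictly lower triangle of the returned array is simply not referenced):
\begin{jinput}
\begin{verbatim}
A = [4.0 2.0;
     2.0 3.0]
U, info = chol(A)   # info == 0 on success
U                   # upper triangle holds 2.0, 1.0, and sqrt(2.0)
\end{verbatim}
\end{jinput}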
\subsection{High Performance Polynomials and Special Functions with Macros}
\label{subsection:macros}
Julia has a macro system that provides easy custom code generation,
bringing a level of performance that is otherwise difficult to
achieve.
A macro is a function that runs at parse-time, taking parsed symbolic expressions as input and returning transformed symbolic expressions as output, which are inserted into the code for later compilation.
For example, a library developer implemented an
\verb+@evalpoly+ macro that uses Horner's rule to evaluate polynomials
efficiently.
Consider
\begin{jinput}
@evalpoly(10,3,4,5,6)
\end{jinput}
\noindent which returns 6543 (the polynomial $3+4x+5x^2+6x^3$, evaluated at $10$ with Horner's rule).
Julia allows us to see the inline generated code with the command
\begin{jinput}
macroexpand(:@evalpoly(10,3,4,5,6))
\end{jinput}
We reproduce the key lines below
\begin{joutput}
\#471\#t = 10 \sh{\# Store 10 into a variable named \#471\#t } \\
Base.Math.+(3,Base.Math.*(\#471\#t,Base.Math.+(4,Base.Math.*
(\#471\#t,Base.Math.+(5,Base.Math.*(\#471\#t,6))))
))
\end{joutput}
This code-generating macro only needs to produce the correct symbolic
structure, and Julia's compiler handles the remaining details of fast
native code generation. Since polynomial evaluation is so important
for numerical library software it is critical that users can evaluate
polynomials as fast as possible.
The overhead of implementing an explicit loop, accessing coefficients in an array, and possibly a subroutine call (if it is not inlined), is substantial compared to just inlining the whole polynomial evaluation.
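To make the idea concrete, here is a minimal sketch of how such a macro might be written. The name \verb+@horner_sketch+ is hypothetical; this is not the library's implementation of \verb+@evalpoly+, only an illustration of the parse-time expansion:
\begin{jinput}
\begin{verbatim}
# Build a nested Horner-rule expression at parse time (hypothetical sketch)
macro horner_sketch(x, coeffs...)
    ex = esc(coeffs[end])
    for i = length(coeffs)-1:-1:1
        ex = :( $(esc(coeffs[i])) + t * $ex )   # nest one Horner step
    end
    :( let t = $(esc(x)); $ex; end )            # evaluate x only once
end
@horner_sketch(10, 3, 4, 5, 6)   # 6543, with no loop at run time
\end{verbatim}
\end{jinput}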
Steven Johnson reports in his EuroSciPy notebook\footnote{\url{https://github.com/stevengj/Julia-EuroSciPy14/blob/master/Metaprogramming.ipynb}}
\begin{quotation}
This is precisely how erfinv is implemented in Julia (in single and double precision), and is 3
to 4 times faster than the compiled (Fortran?) code in Matlab, and 2 to 3 times faster than the compiled (Fortran Cephes) code used in SciPy.
The difference (at least in Cephes) seems to be mainly that they have explicit arrays of polynomial coefficients and call a subroutine for Horner's rule, versus inlining it via a macro.
\end{quotation}
Johnson also used the same trick in his implementation of the
digamma special function for complex arguments\footnote{\url{https://github.com/JuliaLang/julia/issues/7033}} following an idea of Knuth:
\begin{quote}
As described in Knuth TAOCP vol.\ 2, sec.\ 4.6.4, there is actually an
algorithm even better than Horner's rule for evaluating polynomials
p(z) at complex arguments (but with real coefficients): you can save
almost a factor of two for high degrees. It is so complicated that
it is basically only usable via code generation, so it would be
especially nice to modify the @horner macro to switch to this for
complex arguments.
\end{quote}
No sooner was this proposed than the macro was rewritten to allow for
this case, giving a factor of four performance improvement for all real
polynomials evaluated at complex arguments.
\subsection{Easy and flexible parallelism}
\label{sec:easypar}
Parallel computing remains an important research topic in numerical
computing. Parallel computing has yet to reach the level of richness and
interactivity required for innovation that has been achieved with
sequential tools. The issues discussed in Section
\ref{sec:humancomputer} on the balance between the human and the
computer become more pronounced in the parallel setting. Part of the
problem is that parallel computing means different things to different
people:
\begin{enumerate}
\item At the most basic level, one wants instruction level parallelism
within a CPU, and expects the compiler to discover such parallelism
in the code. In Julia, this can be achieved explicitly with the use
of the {\tt @simd} primitive. Beyond that,
\item In order to utilize multicore and manycore CPUs on the same node, one
wants some kind of multi-threading. Currently, we have experimental
multi-threading support in Julia, and this will be the topic of a
further paper. Julia currently does provide a {\tt SharedArray} data
structure where the same array in memory can be operated on by
multiple different Julia processes on the same node.
\item Then, there is distributed memory,
often considered the most difficult kind of parallelism.
This can mean running Julia
on anything between half a dozen to thousands of nodes, each with
multicore CPUs.
\end{enumerate}
In the fullness of time, there may be a unified programming model that
addresses this hierarchical nature of parallelism at different levels,
across different memory hierarchies.
Our experience with Star-P \cite{starpright} taught us a
valuable lesson.
Star-P parallelism \cite{starpstart,starpug} included global
dense, sparse, and cell arrays that were distributed on parallel shared or distributed memory computers. Before the evolution of the cloud as we know it today,
the user used a familiar front end (usually Matlab) as the client on a laptop or desktop, and connected seamlessly to a server (usually a large distributed computer).
Blockbuster functions from sparse and dense linear algebra, parallel FFTs,
parallel sorting, and many others were easily available and composable for the user.
In these cases Star-P called Fortran/MPI or C/MPI.
Star-P also allowed a kind of parallel for loop that worked on rows, planes or hyperplanes
of an array. In these cases Star-P used copies of the client language on the backend,
usually Matlab, Octave, Python, or R.
Our experience taught us that while we were able to get a useful
parallel computing system this way, bolting parallelism onto an
existing language that was not designed for performance or parallelism
is difficult at best, and impossible at worst. One of our (not so
secret) motivations to build Julia was to have the right language for
parallel computing.
Julia provides many facilities for parallelism, which are described in
detail in the Julia
manual\footnote{\url{http://docs.julialang.org/en/latest/manual/parallel-computing/}}. Distributed
memory programming in Julia is built on two primitives: {\it remote
calls} that execute a function on a remote processor and {\it remote
references} that are returned by the remote processor to the
caller. These primitives are implemented completely within Julia. On
top of these, Julia provides a distributed array data structure, a
{\tt pmap} implementation, and a way to parallelize independent
iterations of a loop with the {\tt @parallel} macro - all of which can
parallelize code in distributed memory. These ideas are exploratory in
nature, and will certainly evolve. We only discuss them here to
emphasize that well-designed programming language abstractions and
primitives allow one to express and implement parallelism completely
within the language, and explore a number of different parallel
programming models with ease. We hope to have a detailed discussion on
Julia's approach to parallelism in a future paper.
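As a minimal sketch of these primitives (worker ids and values here are hypothetical, and the call signatures follow the Julia syntax of the era this paper describes):
\begin{jinput}
\begin{verbatim}
addprocs(2)                 # start two worker processes
r = remotecall(2, rand, 3)  # remote call: run rand(3) on worker 2
fetch(r)                    # remote reference: wait for and fetch the result
pmap(svdvals, [rand(100,100) for i=1:10])  # parallel map over ten matrices
\end{verbatim}
\end{jinput}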
We proceed with one example that demonstrates {\tt @parallel} at work,
and how one can impulsively grab a large number of processors and
explore their problem space quickly.
\begin{jinput}
\verb& &\sh{ @everywhere} \verb&begin & \sh {\# define on every processor} \\
\verb& function stochastic(&$\beta$\verb&=2,n=200) &\\
\verb& h=n^-(1/3) & \\
\verb& x=0:h:10 & \\
\verb& N=length(x) & \\
\verb& d=(-2/h^2 .-x) + 2sqrt&(h*$\beta$\verb&)*randn(N)& \sh{ \# diagonal} \\
\verb& e=ones(N-1)/h^2 & \sh {\# subdiagonal} \\
\verb& eigvals(SymTridiagonal(d,e))[N] & \sh { \# smallest negative eigenvalue} \\
\verb&end& \\
\verb&end&
\vspace{-.1in}
\end{jinput}
\begin{jinput}
\vspace{-.1in}
\begin{verbatim}
t = 10000
\end{verbatim}
\vspace{-0.07in}
\verb&for &$\beta$\verb&=[1,2,4,10,20]& \\
\vspace{-0.07in}
\verb& z = hist([stochastic(&$\beta$\verb&) for i=1:t], -4:.01:1)[2]&
\begin{verbatim}
plot(midpoints(-4:.01:1),z/sum(z)/.01)
end
\end{verbatim}
\includegraphics[width=2.8in]{ScreenShot1.png}
\end{jinput}
Suppose we wish to perform a complicated histogram
in parallel. We use an example from Random Matrix Theory,
(but it could easily have been from finance),
the computation of the scaled largest eigenvalue in magnitude
of the so-called stochastic Airy operator \cite{edelmansutton}
$$\frac{d^2}{dx^2}-x + \frac{1}{2\sqrt{\beta}} dW.$$
This is just the usual finite difference discretization of
$\frac{d^2}{dx^2}-x$ with a ``noisy'' diagonal.
We illustrate an example of the famous
Tracy-Widom law being simulated with Monte Carlo experiments for
different values of the inverse temperature parameter $\beta$. The histogram computed on 1 processor is fuzzy and
unfocused, as compared to the same simulation on 1024 processors,
which is sharp and focused, and runs in exactly the same wall clock time as the sequential run. It is this ability to perform scientific computation at the speed of thought, conveniently and without the traditional fuss associated with parallel computing, that we believe will make a new era of scientific discovery possible.
\begin{jinput}
\vspace{-.1in}
\sh{\# Readily adding 1024 processors sharpens the Monte Carlo simulation in}\\ \sh{\# the same time} \\
addprocs(1024)
\vspace{-.1in}
\end{jinput}
\begin{jinput}
\vspace{-.1in}
\begin{verbatim}
t = 10000
\end{verbatim}
\vspace{-0.07in}
\verb&for &$\beta$\verb&=[1,2,4,10,20]& \\
\hspace*{.07in} \sh{z = {@parallel (+)} for p=1:nprocs()} \\
\vspace{-0.07in}
\verb& hist([stochastic(&$\beta$\verb&) for i=1:t], -4:.01:1)[2]& \\[.06in]
\vspace{0.07in}
\hspace*{.07in} \sh {end}
\vspace{-0.12in}
\begin{verbatim}
plot(midpoints(-4:.01:1),z/sum(z)/.01)
end
\end{verbatim}
\includegraphics[width=2.8in]{ScreenShot2.png}
\end{jinput}
\subsection{Performance Recap}
In the early days of high level numerical computing languages, the thinking was that
the performance of the high level language did not matter so long
as most of the time was spent inside the numerical libraries.
These libraries consisted of blockbuster algorithms that would be highly tuned,
making efficient use of computer memory, cache, and low level instructions.
What the world learned was that
only a few codes were spending a majority of their time in the blockbusters.
Real codes were being caught by interpreter overheads, stemming from processing
more aspects of a program at run time than are strictly necessary.
As we explored in Section \ref{sec:types}, one of the hindrances to completing this analysis is
the availability of type information. Programming language design thus becomes an exercise in
balancing incentives to the programmer to provide type
information and the ability of the computer to infer type information. Vectorization
is one such incentive system. Existing numerical computing languages would have
us believe that this is the only such system, or that, even if there were others, it
is somehow the best one.
Vectorization at the software level can be elegant for some problems.
There are many matrix computation problems that look beautiful vectorized.
These programs should be vectorized. Other programs
require heroics and skill to vectorize sometimes producing unreadable code all in the name
of performance. These are the ones that we object to vectorizing. Still other programs
cannot be vectorized very well even with heroics.
The Julia message is to vectorize when it is natural, producing nice code. Do not vectorize
in the name of speed.
Some users believe that vectorization is required to make use of special hardware
capabilities such as SIMD instructions, multithreading, GPU units,
and other forms of parallelism.
This is not strictly true, as compilers are increasingly able to
apply these performance features to explicit loops.
The Julia message remains: vectorize when natural, when you feel it is right.
\section{Conclusion and Acknowledgments}
We built Julia to meet our needs for numerical computing, and it
turns out that many others wanted exactly the same thing. At the
time of writing, not a day goes by when we don't learn that someone
else has picked up Julia at universities and companies around the
world, in fields as diverse as engineering, mathematics, physical and
social sciences, finance, biotech, and many others. More than just a
language, Julia has become a place for programmers, physical
scientists, social scientists, computational scientists,
mathematicians, and others to pool their collective knowledge in the
form of online discussions and in the form of code. Numerical
computing is maturing and it is exciting to watch!
Julia would not have been possible without the enthusiasm and
contributions of the Julia
community\footnote{https://github.com/JuliaLang/julia/graphs/contributors}.
We thank Michael La Croix for his beautiful Julia display macros.
We are indebted
at MIT to Jeremy Kepner, Chris Hill, Saman Amarasinghe,
Charles Leiserson, Steven Johnson and Gil Strang for their collegial
support which not only allowed for the possibility of an academic
research project to update technical computing, but made it more fun too.
The
authors gratefully acknowledge financial support from the MIT Deshpande center
for numerical innovation, the Intel Technology Science Center for Big
Data, the DARPA Xdata program, the Singapore MIT Alliance, NSF Awards
CCF-0832997 and DMS-1016125, VMWare Research, a DOE grant with
Dr. Andrew Gelman of Columbia University for petascale hierarchical
modeling, grants from Aramco oil thanks to Ali Dogru and Shell oil thanks
to Alon Arad,
and a Citibank grant for High Performance Banking Data Analysis, and the Gordon and Betty Moore foundation.
\section{Introduction}
Observations of water vapor in the interstellar medium (ISM) by the
Infrared Space Observatory \citep{vD1999} and
the Submillimeter Wave Astronomy Satellite (SWAS) \citep{Bergin2000}
show general agreement with chemical models for
warm ($> 300$ K) conditions in the ISM \citep{Melnick2000,Neufeld2000}.
However, in cold conditions, most of the water is frozen onto dust
grains \citep{Viti2001,vanDishoeck2013},
and the production of water occurs mainly on the
grain surfaces. In order to test chemical models that
include grain-surface chemistry we used the Heterodyne
Instrument for the Far-Infrared (HIFI) \citep{deGraauw2010} on
the Herschel
Space Observatory to observe the H$_2$O ($1_{10}-1_{01}$) line
in the cold, dense
cloud L1544 \citep{Caselli2010, Caselli2012}.
The first of these two Herschel observations was
made with the wide-band spectrometer (WBS) and
detected water vapor in absorption against the weak
continuum radiation of dust in the cloud. Follow-up observations with higher spectral
resolution and sensitivity, made with the
high resolution spectrometer (HRS), confirmed the absorption
and detected a blue-shifted emission line that was
predicted by theoretical modeling \citep{Caselli2010}, but
too narrow to be seen by the WBS in the first observation.
With the better constraints provided by the second observation,
we improved on the chemical and
radiative transfer modeling of our previous papers.
We modified the
radiative transfer code MOLLIE to calculate the line emission
in the approximation that the molecule is sub-critically
excited. This assumes that the collision rate is so
slow that every excitation leads immediately to a radiative
de-excitation and the production of one photon which
escapes the cloud, possibly after many absorptions and
re-emissions, before another excitation. The
emission behaves as if the line were optically thin with
the line brightness proportional to the column density. This
approximation can be correct even at very high
optical depth as long as the excitation rate is slow enough,
C $<$ A/$\tau$, where C is the collision rate, A is the spontaneous
emission rate and $\tau$ the optical depth \citep{Linke1977}.
\citet{Caselli2012} presented the observations and the results
of this modeling.
In this paper, we discuss in detail the theory
behind the modeling.
A comparison of the spectral line
observation with theory requires three models.
First, we require
a hydrodynamical model to describe the density, velocity,
and temperature across
the cloud. We use a model
of slow contraction in
quasi-static unstable equilibrium
that we developed in our previous
research
\citep{KF05,KC10}.
Second, we require
a chemical model to predict the molecular abundance
across the varying conditions in the cloud.
Following the philosophy for simplified chemical networks in
\citet{KC08} or \citet{BethellBergin2009}, we extract
from a general chemical model for photo-dissociation
regions \citep{Hollenbach2009} a subset of reactions
expected to be active in cold conditions, principally
grain-surface reactions as well as freeze-out and
photodissociation.
Third, we require
a radiative transfer model to
generate a simulated molecular line.
We modify our non-LTE radiative transfer code
MOLLIE to use the escape
probability approximation. This allows better
control of the solution in extreme optical depth.
The three models are described in more detail in three
sections below. The relevant equations are included in
the appendices.
\section{The three models}
\subsection{The cold, dense clouds}
Given their importance as the nurseries of star formation, the small ($< 0.5$ pc),
cold ($<15$ K), dense ($n>10^3$ cm$^{-3}$)
clouds in low-mass
($< 2$ M$_\odot$) star-forming regions
such as Taurus are widely studied \citep{BerginTafalla2007,diFrancesco2007}.
Observations show a unique simplicity.
They contain no internal sources of heat, stars or protostars.
Their internal turbulence is subsonic, barely broadening
their molecular line widths above thermal \citep{MyersBenson1983}.
With most of their internal energy in
simple thermal energy, and the weak turbulence
just a perturbation \citep{Keto2006,Broderick2010},
the observed density structure
approximates the solution of the
Lane-Emden equation for hydrostatic equilibrium
\citep{Lada2003,Kandori2005}.
Correspondingly, most are nearly
spherical with an average aspect ratio
of about 1.5 \citep{Jijina1999}.
They are heated from the outside both
by cosmic rays and by the UV background of starlight
and are cooled from the inside by long wavelength molecular line
and dust continuum radiation \citep{Evans2001}.
Because of their simplicity,
we understand the structure and dynamics of these small, cold, dense
clouds
better than any other molecular clouds in the
interstellar medium.
They are therefore uniquely
useful as a laboratory for testing hypotheses
of more complex phenomena such as the chemistry of
molecular gas.
\subsection{Structure and dynamics}
Our physical model for cold, dense clouds is computed with
a spherical Lagrangian
hydrodynamic code with the gas temperature set by
radiative equilibrium between heating by external
starlight and cosmic rays and cooling by molecular line
and dust radiation. The theory is discussed in \citet{KF05} and
\citet{KC08}.
In our previous research \citep{KC10}, we
generated a dynamical model for the particular case of L1544
by comparing observations and
snapshots in time out of a theoretical model for the
contraction toward star formation.
We began the hydrodynamic evolution with a 10 M$_\odot$ Bonnor-Ebert (BE)
sphere with
a central density of $10^4$ cm$^{-3}$ in unstable dynamical equilibrium
and in radiative equilibrium with an external UV field of one Habing flux.
In the early stages of contraction,
the cloud evolves most rapidly in the center.
As long as the velocities remain subsonic, the evolving
density profile closely follows
a sequence of spherical equilibria or BE spheres with increasing
central densities.
We compared modeled CO and N$_2$H$^+$ spectra during the
contraction against those observed in L1544 and
determined that
the stage of contraction that best matches the data has a central
density of $1\times 10^7$ cm$^{-3}$ and a maximum inward
velocity approximately equal to the sound speed \citep{KC10}.
Figure \ref{fig:StructureLVG} shows the density and velocity
at this time
along with the H$_2$O abundance and temperature.
In the present investigation we modify our numerical hydrodynamic
code to include cooling by atomic oxygen.
This improves the accuracy of the
calculated gas temperature in the photodissociation region
outside the molecular cloud. The equations
governing the cooling by the fine structure lines of atomic oxygen
are presented in the appendix.
\subsection{Chemistry of H$_2$O in cold conditions}
The cold conditions in L1544 allow us to simplify the
chemical model for gas phase water.
We include the four oxygen-bearing species most abundant in cold, dark clouds,
O, OH, H$_2$O gas, and H$_2$O ice.
Even though all three gas phase molecules may freeze onto the grains,
we consider only one species of ice because
the formation
of water from OH and the formation of OH from O are rapid enough on
the grain surface that most of the ice is in the form of H$_2$O.
To provide a back reaction for the freeze-out of atomic oxygen and
preserve detailed balance, we arbitrarily
assign a desorption rate for atomic O equal to that of H$_2$O even though
the production of atomic O from H$_2$O ice is not indicated.
Our simplified model
is shown in
figure \ref{fig:oxygen_chemistry}. The resulting abundances, calculated
as equilibria between creation and destruction, are shown
in figures \ref{fig:oxygenplot} and \ref{fig:oxygenplot-center}.
Figure \ref{fig:oxygenplot}
shows the abundances near the photodissociation region (PDR) boundary
as a function of the visual extinction, $A_V$.
Figure \ref{fig:oxygenplot-center} shows the abundances
against the log of the radius to emphasize the center.
Gas phase water is created by UV photodesorption of water ice which
also creates gas phase OH in a ratio H$_2$O/OH = 2 \citep{Hollenbach2009}.
In the outer part of the cloud,
the UV radiation derives from the background field of external starlight.
The inward attenuation of the UV flux is
modeled from the visual extinction as $\exp{(-1.8 {\rm A_V})}$.
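As an illustration, at $A_V = 3$ this attenuation factor is already
$\exp(-5.4) \approx 5\times 10^{-3}$.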
In the interior where all the external UV radiation has been attenuated,
the only UV radiation is generated by cosmic ray strikes on H$_2$.
In our previous paper \citep{Caselli2012}, we set this
secondary UV radiation to $1\times 10^{-3}$ times the
Habing flux (G$_0$=1) \citep{Hollenbach2009}. In our current model,
we use a lower level, $1\times 10^{-4}$, that is more consistent
with estimated rates \citep{Shen2004}. The difference in abundance for
the two rates is shown in figure \ref{fig:oxygenplot-center}.
H$_2$O and OH are removed from the gas phase by UV photodissociation
and by freezing onto dust grains.
To preserve detailed balance with the photodissociation
we include the back reactions, the gas phase production of H$_2$O,
O + H$_2$ $\rightarrow$ OH and OH + H$_2$ $\rightarrow$ H$_2$O,
even though these are not expected to be important in cold gas.
Removal of gas phase water by freeze-out is
important in the interior where the higher gas density increases
the dust-gas collision rate, and hence the freeze-out rate.
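Schematically, the equilibrium gas-phase abundance of H$_2$O then follows from
balancing these creation and destruction channels,
\begin{equation}
n({\rm H_2O}) \left( R_{\rm photodiss} + R_{\rm freeze} \right) \simeq
n({\rm H_2O\,ice}) \left( R_{\rm UV\,des} + R_{\rm CR\,des} \right),
\end{equation}
where the $R$ denote, in schematic notation of our own, the per-molecule rates
of photodissociation, freeze-out, UV photodesorption, and cosmic-ray-induced
desorption; the detailed rate expressions are those listed in the appendix
(\S \ref{appendixChemistry}).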
We assume that
the gas-phase ion-neutral reactions that lead to the production of water are
less important
at cold temperatures ($< 15$ K)
than the reactions that produce water on the
surfaces of ice-coated dust grains.
Thus, we do not include
gas-phase ion-neutral reactions in the model.
This is valid if the oxygen is quickly removed from the gas-phase by freeze-out
and efficiently converted into water
ice on the grain surface.
By leaving out CO, we avoid coupling in
the carbon chemistry. Although we already have a simple model for the
carbon chemistry \citep{KC08,KC10}, we prefer to keep our oxygen model
as simple as possible. This could create an error of a factor of a few
in the abundance of the oxygen species. Carbon is one-third
as abundant as oxygen, and in certain conditions CO is the dominant
carbon molecule. Therefore as much as one-third of
the oxygen could potentially be bound in CO. Ignoring O$_2$ is less of a problem.
Created primarily by
the reaction of OH with atomic oxygen, O$_2$ tends to closely follow the
abundance of OH. Since the amount of oxygen in OH should be 1\% or less
(figure \ref{fig:oxygenplot}),
the abundance of O$_2$ does not affect
the abundances of the other oxygen species, O, OH, and H$_2$O.
Figures \ref{fig:oxygenplot} and \ref{fig:oxygenplot-center}
compare the abundances from our simplified network with those from
the more complex network of \citet{Hollenbach2009} (courtesy of E. Bergin)
that
includes gas-phase
neutral-neutral and ion-neutral reactions.
In this calculation, we hold the cloud fixed at this stage of its dynamical
evolution and allow the chemistry to evolve for 10 Myr
from the assumed starting conditions in which all
species are atomic and neutral.
Both models generally agree. The gas-phase water
peaks in a region near the
boundary. Here there is enough external UV to rapidly desorb the
water from the ice, but not so much as to dissociate all the molecules.
Further inward, the abundance of water falls as
the gas density and the dust-gas collision
rate (freeze-out rate) both increase while the photodesorption rate decreases with the
attenuation of the UV radiation.
At high $A_V$, the water is desorbed only by cosmic rays and
the UV radiation they produce in collisions with H$_2$.
The general agreement between the two models suggests that the simple model
includes the processes that are significant in the cold environment.
The rate equations for the
processes selected for the simplified model
are listed in
the appendix (\S \ref{appendixChemistry}).
Our simple model calculates equilibrium abundances. We can estimate the
equilibrium time scale from the combined rates for creation and
destruction
\citep{Caselli2002},
\begin{equation}
t = \frac {t_{\rm creation}\,t_{\rm destruction}} {t_{\rm creation} + t_{\rm destruction}}
\label{eq:time}
\end{equation}
where the time scales are the inverses of the rates.
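For example, with purely illustrative values $t_{\rm creation} = 0.1$ Myr and
$t_{\rm destruction} = 0.01$ Myr, equation \ref{eq:time} gives
$t \approx 0.009$ Myr; the equilibration time is controlled by the faster of
the two processes.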
Figure \ref{fig:time} shows the equilibrium time scales
for each species as a function of radius. These may be compared with
the time for the hydrodynamic evolution.
A cloud with a mass of 10 M$_\odot$ and a central
density of $2\times 10^6$ cm$^{-3}$ has a free-fall time of
$t_{ff}=0.03$ Myr,
computed with the central density in the standard equation, whereas the sound
crossing time is about 2 Myr \citep{KC10}.
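For reference, this free-fall time follows from the standard expression,
\begin{equation}
t_{ff} = \sqrt{\frac{3\pi}{32 G \rho}},
\end{equation}
which, for a central density of $2\times 10^6$ cm$^{-3}$ and an assumed mean
molecular mass of $2.33\,m_{\rm H}$ per particle, gives
$t_{ff} \approx 0.02$--$0.03$ Myr.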
Because the chemical time scales are
all shorter than the dynamical time scales,
the chemistry reaches equilibrium before the conditions (density, temperature,
and UV flux) change.
In this estimate of the time scale for chemical evolution, we are asking
whether the oxygen chemistry in the contracting molecular cloud can maintain
equilibrium as the cloud evolves dynamically. This is different from the
question of how long it would take for the chemistry to equilibrate
if the gas were held at molecular conditions but evolving from an atomic
state.
\begin{figure}
$
\begin{array}{cc}
\includegraphics[width=3.25in]{StructureLVG}
\end{array}
$
\caption{
Model of a slowly contracting cloud in quasi-static
unstable equilibrium. The log of the density profile in
cm$^{-3}$ is shown in blue (dotted line), the fractional
abundance of H$_2$O with respect
to H$_2$ is shown in green (dashed line),
the velocity as the black (solid) line, and
the gas temperature as the red (dot-dashed) line.
The model spectrum is shown in figure \ref{fig:SpectrumLVG}.
}
\label{fig:StructureLVG}
\end{figure}
\begin{figure}
\includegraphics[width=3.25in]{pathways3}
\caption{Simplified model of the oxygen chemistry in a cold cloud.
The model includes 3 gas-phase species and H$_2$O ice.
The significant reactions at cold temperatures (T $<300$ K) are the
freeze-out of molecules colliding with dust grains, cosmic ray and
photodesorption of the ice, and photodissociation of the gas phase molecules.}
\label{fig:oxygen_chemistry}
\end{figure}
\begin{figure}
\includegraphics[trim=0.00in 0.0in 0.0in 0.0in, clip, width=3.25in]{abundance-edge3}
\caption{Abundances of oxygen species as a function of $A_V$
for the model of L1544 based on a
slowly contracting Bonnor-Ebert sphere. The figure emphasizes the variation
of abundances in the PDR at the edge.
The figure compares
the abundances for the physical conditions in figure \ref{fig:StructureLVG} from
two models:
\citet{Hollenbach2009} (dashed lines) (courtesy E. Bergin);
and our simplified model (figure \ref{fig:oxygen_chemistry}).
Figure \ref{fig:oxygenplot-center} shows abundances from the same
models but plotted against log radius to emphasize the variations
in the center.
}
\label{fig:oxygenplot}
\end{figure}
\begin{figure}
\includegraphics[trim=0.00in 0.0in 0.0in 0.0in, clip, width=3.25in]{abundance-center3}
\caption{Abundances of oxygen species, same as figure \ref{fig:oxygenplot},
except plotted against the log of the radius rather than visual extinction.
This figure emphasizes the variations in abundance in the center.
The figure shows the H$_2$O abundance calculated with our simplified model
using two values for the cosmic ray-induced UV photodesorption
(equation \ref{eq:CRUV}). The solid green line shows the abundance calculated
with factor $\alpha = 10^{-4}$. The dotted line shows the abundance calculated
with factor $\alpha = 10^{-3}$. The abundance calculated with the
\citet{Hollenbach2009} model assumes $\alpha = 10^{-3}$ (dashed line).
}
\label{fig:oxygenplot-center}
\end{figure}
\begin{figure}
\includegraphics[trim=0.00in 0.0in 0.0in 0.0in, clip, width=3.25in]{time2}
\caption{
Time scales for chemical equilibrium. From top to bottom, the three lines
show the equilibration time scales for H$_2$O, OH, and O calculated from
equation \ref{eq:time} and the reaction rates in the appendix.
}
\label{fig:time}
\end{figure}
\subsection{Radiative Transfer} \label{RadiativeTransfer}
We use our radiative transfer code MOLLIE \citep{K90,KR10}
to compute model H$_2$O spectra to compare with the Herschel observation.
Here we encounter an interesting question. The large Einstein A
coefficient of the H$_2$O ($1_{10}-1_{01}$) line results in optical depths across
the cloud of several hundred to a thousand depending on excitation.
High optical depths generally result in
radiative trapping and enhanced excitation of the line. In this case,
the line brightness could have a non-linear relationship to the
column density. For example, the line could be saturated.
On the other hand, the large Einstein A means that the critical
density for collisional de-excitation is quite high ($1\times 10^8$ cm$^{-3}$)
at temperatures $< 15$ K, higher than
the maximum density ($1\times 10^7$ cm$^{-3}$) in our
dynamical model of L1544. This suggests that the line emission
should be proportional to the column density.
This question was addressed by \citet{Linke1977} who proposed a solution
using the escape probability
approximation \citep{Kalkofen1984}.
They assumed a two level molecule, equal statistical
weights in both levels
, and the mean radiation field, $\bar{J}$,
set by the escape probability, $\beta$,
\begin{equation} \label{eq:barJ}
\bar{J} = J_0\beta + (1-\beta)S
\end{equation}
where $J_0$ is the continuum from dust and the cosmic microwave
background, $S$ is the line source function, and
\begin{equation} \label{eq:tau}
\beta = (1-\exp{(-\tau)}) / \tau .
\end{equation}
After a satisfying bout with three pages of elementary algebra and
some further minor approximations, they show that as long as $C< A/\tau$,
the line brightness is linearly dependent on the column density, no matter
whether the optical depth is low or high, provided that
the line is not too bright.
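As a schematic version of that argument (a two-level balance of our own,
neglecting the background continuum and stimulated emission), statistical
equilibrium with the escape probability reads
\begin{equation}
n_l C_{lu} = n_u \left( C_{ul} + A \beta \right) \simeq n_u A \beta
\quad {\rm for} \quad C_{ul} \ll A\beta \simeq A/\tau ,
\end{equation}
so the rate of escaping line photons per unit volume, $n_u A \beta$, equals the
collisional excitation rate $n_l C_{lu}$. Integrated along the line of sight,
the emergent line luminosity then scales with the column density of molecules
in the lower level (for a given local collision rate), no matter how large
$\tau$ becomes.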
To determine whether the water emission line brightness
in L1544 has a non-linear
or linear dependence, we numerically
solve the equations for the two-level molecule with no approximations
other than the escape probability and plot the result. Figures
\ref{fig:growth-low3} and \ref{fig:growth-high3} show the dependence of the
antenna temperature on the density for low and high densities respectively.
Since the column density, the optical depth, and the ratio C/A are
all linearly dependent on the density, any of these may be used
on the abscissa. The latter two are shown just above the axis.
Figure \ref{fig:growth-low3}
shows that the antenna temperature of the water line emission
is linearly dependent on the
column density even at high density or high optical depth.
Figure \ref{fig:growth-high3}
shows that the linear relation breaks down when C/A is no longer small.
The densities in both figures show that the water line emission
in L1544 is in the linear regime.
\begin{figure}
$
\begin{array}{cc}
\includegraphics[width=3.25in]{growth-low4}
\end{array}
$
\caption{
The dependence of the observed antenna
temperature of the H$_2$O ($1_{10} - 1_{01}$) line
on the H$_2$ number density (cm$^{-3}$).
Because the optical depth and the
ratio of the collision rate to spontaneous emission rate (C/A) are
both linearly dependent on the density, the abscissa can be labeled in
these units as well. Both are shown above the axis. The antenna
temperature is linearly dependent on the density or column density even
at very high optical depth as long as the ratio C/A is small.
}
\label{fig:growth-low3}
\end{figure}
\begin{figure}
$
\begin{array}{cc}
\includegraphics[width=3.25in]{growth-high4}
\end{array}
$
\caption{
The dependence of the observed antenna temperature of the H$_2$O line
($1_{10} - 1_{01}$) on the number density.
Same as figure \ref{fig:growth-low3} but at higher densities where the
ratio C/A is no longer small and the
dependence of the antenna temperature on the density is no longer linear.
}
\label{fig:growth-high3}
\end{figure}
For an intuitive explanation, suppose that a photon is absorbed on average
once per unit optical depth. A photon may be absorbed and another re-emitted
many times in escaping a cloud of high optical depth. The time scale for
each de-excitation is $A^{-1}$. Therefore, the time that it takes a photon
to escape the cloud is $\tau /A$.
As long as this time is shorter than the collisional excitation time ($1/C$),
then on average, an emitted photon will escape the cloud before another
photon is created by the next collisional excitation event and
radiative de-excitation. In this case, the line remains subcritically excited.
The molecules are in the lower state almost all the time. This is the same
condition that would prevail if the cloud were optically thin
($\bar{J}=0$ or $\beta=1$). On this basis, in our earlier paper we
determined the emissivity and opacity of the H$_2$O line
in L1544 by setting $\bar{J}=0$ \citep{Caselli2012}.
This approximation was earlier adopted in analyzing water emission observed by the
SWAS satellite \citep{Snell2000}, where it is referred to as ``effectively
optically thin''.
In this current paper, we seek an improved estimate of
$\bar{J} > 0$ and $\beta < 1$ by using the escape probability formalism
as suggested by \citet{Linke1977}.
We determine $\beta$ using
the local velocity gradient as given by our
hydrodynamical model along with the local opacity
using the Sobolev or large velocity gradient (LVG)
approximation \citep[eqn. 3-40][]{Kalkofen1984}. We use the
6-ray approximation for the angle averaging.
We allow
for one free scaling parameter on $\beta$ to match the modeled
emission line brightness to the observation. We scale the
LVG opacity by 1/2.
Because the opacity, column density,
and line brightness
are all linearly related, the scaling could be considered
to derive from any one of these parameters or any combination of them.
Given all the uncertain parameters, for example the
mean grain cross-section which also affects the
line brightness (appendix \ref{appendixChemistry}),
this factor of 2 is not significant.
An alternative method to calculate the excitation is the
accelerated $\Lambda$-iteration algorithm (ALI).
We do not know if this method is reliable with the
extremely high optical depth, several hundred to a
thousand.
$\Lambda$-iteration generally converges, but whether it converges to
the correct solution cannot be determined from the algorithm itself
\citep[eqn. 6-33][]{Mihalas1978}.
The excitation may be uncertain, but analysis with the
escape probability method allows us to
understand the effect of the uncertainty.
For example, because we know that the dependence of the line brightness
on the opacity or optical depth is linear, we can say that any
uncertainty in excitation results in the same percentage
uncertainty in the abundance of the chemical model,
or the pathlength of the structural model.
Once $\bar{J}$ is determined everywhere in the cloud,
the equations of statistical equilibrium are solved
to determine the emissivity and opacity.
These are then
used in the radiative transfer equation to
produce the simulated spectral line emission
and absorption. This calculation is done in MOLLIE
in the same way as if $\bar{J}$ were determined
by any other means, for example, by $\Lambda$-iteration.
Both the emissivity and opacity
depend on frequency through the Doppler shifted line
profile function \citep[eqn. 2.14][]{Kalkofen1984}
that varies as a function of position
in the cloud. We use a line profile function that is the
thermal width plus a microturbulent Gaussian broadening
of 0.08 km s$^{-1}$ derived from
our CO modeling \citep{KC10}. By the approximation of
complete frequency redistribution
\citep[eqn. 10-39][]{Mihalas1978},
both have the same frequency dependence.
This also implies that each photon emitted
after an absorption event
has no memory of the frequency of the absorbed
photon. It is emitted with the frequency probability
distribution described by the
line profile function Doppler shifted by the
local velocity along the direction of emission.
We also assume complete redistribution in angle.
Figure \ref{fig:SpectrumLVG} shows the modeled line
profile against the observed profile.
The V$_{\rm LSR}$ is assumed to be 7.16 km s$^{-1}$, slightly different
from the 7.2 km s$^{-1}$ used in \citet{Caselli2012}. The lower value
is chosen here as the best
fit to the H$_2$O observation.
The combination of blue-shifted emission and
red-shifted absorption is the inverse P-Cygni
profile characteristic of contraction, with
the emission and absorption split by the
inward gas motion in the front and rear of the cloud.
The absorption against the dust continuum is
unambiguously from the front side indicating
contraction rather than expansion.
This profile has also been seen in other molecules
in other low-mass cold, dense clouds, with the
absorption against the dust continuum
\citep{DiFrancesco2001}.
In L1544, because the inward velocities are below the sound speed,
and the H$_2$O line width is just larger than
thermal,
the emission is shifted with respect to the
absorption by less than a line width.
In the observations,
what appears
to be a blue-shifted emission line is just the blue
shoulder and wing of
the complete emission, most of which is brighter,
redder and wider than the observed emission.
Our model also shows weaker emission to the
red of the absorption line. This emission is
from inward moving gas in the front side of
the contracting zone. Again most of the emission
is absorbed by the envelope and only the blue shoulder
of the line is seen. The asymmetry between
the red and blue emission comes about because
the absorbing envelope, which is on the front
side of the cloud, is closer in velocity to
inward flowing gas (red) on the front side of the
contraction. This is the same effect that
produces the blue asymmetric or double-peaked
line profiles seen in contraction
in molecular lines without such significant envelope
absorption \citep{Anglada1987}. The model shows
more red emission than is seen in the observations.
This red emission may be absorbed by foreground
gas that is not in the model. Figure 1 of \citet{Caselli2012}
shows additional red shifted absorption in H$_2$O
and red shifted emission in CO, both centered
around 9 km s$^{-1}$. The blue wing of this red
shifted water line may be absorbing the red
wing of the emission from the dense cloud.
If L1544 were static, with no inward contraction,
the emission from the center would be at
the same frequency as the envelope. Because
of the extremely high optical depth, the absorption
line is saturated and would absorb all the
emission. We would
see only the absorption line. The depth
of the absorption line is set by the brightness
of the dust continuum, which is weak (0.011 K),
and not by the optical
depth of the line, which is high
(a few hundred to a thousand).
In the current radiative transfer
calculation, we also use a slightly different
collisional excitation rate
than before. The collisional rates for ortho-H$_2$O differ
for collisions with ortho- and para-H$_2$. In our previous
paper \citep{Caselli2012} we took the H$_2$ ortho-to-para
ratio to be 1:1 or higher, as a lower limit. Here we assume
that almost all the hydrogen, 99.9\%, is in the para state.
This is suggested by recent chemical models that require
a low ortho-to-para ratio to produce the high deuterium fraction
observed in cold, dense clouds
\citep[e.g.,][]{Kong2013, Sipila2013}.
\section{Interpretation}
The shape of the line profile
(figure \ref{fig:SpectrumLVG})
is unaffected by
any uncertainty in the excitation which scales the emission
across the spectrum. The absorption is saturated and does
not scale with the excitation.
Because of the very
high critical density for collisional de-excitation,
we know that the line emission is generated only in the densest gas
($>10^6$ cm$^{-3}$)
within a few thousand AU of the center.
Thus the observation of the inverse P-Cygni profile seen in
H$_2$O confirms the model for quasi-hydrostatic
contraction with the
highest velocities near the center
(figure \ref{fig:StructureLVG}).
The chemical model requires external UV to create the gas
phase water by photodesorption. This confirms the physical
model of L1544 as a molecular cloud bounded by a photodissociation
region. The UV flux necessarily creates a higher temperature,
up to about 100 K
at the boundary by photoelectric heating.
This helps maintain the pressure balance at the boundary
consistent with the model of a BE sphere.
\section{Uncertainties}
The comparison of the simulated and observed spectral line
involves three models each with multiple parameters. Unavoidably
the choice of parameters in any one of the three models affects
not only the choice of other parameters in the other two models
but also the interpretation.
It would be a mistake to focus on the uncertainties in any one
of the models to the exclusion of the others.
For example, because of the linear relationship between
the line brightness, the optical depth, and the opacity,
uncertainties in the excitation, pathlength, and abundance,
have equal effect on the spectrum.
A factor of two uncertainty in the excitation can be
compensated by a factor of two in the pathlength or a
factor of two in the abundance of H$_2$O. The pathlength
is unknown.
On the plane of the sky, L1544 has an axial ratio of 2:1,
but we are using a spherical model for the cloud. Our rates
in the chemical model involve estimation of the surface density
of sites for desorption and the covering fraction of water ice
on the grains. The latter is assumed to be one even though we
know that CO and methane ice, not included in the simple model,
make up
a significant fraction of the ice mantle.
The radiative excitation, parameterized as $\beta$ in
the escape probability, is also uncertain because of
the competing effects of high optical depth and subcritical
excitation.
On a linear plot, a factor of two difference in the brightness
of the simulated and observed spectral line looks to be a
damning discrepancy. However, there is at least this much
uncertainty in each of the three models and this does not
significantly affect the conclusions of the study, namely that
the cloud
can be modeled as a slowly contracting BE sphere bounded
by a photodissociation region with the gas phase water
abundance set by grain surface reactions.
In this paper, we concentrate on the observation of H$_2$O, but
there are also other constraints that define the model. These
are both observational and theoretical. In
an earlier paper, we showed how observations of CO and N$_2$H$^+$
define the physical model with the two spectral lines giving us
information on the outer and inner regions of the cloud
respectively.
In this regard, the water emission gives us information
in the central few thousand AU
of the cloud where the density approaches or exceeds the critical
density for de-excitation. This small volume of rapid
inflow and high density does not much affect the N$_2$H$^+$
spectrum, which is generated in a much larger volume, and has no
effect at all on the CO spectrum. A successful model for
L1544 has to satisfy the constraints of all the data.
On the theoretical side, there is an infinite
space of combinations of abundance, density, velocity,
and temperature that would form models that match the data.
Only models that are physically motivated are of interest.
It may be tempting to change the abundances, velocities, or
densities arbitrarily, but this is unlikely to be a
useful exercise given the infinite possibilities. A successful
model for L1544 has to be relevant to plausible theory.
There is a natural prejudice for more complex models that
in principle contain more details.
The goal of our simplified models is to enhance our
understanding of the most significant
phenomena.
In our research on cold, dense clouds, spanning a number of papers,
we have developed simplified models for the density and temperature
structure, for the dynamics including oscillations,
for the CO chemistry, and in this paper simplified
models for H$_2$O chemistry and radiative transfer. Each of
these models isolates one or a few key physical processes
and shows how they generate the observables and
operate to control the evolution
toward star formation.
\begin{figure}
$
\begin{array}{cc}
\includegraphics[width=3.25in]{SpectrumLVG}
\end{array}
$
\caption{
Observed spectrum
of H$_2$O ($1_{10} - 1_{01}$) (black lines with crosses)
compared with the modeled spectrum (plain red line)
for slow contraction at the time that
the central density reaches $1 \times 10^7$ cm$^{-3}$.
The model structure is shown in figure \ref{fig:StructureLVG}.
}
\label{fig:SpectrumLVG}
\end{figure}
\section{Conclusions}
A simplified model for cold oxygen
chemistry, dominated by grain-surface
reactions, is verified by comparing the simulated spectrum
of the H$_2$O ($1_{10}-1_{01}$) line against an observation of water
vapor in L1544 made with the HIFI spectrometer on the Herschel Space
Observatory.
This model reproduces the observed
spectrum of H$_2$O, and also approximates the abundances calculated
by a more complete model that includes gas-phase neutral-neutral and ion-neutral
reactions.
The gas phase water
is released from ice grains by ultraviolet (UV) photodesorption.
The UV radiation derives from two sources:
external starlight and collisions of cosmic rays with
molecular hydrogen. The latter may be important deep inside the
cloud where the visual
extinction is high enough ($>50$ mag) to block out the
external UV radiation.
Water is removed from the gas phase by photodissociation and
freeze-out onto grains. The former is important
at the boundary where the UV from external
starlight is intense enough to create
a photodissociation region.
Here, atomic oxygen replaces water as the
most abundant oxygen species.
In the center where the external UV radiation is
completely attenuated, freeze-out is the significant
loss mechanism.
Time dependent chemistry is not required to match the
observations because the time scale for the chemical
processes
is short compared
to the dynamical time scale.
The molecular cloud L1544 is
bounded by a photodissociation region.
The water emission derives only from the central
few thousand AU
where the gas density approaches the critical density
for collisional de-excitation of the water line. In the
model of hydrostatic equilibrium, the gas density in the
center is rising with decreasing radius more steeply than
the abundance of water is decreasing by freeze-out.
Thus the water spectrum provides unique information on the dynamics
in the very center.
The large Einstein A coefficient ($3\times 10^{-3}$ s$^{-1}$)
of the 557 GHz H$_2$O ($1_{10}-1_{01}$) line
results in extremely high optical depth, several hundred to a
thousand.
However, the density ($< 10^7$ cm$^{-3}$)
and temperature ($<15$ K) are
low enough that the line is subcritically excited. The
result is that the line brightness under these conditions
is directly proportional to the column density.
\section{Acknowledgements}
The authors acknowledge Simon Bruderer, Fabien Daniel, Michiel Hogerheijde,
Joe Mottram, Floris van der Tak for interesting discussions on the radiative transfer of water.
PC acknowledges the financial support of the European Research Council (ERC; project PALs 320620),
of successive rolling grants awarded by the UK Science and Technology Funding Council.
JR acknowledges the financial support of the Submillimeter Array Telescope.
|
\section{Introduction}
\IEEEPARstart{A}{s} radio signals travel through a propagation environment, their interaction with objects, system agents and the propagation medium itself causes a number of effects that ultimately determine the amount of power received at a given location. Classically, these phenomena have been well characterized through the central limit theorem (CLT), which gives rise to a family of Gaussian models, the most popular and widely used being the Rayleigh/Rician models, and also including more general ones such as the Hoyt and Beckmann models \cite{Simon,Beckmann1962}. While CLT-based models have sufficed to characterize the effects of fading for decades, the rapid growth of wireless technologies over the $21^{\rm st}$ century requires more sophisticated efforts to properly capture the intrinsic nature of wireless propagation as we move up in frequency (where the CLT assumption may no longer hold) \cite{ftr,Marins2019}, or when new use cases demand improved accuracy, especially in very low-outage regimes \cite{Eggers2019,Mehrnia2022}.
The key advances in wireless fading channel modeling in recent years have been largely based on two different approaches. On the one hand, power envelope-based formulations were used by Yacoub to propose two families of generalized models: the $\kappa$-$\mu$ and the $\eta$-$\mu$ distributions \cite{kappamuYacub}. These distributions were later generalized and unified through the $\kappa$-$\mu$ shadowed fading distribution \cite{kms,Laureano2016}, which has become rather popular in the literature thanks to its ability to model a wide range of propagation conditions through a limited set of physically justified parameters. On the other hand, ray-based formulations express the received signal as a superposition of a number of scattered waves with random phases \cite{durgin2000theory}. Despite their clear physical motivation, ray-based models become mathematically complex when more than a few dominant waves are individually accounted for \cite{Romero2022}. However, Durgin's Two-Wave with Diffuse Power (TWDP) fading model and its subsequent generalizations \cite{durgin,Rao2015,ftr} have also managed to become popular in the context of wireless channel modeling for higher-frequency bands. Specifically, the Fluctuating Two-Ray (FTR) fading model \cite{ftr} has been enthusiastically adopted in mm-Wave environments, and even recently in terahertz (THz) bands \cite{Du2022}.
Key to their success, the $\kappa$-$\mu$ shadowed and the FTR fading models share a number of features that are desirable for a stochastic fading model to be of practical use: (\textit{i}) have a small number (three, in both cases) of physically justified parameters; (\textit{ii}) have a reasonably good mathematical tractability; (\textit{iii}) include simpler but popular fading models as special cases. However, because these two models arise from different physical formulations, they do not capture the same type of propagation behaviors. For instance, the FTR model has the ability to exhibit bimodality that often appears in field measurements \cite{Mavridis15,Yoo16,Du2022}, while the $\kappa$-$\mu$ shadowed one is inherently unimodal in its original formulation \cite{kms}. On the other hand, the $\kappa$-$\mu$ shadowed model can exhibit any sort of power-law asymptotic decay (diversity order), while ray-based models asymptotically decay with unitary slope \cite{Romero2022}.
Aiming to reconcile the two dominant approaches in the literature of stochastic fading channel modeling, which have hitherto evolved separately, we here introduce the Multi-cluster Fluctuating Two-Ray (MFTR) fading model. This newly proposed model is presented as the natural generalization and unification of \emph{both} the FTR and the \mbox{$\kappa$-$\mu$} shadowed fading models.
Such a generalization enables the presence of additional multipath clusters in the purely ray-based FTR model, and the consideration of two fluctuating specular components within the \mbox{$\kappa$-$\mu$} shadowed formulation. Despite being a generalization of the FTR and the \mbox{$\kappa$-$\mu$} shadowed models, the MFTR model requires only one additional parameter over these baselines. As we will later see, the MFTR model not only inherits the bimodality and asymptotic decay properties exhibited separately by the FTR and $\kappa$-$\mu$ shadowed models, respectively, but also brings additional flexibility to model propagation features not captured by the models from which it originates. The resulting formulations of the MFTR statistics are as tractable as those of the simpler baseline models, and of other fading models in the state of the art. The key contributions of this work can be listed as follows:
\begin{itemize}
\item We derive the chief probability functions for the MFTR model, i.e., the probability density function (PDF), cumulative distribution function (CDF), and moment generating function (MGF), in closed form. These expressions allow for the computation of the MFTR statistics using special functions similar to those used in simpler, well-established fading models such as Rician shadowed \cite{Abdi2003}, $\kappa$-$\mu$ shadowed \cite{kms}, or FTR \cite{ftr}.
\item Aiming to facilitate the use of the MFTR fading model for performance analysis purposes, we provide two alternative analytical formulations for the PDF and the CDF: one in terms of a continuous mixture of $\kappa$-$\mu$ shadowed distributions, and another one as an infinite {discrete} mixture of Gamma distributions. This makes it possible to leverage the rich literature on performance analysis, so that existing results available either for the $\kappa$-$\mu$ shadowed or the Nakagami-$m$ cases can be used as a starting point to \textit{straightforwardly} analyze the performance under MFTR fading.
\item These results are used to analyze the outage probability performance under MFTR fading, both in exact and asymptotic form, and to study the impact of the model parameters on the amount of fading (AoF) metric.
\end{itemize}
The remainder of this paper is organized as follows: preliminaries and channel models are described in Section~\ref{sec2}. In Section~\ref{sec3}, analytical expressions are derived for the main statistics of the MFTR model. In Section~\ref{sec5}, performance analysis over MFTR fading channels is exemplified.
Section~\ref{sec6} shows illustrative numerical results and discussions. Finally, concluding remarks are provided in Section~\ref{sec7}.
\textit{Notation}: In what follows, $f_{(\cdot)}(\cdot)$ and $F_{(\cdot)}(\cdot)$ denote the PDF and CDF, respectively; $\mathbb{E}\left \{ \cdot \right \}$ and $\mathbb{V}\left \{ \cdot \right \}$ are the expectation and variance operators; $\Pr\left \{ \cdot \right \}$ represents probability; $\abs{\cdot}$ is the absolute value, $\simeq$ refers to ``asymptotically equal~to'', $\equiv$ reads as ``is equivalent~to'', $\approx$ denotes ``approximately equal~to'' {and $\sim$ refers to ``statistically distributed as''}.
In addition, $\Gamma(\cdot)$ denotes the gamma function~\cite[Eq.~(6.1.1)]{Abramowitz}, $\gamma(\cdot,\cdot)$ is the lower incomplete gamma function~\cite[Eq.~(6.5.2)]{Abramowitz}, $\left ( a \right )_ n =\tfrac{\Gamma\left ( a+n \right )}{\Gamma\left ( a \right )}$ represents the Pochhammer's symbol \cite{Abramowitz}, ${}_2F_1\left(\cdot,\cdot;\cdot;\cdot\right)$ is the Gauss hypergeometric function~\cite[Eq.~(15.1.1)]{Abramowitz}, $P_\alpha(z)={}_2F_1\left(-\alpha,\alpha+1;1;\tfrac{1-z}{2}\right)$ is the Legendre function of the first kind of real degree
$\alpha$~\cite[Eq.~(8.1.2)]{Abramowitz}, $\Phi_2^{(4)}\left(\cdot,\cdot,\cdot,\cdot;\cdot;\cdot,\cdot,\cdot,\cdot \right)$ denotes the confluent hypergeometric function in four variables~\cite[Eq.~(8)]{Srivastava} , and $\Phi_2\left(\cdot,\cdot;\cdot;\cdot,\cdot \right)$ is the bivariate confluent hypergeometric function~\cite[Eq.~(4.19)]{Brychkov}.
\section{Preliminaries and Channel Models}\label{sec2}
According to~\cite{durgin}, the small-scale random fluctuations of a radio signal transmitted over a wireless channel can be structured at the receiver as a superposition of $N$ waves arising from dominant specular components, {plus a group of multipath} waves associated to diffuse scattering. Therefore, under this model, the received complex baseband signal representing the wireless channel can be expressed as
\begin{align}
\label{eq1}
V_r =\sum_{n=1}^{N} V_n \exp\left({j\phi_n}\right)+X_1+jY_1 ,
\end{align}
where $V_n \exp(j\phi_n)$ denotes the $n$th specular component with constant amplitude $V_n$ and uniformly distributed random phase $\phi_n \sim \mathcal{U}(0, 2\pi)$.
On the other hand, $X_1+jY_1$ is a {circularly-symmetric} complex Gaussian random variable (RV) with total power $2\sigma^2$, such that $X_1,Y_1 \sim \mathcal{N}(0,\sigma^2)$, representing the scattering components associated with the non-line-of-sight (NLOS) propagation. This model makes it possible to individually account for a number of dominant waves, together with the application of the CLT to the diffuse component, where a sufficiently large number {of weak diffuse waves with independent phases is assumed.}
The FTR channel model was proposed in~\cite{ftr} as a generalization of the TWDP model \cite{durgin}, where the latter arises when considering two dominant specular components (i.e., $N=2$) in \eqref{eq1}. For its definition, the FTR model considers that the amplitudes of these two dominant components experience a joint fluctuation due to situations that arise naturally in different wireless scenarios (e.g., human body shadowing due to user motion, electromagnetic disturbances, and many others). Based on this, the complex signal under FTR fading can be formulated {as}~\cite{ftr}
\begin{align}
\label{eq2}
V_{\rm{FTR}} =\sqrt{\zeta } V_1 \exp\left({j\phi_{{1}}}\right)+\sqrt{\zeta } V_2 \exp\left({j\phi_2}\right)+X_1+jY_1 ,
\end{align}
where $\zeta $ is a unit-mean Gamma distributed RV
whose PDF is given by
\begin{align}\label{eq3}
f_{\zeta}(x)=\frac{m^{m}x^{m-1}}{\Gamma(m)}\exp\left ( -m x \right ),
\end{align}
where $m$ denotes the shadowing severity index of the specular components, often
linked to line-of-sight (LOS) propagation. {Note that when $m\rightarrow\infty$, then $\zeta$ degenerates into a deterministic value and the amplitudes of the two dominant specular components in~\eqref{eq2} become constant.} The FTR model in~\eqref{eq2}, besides fitting well with the field measurements in different wireless scenarios, also encompasses important statistical wireless channel models as particular cases. For instance, when no LOS components are present in~\eqref{eq2}, i.e., $N=0$, the classical Rayleigh fading model arises. For a single LOS component, i.e., $N=1$, two fading models are obtained, namely, Rician and Rician shadowed~\cite{Abdi2003} for constant and fluctuating amplitudes, respectively. Finally, for the case in which there {are} two dominant components (i.e., $N=2$) with constant amplitudes,~\eqref{eq2} reduces to the TWDP fading model, also referred to as the Generalized Two-Ray fading model with uniformly distributed phases (GTR-U)~\cite{Rao2015}.
On the other hand, power-envelope-based formulations such as those originally proposed by Yacoub \cite{kappamuYacub} are defined from a different approach. Specifically, the squared amplitude (or instantaneous received power) of the $\kappa$-$\mu$ shadowed fading model is expressed as \cite{kms}
\begin{equation}
\label{eqclusters}
R^2=\sum_{i=1}^{\mu}|Z_i+\sqrt{\zeta} p_i|^2,
\end{equation}
where $\mu$ wave clusters are defined, the complex variables $Z_i$ denote the diffuse components associated to each cluster, $\zeta$ is a unit-mean Gamma distributed RV, as given in \eqref{eq3}, and $p_i$ are complex amplitudes for the dominant components within each cluster. Notice that the FTR model in \eqref{eq2} can be physically interpreted as a single cluster in which both LOS and NLOS components are part of the same cluster structure. With this in mind, we can combine a power-envelope definition as the one in \eqref{eqclusters} with the ray-based structure in \eqref{eq2} as follows.
As in \cite{kappamuYacub}, we consider a wireless signal composed of clusters of waves propagating in a non-homogeneous environment. Within each cluster, the scattered waves have random phases and similar delay times, while the intercluster delay-time spreads are assumed to be relatively large. All clusters of the multipath waves are assumed to have scattered waves with identical powers. Now, in the first cluster (which typically represents the first one arriving), two dominant specular components with random phases and arbitrary power are considered, whereas $p_i=0,\forall i=2\ldots\mu$. Similarly to the model in \cite{ftr}, these dominant components are subject to the same source of random fluctuations.
Under this channel model, the squared amplitude of the received signal is expressed as
\begin{align}
\label{eq4}
R^2=\abs{ \underset{\rm{cluster \ 1:}V_{\rm{FTR}} }{\underbrace{ \sqrt{\zeta }\left (V_1e^{j\phi_1}+ V_2e^{j\phi_2} \right )+X_1+jY_1}}}^2 + \underset{\rm{ \textcolor{black}{additional \ clusters } } }{\underbrace{ \sum_{i=2}^{\mu}\abs{Z_i}^2},}
\end{align}
where $Z_i=X_i+jY_i$, for $i=\left \{2,\ldots,\mu \right \}$ in which $X_i$ and $Y_i$ are mutually independent zero-mean Gaussian processes with $\sigma^2$ variance, i.e., $\mathbb{E}\{X_i^2\}=\mathbb{E}\{Y_i^2\}=\sigma^2$. The insightful interpretation of~\eqref{eq4} is the following. Each additional multipath cluster is modelled by one term of the sum; thus, $\mu$ is the total number of multipath clusters. The scattered components (e.g., NLOS waves) of the $i$th cluster are denoted by a {circularly-symmetric} complex Gaussian RV, i.e., $Z_i$. Hence, for the $i$th cluster, the total power of the scattered multipath signals is $2\sigma^2$. Moreover, in cluster 1, the complex RVs represented by $V_n \exp(j\phi_n)$, for $n=\left \{1,2 \right \}$ denote the dominant specular components (e.g., LOS waves) of the first arriving cluster.
Finally, the two specular components in cluster 1 are subject to the same source of random fluctuations as in the original FTR model, denoted by the normalized RV $\zeta$. For the model in~\eqref{eq4}, we coin the name multicluster FTR (MFTR) model, to indicate the presence of additional multipath clusters in the original FTR model.
It must be noted that the statistical formulation of the $\kappa$-$\mu$ shadowed distribution depends (through parameter $\kappa$) on the total aggregate power of the specular components, but it is independent of which or how many clusters incorporate a specular component \cite{kms}; therefore, the MFTR model defined in \eqref{eq4} fully includes \emph{both} the original FTR model and the $\kappa$-$\mu$ shadowed distribution as special cases.
Both models have been independently validated against field measurements in different wireless scenarios \cite{ftr,kms}, which supports the practical usefulness of the proposed MFTR model.
As in \cite{kappamuYacub,kms}, even though the cluster number $\mu$ is inherently a natural number, the MFTR model admits a generalization for $\mu\in\mathbb{R}^+$, which is considered in the subsequent derivations.
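Although all the statistics derived below are given in closed form, it is often
convenient to cross-check them by simulation. A minimal Monte Carlo sketch of
the model in~\eqref{eq4} is given below in Python; the function and argument
names are ours, integer $\mu$ is assumed, and the split of the total specular
power between $V_1$ and $V_2$ (with $V_1 \geq V_2$) is one admissible inversion
of~\eqref{eq6} and~\eqref{eq7}.
\begin{verbatim}
import numpy as np

def mftr_power_samples(K, Delta, mu, m, n_samples=100_000,
                       sigma=1.0, seed=None):
    """Monte Carlo samples of the MFTR received power W = R^2, Eq. (5).
    Illustrative sketch only; integer mu assumed."""
    rng = np.random.default_rng(seed)
    # Total specular power V1^2 + V2^2 from Eq. (6), split using Eq. (7).
    P = 2.0 * sigma**2 * mu * K
    V1 = np.sqrt(0.5 * P * (1.0 + np.sqrt(1.0 - Delta**2)))
    V2 = np.sqrt(0.5 * P * (1.0 - np.sqrt(1.0 - Delta**2)))
    # Common fluctuation of the specular components: unit-mean Gamma, Eq. (3).
    zeta = rng.gamma(m, 1.0 / m, size=n_samples)
    phi1 = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    # Cluster 1: two fluctuating specular components plus diffuse scattering.
    cluster1 = (np.sqrt(zeta) * (V1 * np.exp(1j * phi1)
                                 + V2 * np.exp(1j * phi2))
                + rng.normal(0.0, sigma, n_samples)
                + 1j * rng.normal(0.0, sigma, n_samples))
    # Remaining mu - 1 purely diffuse clusters.
    diffuse = (rng.normal(0.0, sigma, (mu - 1, n_samples))**2
               + rng.normal(0.0, sigma, (mu - 1, n_samples))**2).sum(axis=0)
    return np.abs(cluster1)**2 + diffuse

# Quick check: the empirical mean should approach 2*sigma^2*mu*(1 + K).
W = mftr_power_samples(K=5.0, Delta=0.7, mu=3, m=4)
print(W.mean(), 2.0 * 1.0**2 * 3 * (1 + 5.0))
\end{verbatim}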
\section{Statistical Characterization of the {MFTR Fading Model}}\label{sec3}
In this section, the key first-order statistics of the newly proposed MFTR model are derived: the {MGF, PDF and CDF}. In the following derivations, let us consider {the} received power signal of the MFTR model, i.e., $W=R^2$, which {from~\eqref{eq4}} can be rewritten as
\begin{align}
\label{eq5}
W&={{\left( \sqrt{\zeta }\left( {{V}_{1}}\cos {{\phi }_{1}}+{{V}_{2}}\cos {{\phi }_{2}} \right)+{{X}_{1}} \right)}^{2}} \nonumber \\ &+{{\left( \sqrt{\zeta }\left( {{V}_{1}}\sin {{\phi }_{1}}+{{V}_{2}}\sin {{\phi }_{2}} \right)+{{Y}_{1}} \right)}^{2}} +\sum\limits_{i=2}^{\mu }{X_{i}^{2}+Y_{i}^{2}}.
\end{align}
As in \cite{ftr}, the MFTR model can be conveniently expressed by introducing the parameters $K$ and $\Delta$, which are respectively defined as
\begin{align}
\label{eq6}
K= \frac{V_{1}^{2}+V_{2}^{2}}{2{{\sigma }^{2}}\mu },
\end{align}
\begin{align}
\label{eq7}
\Delta = \frac{2{{V}_{1}}{{V}_{2}}}{V_{1}^{2}+V_{2}^{2}}.
\end{align}
The MFTR fading model is uniquely defined by four shape parameters: $\{K,m,\mu\}\in\mathbb{R}^+$ and $\Delta\in[0,1]$. Similar to the interpretation of the Rician $K$ factor, $K$ represents the ratio of the average power of the specular components to the power of the remaining scattered components. On the other hand, the $\Delta$ parameter, ranging from 0 to 1, indicates how similar the average received powers of the two dominant components in cluster 1 are to each other. For instance, when the magnitudes of the two {components} are equal, $\Delta= 1$. In the absence of a second component ($V_1$ or $V_2 = 0$), $\Delta = 0$, {which corresponds} to the $\kappa$-$\mu$ shadowed model. Furthermore, notice that for $\mu=1$ {the MFTR model} yields the FTR fading model.
Next, we introduce the distribution of the received power signal (or equivalently the instantaneous signal-to-noise ratio (SNR) when the noise comes into play) of the MFTR fading model. It is worth mentioning that the statistical characterization of the instantaneous received SNR, here denoted by $\gamma$, plays a pivotal role in designing or evaluating the performance of many practical wireless systems. {Let $\gamma \stackrel{\Delta}{=} \overline\gamma W/\Omega$ be the instantaneous received SNR through the MFTR fading channel, where $\overline\gamma$ and $\Omega$ denote the average SNR and the mean of $W$, respectively}.
\textcolor{black}{
Mathematically speaking, from \eqref{eq5}, $\overline{\gamma}$ can be formulated as
\begin{align}
\label{eq8}
\overline{\gamma} =\frac{\overline\gamma}{\Omega} \mathbb{E}\left \{ W\right \}=&\frac{\overline\gamma}{\Omega}(V_1^2+V_2^2+2\sigma^2\mu) \nonumber \\ =& \frac{\overline\gamma}{\Omega}2\sigma^2\mu(1+K).
\end{align}}
With the aid of the previous definitions, the chief probability functions concerning the MFTR channel model can be derived as follows.
\subsection{MGF}
{In the first Lemma, presented below, we obtain a closed-form expression for the MGF.}
\begin{lemma}\label{lemma1}
The MGF of the instantaneous received SNR $\gamma$ {under MFTR fading} can be expressed as
\begin{align}
\label{eq9}
{{{\mathcal{M}}}_{\gamma }}\left( s \right)=&\frac{{{m}^{m}}{{\mu }^{\mu }}{{\left( 1+K \right)}^{\mu }}{{\left( \mu \left( 1+K \right)-\overline{\gamma }s \right)}^{m-\mu }}}{{{\left( \sqrt{\mathcal{R}(\mu,m,K,\Delta;s)} \right)}^{m}}} \nonumber \\ & \times
{{P}_{m-1}}\left( \frac{m\mu \left( 1+K \right)-\left( \mu K+m \right)\overline{\gamma }s}{\sqrt{\mathcal{R}(\mu,m,K,\Delta;s)}} \right)
\end{align}
\end{lemma}
where $\mathcal{R}(\mu,m,K,\Delta;s)$ is a polynomial in $s$ given by
\begin{align}
\label{eq10}
\mathcal{R}(\mu,m,K,\Delta;s)&=\left[ {{\left( m+\mu K \right)}^{2}}-{{\left( \mu K\Delta \right)}^{2}} \right]{{\overline{\gamma }}^{2}}{{s}^{2}} -2m\mu \nonumber \\ & \times
\left( 1+K \right) \left( m+\mu K \right)\overline{\gamma }s+{{\left[ \left( 1+K \right)m\mu \right]}^{2}}.
\end{align}
\begin{proof}
See Appendix~\ref{ap:MGF}.
\end{proof}
{Note that the result in Lemma \ref{lemma1} is valid for any positive real value of $m$.}
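As a practical aside, for integer $m$ the expression in~\eqref{eq9} is
straightforward to evaluate numerically, since the Legendre function reduces to
the Legendre polynomial $P_{m-1}$. A minimal Python sketch (our own
illustration, with argument names of our choosing; not reference code) is:
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def mftr_mgf(s, K, Delta, mu, m, g_bar=1.0):
    """MFTR MGF of Eq. (9) for integer m (illustrative sketch only)."""
    # Polynomial R(mu, m, K, Delta; s) of Eq. (10).
    R = (((m + mu * K)**2 - (mu * K * Delta)**2) * (g_bar * s)**2
         - 2.0 * m * mu * (1.0 + K) * (m + mu * K) * g_bar * s
         + ((1.0 + K) * m * mu)**2)
    prefactor = (m**m * mu**mu * (1.0 + K)**mu
                 * (mu * (1.0 + K) - g_bar * s)**(m - mu)) / np.sqrt(R)**m
    arg = (m * mu * (1.0 + K) - (mu * K + m) * g_bar * s) / np.sqrt(R)
    return prefactor * eval_legendre(m - 1, arg)

# Sanity check: for K -> 0 and mu = 1 the model collapses to Rayleigh fading,
# for which the corresponding expression is 1 / (1 - g_bar * s).
s = -0.5
print(mftr_mgf(s, K=1e-9, Delta=0.0, mu=1, m=3), 1.0 / (1.0 - s))
\end{verbatim}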
\subsection{PDF and CDF}
Here, we derive the PDF and CDF of the SNR of the MFTR model\footnote{The PDF and CDF of the received signal envelope $R$ under MFTR channels can be obtained through a standard variable transformation. Specifically, by performing the change of variables, $f_R(r)=2r f_\gamma(r^2)$ and $F_R(r)=F_\gamma(r^2)$, with $\overline{ \gamma}$ replaced by $\Omega=\mathbb{E}\left \{ R^2 \right \}$.}. Even though these can be computed by performing a numerical Laplace inverse transform over the MGF in Lemma \ref{lemma1} for any arbitrary set of values of the shape parameters \cite{Simon}, it is possible to obtain closed-form expressions by assuming that the fading parameter $m$ takes integer values. For this purpose, we take advantage of the fact that, for $m\in \mathbb{Z}^+$, the Legendre function in the MGF obtained in (\ref{eq9}) has an integer degree;
{thus, such Legendre function becomes a Legendre polynomial}. The Legendre polynomial of an integer degree $n$ can be formulated as in~\cite[Eq.~(22.3.8)]{Abramowitz} by
\begin{align}
\label{eq11}
P_n(z)=\frac{1}{2^n}\sum_{q=0}^{\left \lfloor n/2 \right \rfloor}(-1)^qC_q^nz^{n-2q},
\end{align}
where the $C_q^n$ term is expressed as
\begin{align}
\label{eq12}
C_q^n=\binom{n}{q}\binom{2n-2q}{n}=\frac{\left ( 2n-2q \right )!}{q!(n-q)!(n-2q)!}.
\end{align}
{From the MGF in~\eqref{eq9} and with the help of~\eqref{eq11}, the closed-form expressions for the PDF and CDF of the RV $\gamma$ are obtained in the following Lemma.}
\begin{lemma}\label{lemma2}
Assuming that $m\in \mathbb{Z}^+$, the PDF and CDF of the SNR under MFTR fading can be formulated as \eqref{eq13} and \eqref{eq14}, respectively.
\end{lemma}
\begin{proof}
See Appendix~\ref{ap:PFDCDF}.
\end{proof}
The chief statistics of the MFTR fading model in~\eqref{eq13} and~\eqref{eq14} are given in terms of the {multi-variate confluent hypergeometric function} $\Phi_2^{(4)} \left(\cdot \right)$, which is rather common in well-established fading models such as Rician {shadowed} \cite{Abdi2003}, $\kappa$-$\mu$ shadowed \cite{kms}, or FTR \cite{ftr}. Moreover, the computation of this function can be performed by resorting to an inverse Laplace transform as described in~\cite[Appendix~9B]{Simon}, and whose implementation in a simple and efficient way through MATLAB is given in~\cite{phi2function}. Therefore, the evaluation of MFTR probability distributions does not pose any additional challenge compared to other well-known fading models in the state-of-the-art.
\subsection{Alternative formulations}
Expressions in Lemmas \ref{lemma1} and \ref{lemma2} provide a complete formulation for the MFTR, equivalent in complexity to those originally proposed in \cite{ftr} and \cite{kms} for the
baseline FTR and $\kappa$-$\mu$ shadowed fading distributions, respectively. However, aiming to provide additional flexibility to the newly proposed MFTR model, as well as to facilitate its use for performance evaluation purposes, we now provide two alternative formulations for the PDF and CDF of the MFTR model.
We first propose a formulation of the MFTR model as a \textit{continuous} mixture of $\kappa$-$\mu$ shadowed distributions. Second,
we propose a formulation of the MFTR model as an infinite \textit{discrete} mixture of Gamma distributions. These formulations are provided in the following two lemmas, and are valid for the entire range of values of the shape parameters $K$, $\mu$, $m$, and $\Delta$.
\begin{figure*}[ht]
\begin{normalsize}
\begin{align}\label{eq13}
f_{\gamma }( x)=&\frac{\left ( 1+K \right )^{\mu}\mu ^{\mu}}{2^{m-1}\Gamma(\mu)\overline{\gamma}^\mu}\left ( \frac{m}{\sqrt{\left ( m+\mu K \right )^2-\mu^2 K^2 \Delta^2}} \right )^m\sum_{q=0}^{ \left \lfloor \frac{m-1}{2} \right \rfloor}\left( -1 \right)^q C_q^{m-1} \left ( \frac{m+\mu K}{\sqrt{\left ( m+\mu K \right )^2-\mu^2 K^2 \Delta^2}} \right )^{m-1-2q} \nonumber \\ \times & \ x^{\mu-1} \Phi_2^{(4)}\Biggl(1+2q-m,m-q-1/2,m-q-1/2,\mu-m;\mu;-\frac{\left( 1+K \right)m\mu }{\left( m+\mu K \right)\overline{\gamma }}x,-\frac{\left( 1+K \right)m\mu }{\left( m+\mu K\left( 1+\Delta \right) \right)\overline{\gamma }}x, \nonumber \\ &
-\frac{\left( 1+K \right)m\mu }{\left( m+\mu K\left( 1-\Delta \right) \right)\overline{\gamma }}x,-\frac{\left( 1+K \right)\mu }{\overline{\gamma }}x \Biggl).
\end{align}
\end{normalsize}
\vspace{-5mm}
\end{figure*}
\begin{figure*}[ht]
\begin{normalsize}
\begin{align}\label{eq14}
F_{\gamma }( x)=&\frac{ \left ( 1+K \right )^{\mu}\mu ^{\mu}}{2^{m-1}\Gamma(\mu+1) \overline{\gamma}^\mu}\left ( \frac{m}{\sqrt{\left ( m+\mu K \right )^2-\mu^2 K^2 \Delta^2}} \right )^m\sum_{q=0}^{ \left \lfloor \frac{m-1}{2} \right \rfloor}\left( -1 \right)^q C_q^{m-1} \left ( \frac{m+\mu K}{\sqrt{\left ( m+\mu K \right )^2-\mu^2 K^2 \Delta^2}} \right )^{m-1-2q} \nonumber \\ \times &\ x^{\mu} \Phi_2^{(4)}\Biggl(1+2q-m,m-q-1/2,m-q-1/2,\mu-m;\mu+1;-\frac{\left( 1+K \right)m\mu }{\left( m+\mu K \right)\overline{\gamma }}x,-\frac{\left( 1+K \right)m\mu }{\left( m+\mu K\left( 1+\Delta \right) \right)\overline{\gamma }}x, \nonumber \\ &
-\frac{\left( 1+K \right)m\mu }{\left( m+\mu K\left( 1-\Delta \right) \right)\overline{\gamma }}x,-\frac{\left( 1+K \right)\mu }{\overline{\gamma }}x \Biggr).
\end{align}
\end{normalsize}
\hrulefill
\vspace{-5mm}
\end{figure*}
\begin{figure*}[h!]
\begin{normalsize}
\begin{align}\label{eq15}
f_{\gamma }( x)= \sum_{i=0}^{\infty}\textcolor{black}{w_i} f_X^{\rm G} \left( \mu+i;\frac{\overline{\gamma}(\mu+i)}{\mu(K+1)} ;x\right).
\end{align}
\end{normalsize}
\vspace{-5mm}
\end{figure*}
\begin{figure*}[h!]
\begin{normalsize}
\begin{align}\label{eq16}
F_{\gamma }( x)= \sum_{i=0}^{\infty}\textcolor{black}{w_i} F_X^{\rm G}\left( \mu+i;\frac{\overline{\gamma}(\mu+i)}{\mu(K+1)} ;x\right).
\end{align}
\end{normalsize}
\vspace{-5mm}
\end{figure*}
\begin{figure*}[h!]
\begin{normalsize}
\begin{align}\label{eqWeights}
\textcolor{black}{
w_i=\frac{\Gamma(m+i)(\mu K)^i m^m}{\Gamma(m)\Gamma(i+1)}\frac{{\left( {1 - \Delta } \right)^i }}{{\sqrt \pi (\mu K(1 - \Delta ) + m)^{m + i} }}}&\textcolor{black}{\sum_{q=0}^{i}\binom{i}{q}\frac{{\Gamma \left( {q + \frac{1}{2}} \right)}}{{\Gamma \left( {q + 1} \right)}} \left( {\frac{{2\Delta }}{{1 - \Delta }}} \right)^q} \nonumber \\ & \textcolor{black}{\times
{}_2F_1\left( {m + i,q + \frac{1}{2};q + 1;\frac{{ - 2\mu K\Delta }}{{\mu K(1 - \Delta ) + m}}} \right),\quad m \in \mathbb{R}^+,}
\end{align}
\end{normalsize}
\vspace{-5mm}
\end{figure*}
\begin{figure*}[h!]
\begin{normalsize}
\begin{align}\label{eq17}
f_X^{\rm G}\left( \lambda;\nu ;y\right)=\frac{ \lambda^ \lambda}{\Gamma( \lambda) \nu^ \lambda}y^{\lambda-1}\exp\left( - \frac{\lambda y}{\nu} \right), \quad F_X^{\rm G}\left( \lambda;\nu ;y\right)=\frac{1}{\Gamma(\lambda)}\gamma\left ( \lambda,\frac{\lambda y}{\nu} \right ).
\end{align}
\end{normalsize}
\hrulefill
\vspace{-5mm}
\end{figure*}
\begin{lemma}\label{lemma4}
When $m\in \mathbb{R}^+$, the PDF and CDF of the SNR of the MFTR distribution can be obtained by averaging the conditional $\kappa$-$\mu$ shadowed statistics over all possible
realizations of $\theta$, as
\end{lemma}
\begin{align}\label{eq18}
f_\gamma(x)=\frac{1}{\pi}\int_{0}^{\pi}{{f}_{\gamma \left| \theta \right.}}(x)d\theta,
\end{align}
\begin{align}\label{eq19}
F_\gamma(x)=\frac{1}{\pi}\int_{0}^{\pi}{{F}_{\gamma \left| \theta \right.}}(x)d\theta,
\end{align}
where
\begin{align}
\label{eq20}
&{{f}_{\gamma \left| \theta \right.}}(x)=\frac{{{\mu }^{\mu }}{{m}^{m}}{{\left( 1+K \right)}^{\mu }}}{\Gamma (\mu )\overline{\gamma }{{\left( \mu K\left( 1+\Delta \cos \theta \right)+m \right)}^{m}}}{{\left( \frac{x}{\overline{\gamma }} \right)}^{\mu -1}} \nonumber \\ & \times {{e}^{-\frac{\mu \left( 1+K \right)}{\overline{\gamma }}x}}
{{}_{1}}{{F}_{1}}\left( m;\mu ;\frac{{{\mu }^{2}}K\left( 1+\Delta \cos \theta \right)\left( 1+K \right)}{\mu K\left( 1+\Delta \cos \theta \right)+m}\frac{x}{\overline{\gamma }} \right){,}
\end{align}
\begin{align}
\label{eq21}
{{F}_{\gamma \left| \theta \right.}}(x)=&\frac{{{\mu }^{\mu-1 }}{{m}^{m}}{{\left( 1+K \right)}^{\mu }} }{\Gamma (\mu ){{\left( \mu K\left( 1+\Delta \cos \theta \right)+m \right)}^{m}}}{{\left( \frac{x}{\overline{\gamma }} \right)}^{\mu}} \Phi_2\Bigl(\mu-m, \nonumber \\ & m;\mu+1;-\frac{\mu\left ( 1+K \right )x}{\overline{\gamma}},-\frac{\mu\left ( 1+K \right )}{\overline{\gamma}}
\nonumber \\ & \times
\frac{m x}{\mu K\left (1+\Delta \cos\theta \right )+m } \Bigr).
\end{align}
\begin{proof}
The conditional $\kappa$-$\mu$ shadowed PDF and CDF are obtained from~\cite[Eq.~(4)]{kms} and~\cite[Eq.~(6)]{kms} by substituting $\kappa$ and $\overline{\gamma}$ with~\eqref{Lema1eq4} and~\eqref{Lema1eq5}, respectively. Then, using the relationships given in~\eqref{Lema1eq7} and~\eqref{Lema1eq10} that connect the $\kappa$-$\mu$ shadowed and MFTR models,~\eqref{eq20} and~\eqref{eq21} are obtained.
\end{proof}
It must be noted that the Rician shadowed distribution is a particular case of the $\kappa$-$\mu$ shadowed distribution for the case when $\mu=1$. Thus, the integral connection between the FTR and the Rician shadowed distributions presented in \cite{ftr2}, for arbitrary positive real $m$, is a particular case of Lemma~\ref{lemma4} for $\mu=1$.
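For reference, the finite-integral form of Lemma~\ref{lemma4} can be evaluated directly with standard numerical quadrature. The following Python sketch (ours, not a reference implementation; the routine name is arbitrary) computes \eqref{eq18} by integrating the conditional $\kappa$-$\mu$ shadowed PDF \eqref{eq20} over $\theta$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn, hyp1f1

def mftr_pdf_integral(x, K, Delta, m, mu, gbar):
    # SNR PDF via Lemma 4: (1/pi) * int_0^pi f_{gamma|theta}(x) dtheta
    def integrand(theta):
        s = mu * K * (1.0 + Delta * np.cos(theta)) + m
        pref = mu**mu * m**m * (1.0 + K)**mu / (gamma_fn(mu) * gbar * s**m)
        arg = mu**2 * K * (1.0 + Delta * np.cos(theta)) * (1.0 + K) * x / (s * gbar)
        return (pref * (x / gbar)**(mu - 1.0)
                * np.exp(-mu * (1.0 + K) * x / gbar)
                * hyp1f1(m, mu, arg))
    val, _ = quad(integrand, 0.0, np.pi)
    return val / np.pi
\end{verbatim}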
\begin{lemma}\label{lemma3}
{When $m\in \mathbb{R}^+$, the PDF and CDF of the SNR of the MFTR distribution can be obtained as an infinite discrete mixture of Gamma distributions. The corresponding expressions are given in \eqref{eq15} and \eqref{eq16}, respectively, where the mixture weights $w_i$ are given in \eqref{eqWeights}, and $f_X^{\rm G}(\cdot)$ and $F_X^{\rm G}(\cdot)$ denote the PDF and CDF, respectively, of a Gamma distribution, both given in \eqref{eq17}. Note that $\gamma(\cdot,\cdot)$ in \eqref{eq17} denotes the lower incomplete gamma function defined in~\cite[Eq.~(6.5.2)]{Abramowitz}.
}
\end{lemma}
\begin{proof}
See Appendix~\ref{ap:PFDCDFreal}.
\end{proof}
Here, we point out that the PDFs and CDFs in Lemmas~\ref{lemma4} and~\ref{lemma3} are valid for unconstrained values of the MFTR fading parameters. The expressions in Lemma~\ref{lemma4}, i.e.,~\eqref{eq18} and~\eqref{eq19}, are given in simple finite-integral form in terms of well-known functions in communication theory, where the integrands are continuous bounded functions and the integration interval is finite. Therefore, the evaluation of these integrals through numerical integration routines in commercial mathematical software packages poses no challenge, and is in fact a standard approach in communication theory -- cf. the proper integral forms of the Gaussian $Q$-function \cite{Weinstein1974}, or Simon and Alouini's MGF approach to the performance analysis of wireless communication systems \cite{Simon}. The expressions in Lemma~\ref{lemma3}, i.e.,~\eqref{eq15} and~\eqref{eq16}, are given as weighted sums of Gamma distributions, which are a basic building block in many communication theory applications and correspond to the case of Nakagami-$m$ fading. These sets of expressions make it possible to leverage the rich literature devoted to baseline fading models such as $\kappa$-$\mu$ shadowed and Nakagami-$m$ in order to directly evaluate the more general MFTR model, when desired.
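To make the last point concrete, we include a minimal Python sketch (ours, with arbitrary routine names and truncation order; not a reference implementation) that evaluates the PDF in \eqref{eq15} by truncating the mixture and computing the weights \eqref{eqWeights} with standard special-function routines. It assumes $K>0$ and $0\leq\Delta<1$.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln, hyp2f1, binom
from scipy.stats import gamma as gamma_dist

def mftr_weights(K, Delta, m, mu, n_terms=60):
    # truncated mixture weights w_i of (eqWeights); assumes K > 0, 0 <= Delta < 1
    z = -2.0 * mu * K * Delta / (mu * K * (1.0 - Delta) + m)
    w = np.zeros(n_terms)
    for i in range(n_terms):
        inner = sum(binom(i, q)
                    * np.exp(gammaln(q + 0.5) - gammaln(q + 1.0))
                    * (2.0 * Delta / (1.0 - Delta))**q
                    * hyp2f1(m + i, q + 0.5, q + 1.0, z)
                    for q in range(i + 1))
        log_pref = (gammaln(m + i) + i * np.log(mu * K) + m * np.log(m)
                    - gammaln(m) - gammaln(i + 1.0) + i * np.log(1.0 - Delta)
                    - 0.5 * np.log(np.pi)
                    - (m + i) * np.log(mu * K * (1.0 - Delta) + m))
        w[i] = np.exp(log_pref) * inner
    return w

def mftr_pdf_mixture(x, K, Delta, m, mu, gbar, n_terms=60):
    # SNR PDF of (eq15): Gamma PDFs with shape mu+i and mean gbar*(mu+i)/(mu*(K+1))
    pdf = np.zeros_like(np.asarray(x, dtype=float))
    for i, wi in enumerate(mftr_weights(K, Delta, m, mu, n_terms)):
        lam = mu + i
        nu = gbar * lam / (mu * (K + 1.0))
        pdf += wi * gamma_dist.pdf(x, a=lam, scale=nu / lam)
    return pdf
\end{verbatim}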
\begin{table}[t]
\footnotesize
\caption{Conventional and generalized fading channel models derived from the MFTR distribution. }
\centering
\begin{tabular}{c|c}
\hline \hline
\multicolumn{1}{l|}{ \textbf{Fading Distribution}} & \multicolumn{1}{c}{\textbf{MFTR fading parameters } }
\\ \hline \hline
One-sided Gaussian & $a)$ $\underline{\Delta}=0$, $\underline{K} \rightarrow \infty$, $\underline{m}=0.5$, $\underline{\mu}=1$
\\ & $b)$ $\underline{\Delta}=1$, $\underline{K} \rightarrow \infty$, $\underline{m}=1$, $\underline{\mu}=1$
\\ \hline Rayleigh & $a)$ $\underline{\Delta}=0$, $\underline{K} \rightarrow \infty$, $\underline{m}=1$, $\underline{\mu}=1$
\\ & $b)$ $\underline{\Delta}=0$, $\underline{K} =0$, $\forall \underline{m}$, $\underline{\mu}=1$
\\ \hline & $a)$ $\underline{\Delta}=0$, $\underline{K} =\tfrac{1-q^2}{2q^2}$, $\underline{m}=0.5$, $\underline{\mu}=1$
\\ Nakagami-$q$ (Hoyt) & $b)$ $\forall \left \{\underline{\Delta},\underline{K} \right \}$, with $q=\sqrt{\tfrac{1+K(1-\Delta)}{1+K(1+\Delta)}}$, \\ & $\underline{m}=1$, $\underline{\mu}=1$
\\ \hline Nakagami-$m$ & $\underline{\Delta}=0$, $\underline{K} \rightarrow \infty$, $\underline{m}=m$, $\underline{\mu}=1$
\\ \hline Rician & $\underline{\Delta}=0$, $\underline{K}=K$, $\underline{m}\rightarrow \infty$, $\underline{\mu}=1$
\\ \hline Rician shadowed & $\underline{\Delta}=0$, $\underline{K}=K$, $\underline{m}=m$, $\underline{\mu}=1$
\\ \hline $\kappa$-$\mu$ shadowed & $\underline{\Delta}=0$, $\underline{K}=\kappa$, $\underline{m}=m$, $\underline{\mu}=\mu$
\\ \hline $\kappa$-$\mu$ & $\underline{\Delta}=0$, $\underline{K}=\kappa$, $\underline{m}\rightarrow \infty$, $\underline{\mu}=\mu$
\\ \hline $\eta$-$\mu$
& $\underline{\Delta}=0$, $\underline{K}=(1-\eta)/(2\eta)$, $\underline{m}= \mu$, $\underline{\mu}=2 \mu$
\\ \hline TWDP
& $\underline{\Delta}=\Delta$, $\underline{K}=K$, $\underline{m}\rightarrow \infty$, $\underline{\mu}=1$ \\ \hline Two-Wave
& $\underline{\Delta}=\Delta$, $\underline{K}\rightarrow \infty$, $\underline{m}\rightarrow \infty$, $\underline{\mu}=1$ \\ \hline FTR
& $\underline{\Delta}=\Delta$, $\underline{K}=K$, $\underline{m}=m$, $\underline{\mu}=1$ \\ \hline
\multicolumn{1}{c|}{Fluctuating Two-Wave} & \multicolumn{1}{c}{$\underline{\Delta}=\Delta$, $\underline{K}\rightarrow \infty$, $\underline{m} = m$, $\underline{\mu}=1$} \\ \hline \hline
\end{tabular}\label{Cases}
\end{table}
\begin{figure}[t]
\centering
\psfrag{A}[Bc][Bc][0.7]{\rm Case $\mathrm{A}$}
\psfrag{B}[Bc][Bc][0.7]{\rm Case $\mathrm{B}$}
\includegraphics[width=1\linewidth]{./fig1mu.eps} \caption{PDF of the MFTR signal envelope by varying $\mu$ in two different scenarios:{ \text{ Case $\mathrm{A}$}: $\Delta=0.9$, $m=8$, $K=8$, $ \overline{\gamma}=2$ and \text{Case $\mathrm{B}$}: $\Delta=0.9$, $m=4$, $K=15$, $\overline{\gamma}=1$.} }
\label{fig1}
\vspace{-2mm}
\end{figure}
\begin{figure}[t]
\centering
\psfrag{H}[Bc][Bc][0.6]{$K_{\mathrm{dB}}^{\mathrm{B}}=25$ dB}
\psfrag{Z}[Bc][Bc][0.6][0]{$K_{\mathrm{dB}}^{\mathrm{B}}=15$ dB}
\includegraphics[width=1\linewidth]{./fig2delta.eps} \caption{PDF of the MFTR signal envelope for different values of $\Delta$, with $\mu=2$, $m=6$, $K=15$, and $\overline{\gamma}=1.5$. \textcolor{black}{Markers correspond to MC simulations.}}
\label{fig2}
\end{figure}
\begin{figure}[t]
\centering
\psfrag{H}[Bc][Bc][0.6]{$K_{\mathrm{dB}}^{\mathrm{B}}=25$ dB}
\psfrag{Z}[Bc][Bc][0.6][0]{$K_{\mathrm{dB}}^{\mathrm{B}}=15$ dB}
\includegraphics[width=1\linewidth]{./fig3m.eps} \caption{PDF of the MFTR signal envelope for different values of $m$, with $\Delta=0.9$, $\mu=3$, $K=20$, and $\overline{\gamma}=1$. \textcolor{black}{Markers correspond to MC simulations.}}
\label{fig3}
\end{figure}
\begin{figure}[t]
\centering
\psfrag{H}[Bc][Bc][0.6]{$K_{\mathrm{dB}}^{\mathrm{B}}=25$ dB}
\psfrag{Z}[Bc][Bc][0.6][0]{$K_{\mathrm{dB}}^{\mathrm{B}}=15$ dB}
\includegraphics[width=1\linewidth]{./fig4K.eps} \caption{PDF of the MFTR signal envelope for different values of $K$, with $m=5$, $\Delta=0.9$, $\mu=3$, and $\overline{\gamma}=1$. \textcolor{black}{Markers correspond to MC simulations.}}
\label{fig4}
\end{figure}
\subsection{Special cases and effect of parameters}
The MFTR model derived here is connected to other fading distributions commonly used in several wireless application scenarios by specializing the corresponding set of parameters, as stated in Table~\ref{Cases}. In order to avoid confusion, the parameters related to the MFTR distribution are underlined. We would like to mention that a multicluster version of the TWDP model in \cite{durgin,Rao2015} naturally appears as we let $m\rightarrow\infty$; however, this model has its own entity and deserves special attention as a separate item, which is left for future research activities.
In Figs.~\ref{fig1}-\ref{fig4}, we illustrate how different propagation conditions affect the shape of the MFTR distribution by evaluating the PDF of the received signal envelope $f_R(r)$ for a set of values of the shape parameters: $K$ (power ratio between dominant and diffuse components), $\Delta$ (amplitude imbalance between the two dominant components), $m$ (severity of the dominant waves' fluctuation), and $\mu$ (clustering of scattered multipath waves). The PDFs illustrated in the figures have been obtained by evaluating \eqref{eq13}, although all mathematical expressions in this section have been double-checked through Monte Carlo (MC) simulations. These MC simulations are also included with markers in the figures whenever they do not affect their readability. We also checked that the same results are obtained when eqs. \eqref{eq15} and \eqref{eq18} are used to evaluate the PDFs.
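Such MC checks are simple to reproduce. The Python sketch below is only illustrative and rests on our own reading of the MFTR construction (two specular components with amplitudes $V_1,V_2$ placed in the first cluster, a common unit-mean Gamma-distributed fluctuation with parameter $m$, $\mu$ diffuse clusters of equal power, $K=(V_1^2+V_2^2)/(2\mu\sigma^2)$ and $\Delta=2V_1V_2/(V_1^2+V_2^2)$); it is not the simulator used for the figures and assumes integer $\mu$ and unit total diffuse power.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def mftr_envelope_samples(K, Delta, m, mu, n=200_000, rng=rng):
    # assumed construction (not taken from the paper): unit total diffuse power
    sigma2 = 1.0 / (2.0 * mu)       # per-dimension diffuse variance per cluster
    V1 = np.sqrt(K * (1.0 + np.sqrt(1.0 - Delta**2)) / 2.0)
    V2 = np.sqrt(K * (1.0 - np.sqrt(1.0 - Delta**2)) / 2.0)
    zeta = rng.gamma(shape=m, scale=1.0 / m, size=n)   # unit-mean fluctuation
    phi1 = rng.uniform(0.0, 2.0 * np.pi, n)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, n)
    # cluster 1 carries the two specular components; clusters 2..mu are diffuse
    c1 = (np.sqrt(zeta) * (V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2))
          + rng.normal(0, np.sqrt(sigma2), n) + 1j * rng.normal(0, np.sqrt(sigma2), n))
    power = np.abs(c1)**2
    for _ in range(int(mu) - 1):
        d = rng.normal(0, np.sqrt(sigma2), n) + 1j * rng.normal(0, np.sqrt(sigma2), n)
        power += np.abs(d)**2
    return np.sqrt(power)
\end{verbatim}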
In Fig.~\ref{fig1}, we clearly perceive that the MFTR model's behavior is inherently bimodal\footnote{The bimodality of the distribution is related to the existence of two local maxima in its PDF.} {(see \textit{Case $\mathrm{A}$})}. Interestingly, thanks to the presence of the $\mu$ parameter adequately combined with the other fading parameters, the MFTR model can exhibit a more pronounced bimodality as $\mu$ increases {(see \textit{Case $\mathrm{B}$})}. Specifically, the MFTR model can exhibit both a left-bimodality (i.e., the first local maximum is larger) and a right-bimodality (i.e., the second local maximum is larger). This is in stark contrast with the behavior of the baseline $\kappa$-$\mu$ shadowed (unimodal) or FTR distributions (only right-bimodality) from which the MFTR distribution originates. This feature brings additional flexibility to improve the versatility of the MFTR model to fit field measurements in emerging wireless scenarios such as mm-Wave and sub-terahertz bands \cite{Du2022}.
In Fig.~\ref{fig2}, it is confirmed that for low values of $\Delta$, i.e., one specular component is dominant in cluster 1, then the MFTR distribution exhibits a unimodal behavior similar to the $\kappa$-$\mu$ shadowed case. Conversely, for larger values of $\Delta$, i.e., two specular components with similar amplitudes are present in cluster 1, then the MFTR distribution exhibits a bimodal behavior. From Figs.~\ref{fig3}-\ref{fig4}, we can see that such bimodality of the MFTR distribution is also closely linked to parameters $m$ and $K$. Specifically, large values of $m$ or $K$ yield a more pronounced bimodality. Conversely, low values of $m$ or $K$ tend to smoothen such bimodality.
\section{Performance Analysis in Wireless Systems}\label{sec5}
In this section, we illustrate the flexibility of the MFTR model when used for performance analysis. For illustrative purposes, we analyze key performance metrics such as the outage probability (OP), in both exact and asymptotic forms, as well as the amount of fading (AoF) \cite{Simon}.
\subsection{Outage Probability}
The instantaneous Shannon channel capacity per unit bandwidth is defined as
\begin{align}
\label{eq22}
C=\log_2(1+\gamma).
\end{align}
The outage probability is defined as the probability that the capacity $C$ falls below a certain
threshold rate $R_{\rm th}$, i.e.,
\begin{align}
\label{eq23}
P_{\rm out}&=\Pr\left \{ C<R_{\rm th} \right \}=\Pr\left \{ \log_2 (1+\gamma)<R_{\rm th} \right \} \nonumber \\ & = \Pr\left \{ \gamma <\underbrace{2^{R_{\rm th}}-1}_{\gamma_{\rm th}} \right \}.
\end{align}
{Consequently,} the OP is given in terms of the CDF of the SNR as
\begin{align}
\label{eq24}
P_{\rm out}&=F_\gamma\left ( 2^{R_{\rm th}}-1 \right ),
\end{align}
in which $F_\gamma\left ( \cdot \right )$
is given by either \eqref{eq14}, \eqref{eq16} or \eqref{eq19}. Although expressed in exact form in terms of the MFTR CDFs previously derived, $P_{\rm out}$ provides limited insight into the effect of the fading parameters on the system performance. Thus, we next introduce a closed-form asymptotic OP expression to evaluate the system performance in the high-SNR regime.
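As a usage example, the sketch below (our own, reusing the hypothetical \texttt{mftr\_weights} helper sketched in the previous section) evaluates \eqref{eq24} through the truncated Gamma-mixture CDF \eqref{eq16}.
\begin{verbatim}
from scipy.stats import gamma as gamma_dist

def mftr_outage(R_th, K, Delta, m, mu, gbar, n_terms=60):
    # P_out = F_gamma(2^R_th - 1), with F_gamma from the truncated mixture (eq16)
    g_th = 2.0**R_th - 1.0
    P_out = 0.0
    for i, wi in enumerate(mftr_weights(K, Delta, m, mu, n_terms)):
        lam = mu + i
        nu = gbar * lam / (mu * (K + 1.0))
        P_out += wi * gamma_dist.cdf(g_th, a=lam, scale=nu / lam)
    return P_out
\end{verbatim}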
\subsection{Asymptotic Outage \textcolor{black}{ Probability}}
Here, to gain further insight into the role of the fading parameters in the system performance, we derive an asymptotic expression that characterizes the behavior of the OP given in~\eqref{eq24} in the high-SNR regime. Our goal is to obtain an asymptotic expression of the form $P_{\rm out}\simeq \mathrm{G}_c\left(\gamma_{\rm th}/\overline{\gamma}\right)^{\mathrm{G}_d}$~\cite{Giannakis2003}, where $\mathrm{G}_c$ and $\mathrm{G}_d$ denote the power offset and the diversity order, respectively. The corresponding asymptotic OP is given in the following Proposition.
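Before stating the result, we note that the diversity order can also be read off numerically as the high-SNR slope of the OP on a log-log scale. A quick sanity-check sketch (ours, reusing the \texttt{mftr\_outage} sketch above; the SNR points are arbitrary and the truncated series loses accuracy for very small OP values) is:
\begin{verbatim}
import numpy as np

def diversity_order_estimate(R_th, K, Delta, m, mu, gbar_dB=(40.0, 50.0)):
    # G_d ~ -d log10(P_out) / d log10(gbar) between two high-SNR points
    g1, g2 = (10.0**(d / 10.0) for d in gbar_dB)
    p1 = mftr_outage(R_th, K, Delta, m, mu, g1)
    p2 = mftr_outage(R_th, K, Delta, m, mu, g2)
    return -(np.log10(p2) - np.log10(p1)) / (np.log10(g2) - np.log10(g1))
\end{verbatim}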
\begin{prop}\label{propo
^l\mathfrak{g}_{i\bar i l}-
S_{i\bar i}^l\mathfrak{g}_{1\bar 1 l})
-C\sqrt{Q}
\end{aligned}
\end{equation}
where as above one denotes
\[Q=|\partial\overline{\partial}u|^2+|\partial \partial u|^2.\]
Differentiating equation \eqref{main-equ2} twice
(using covariant derivative), we obtain
\begin{equation}
\label{gqy-45diff the equation}
\begin{aligned}
F^{i\bar i}\mathfrak{g}_{i\bar i l} \,& =\psi_l, \\ \end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
F^{i\bar i}\mathfrak{g}_{i\bar i 1\bar 1}
=\psi_{1\bar 1}-F^{i\bar j,l\bar m }\mathfrak{g}_{i\bar j1}\mathfrak{g}_{l\bar m\bar 1}.
\end{aligned}
\end{equation}
Then we have
\begin{equation}
\label{mp3}
\begin{aligned}
F^{i\bar i}\mathfrak{g}_{1\bar 1 i\bar i}
\geq \,&
2\mathfrak{Re} (F^{i\bar i}\bar T^j_{1i}\mathfrak{g}_{1\bar j i})
- 2\mathfrak{Re}
F^{i\bar i}S_{i\bar i}^l\mathfrak{g}_{1\bar 1 l}
-C\sqrt{Q}\sum F^{i\bar i}.
\end{aligned}
\end{equation}
Putting the above inequalities
into \eqref{gblq-C90} we get
\begin{equation}
\label{gblq-C90-2}
\begin{aligned}
0 \geq \,&
\mathfrak{g}_{1\bar{1}}F^{i\bar i} (\phi_{i\bar i} - \phi_i \phi_{\bar i})
- 2\mathfrak{Re}
(F^{i\bar i}S_{i\bar i}^l\mathfrak{g}_{1\bar 1 l})
+
2\mathfrak{Re} (F^{i\bar i}\bar T^1_{1i}\mathfrak{g}_{1\bar 1 i})
-C\sqrt{Q}\sum F^{i\bar i}
\\
=\,&
\mathfrak{g}_{1\bar 1} \mathcal{L}\phi -\mathfrak{g}_{1\bar 1}F^{i\bar i}\phi_i\phi_{\bar i}
-2\mathfrak{g}_{1\bar 1}\mathfrak{Re} (F^{i\bar i}\bar T^1_{1i}\phi_{i})
-C\sqrt{Q}\sum F^{i\bar i}. \nonumber
\end{aligned}
\end{equation}
Let $\phi=\log\eta+\varphi(w)$, where as above $w = |\nabla u|^2$, and $\eta$ is the cutoff function given by \eqref{2-22}. Then
\begin{equation}
\begin{aligned}
\mathcal{L}\phi=\,&
\frac{\mathcal{L}\eta}{\eta} -F^{i\bar i}\frac{|\eta_i|^2}{\eta^2}+\varphi' \mathcal{L}w +\varphi''F^{i\bar i}|w_i|^2,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
F^{i\bar i}|\phi_i|^2
+2 \mathfrak{Re} F^{i\bar i}\bar T^1_{1i}\phi_{i} \leq \frac{4}{3}F^{i\bar i}|\phi_i|^2+C\sum F^{i\bar i}
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
F^{i\bar i} |\phi_i|^2
\leq \frac{3}{2}F^{i\bar i}|\varphi_i|^2+3F^{i\bar i}\frac{|\eta_i|^2}{\eta^2}. \nonumber
\end{aligned}
\end{equation}
As in \cite{Guan2008IMRN} we set $$\varphi=\varphi(w)=\left(1-\frac{w}{2N}\right)^{-\frac{1}{2}} \mbox{ where $N=\sup_{\{\eta>0\}}|\nabla u|^2$}. $$
One can check $\varphi'=\frac{\varphi^3}{4N},$ $\varphi''=\frac{3\varphi^5}{16N^2}$ and $1\leq \varphi\leq \sqrt{2}$.
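For the reader's convenience, we record the elementary computation behind these identities:
\begin{equation}
\begin{aligned}
\varphi'=\frac{1}{4N}\Big(1-\frac{w}{2N}\Big)^{-\frac{3}{2}}=\frac{\varphi^3}{4N},
\qquad
\varphi''=\frac{3}{16N^2}\Big(1-\frac{w}{2N}\Big)^{-\frac{5}{2}}=\frac{3\varphi^5}{16N^2}, \nonumber
\end{aligned}
\end{equation}
and since $0\leq w\leq N$ on $\{\eta>0\}$, we have $\frac{1}{2}\leq 1-\frac{w}{2N}\leq 1$, whence $1\leq \varphi\leq \sqrt{2}$.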
And so
\begin{equation}
\begin{aligned}
\varphi''-2\varphi'^2=\frac{\varphi^5}{16N^2}(3-2\varphi)>\frac{\varphi^5}{96N^2}.
\end{aligned}
\end{equation}
By Lemma \ref{lemma-Lw} we have
\begin{equation}
\begin{aligned}
\mathcal{L}(w) \geq \frac{3\theta Q}{4} \sum F^{i\bar i}-C (1+\sum F^{i\bar i}).
\end{aligned}
\end{equation}
By \eqref{2-22} we obtain
\begin{equation}
\begin{aligned}
0\leq \eta \leq 1, ~~\eta|_{B_{\frac{r}{2}}}\equiv 1,
~~F^{i\bar i}\frac{|\eta_i|^2}{\eta^2} \leq \frac{C}{r^2 \eta},
~~\frac{\mathcal{L}\eta}{\eta} \leq \frac{C}{r^2\eta} \sum F^{i\bar i}.
\end{aligned}
\end{equation}
In conclusion we finally derive
\begin{equation}
\begin{aligned}
0
\geq
\frac{9\theta Q}{16N} \sum F^{i\bar i}-\frac{C}{r^2\eta}
\sum F^{i\bar i}
-C\frac{\sqrt{Q}}{\mathfrak{g}_{1\bar{1}}} \sum F^{i\bar i}.
\end{aligned}
\end{equation}
This gives
\[\eta \mathfrak{g}_{1\bar 1}\leq \frac{C}{r^2}.\]
\end{proof}
\subsection{Completion of the proof of Theorem \ref{thm1-inter}}
First, Theorem \ref{mainthm-1} implies the following result.
\begin{theorem}
\label{thm-3}
Assume \eqref{concave}, \eqref{elliptic-weak} and \eqref{key-123} hold. Then for fixed $\sigma$,
\begin{itemize}
\item[$\mathrm{\bf (1)}$] If $ \partial\Gamma^\sigma\setminus\bar\Gamma_n \neq \emptyset,$
then $\kappa_{\Gamma_{\sigma,f}}\geq 1$.
\item[$\mathrm{\bf (2)}$]
If $\Gamma$ is a type 2 cone
and
\begin{equation}
\label{unbound-1}
\begin{aligned}
f(0,\cdots,0,t)>\sigma \mbox{ for some } t>0,
\end{aligned}
\end{equation}
then $\kappa_{\Gamma_{\sigma,f}}=n-1$ and $f$ is of fully uniform ellipticity when restricted to
$\partial\Gamma^\sigma$.
\end{itemize}
\end{theorem}
\begin{proof}
Prove $\mathrm{\bf (1)}$: By \eqref{key-123}, $\tau_\sigma\geq 0$. Let $\lambda\in \partial\Gamma^\sigma\setminus\bar\Gamma_n$, and we assume $\lambda_n<0$. Note that $\lambda-\tau_\sigma \vec{\bf 1}\in \Gamma_{\sigma,f}$ and $\lambda_n-\tau_\sigma\leq \lambda_n<0$. Thus $\kappa_{\Gamma_{\sigma,f}}\geq1.$
Prove $\mathrm{\bf (2)}$: From \eqref{unbound-1}, there are some positive constants $\epsilon_0$, $t_0$ such that $f(-\epsilon_0,\cdots,-\epsilon_0,t_0)>\sigma$. And then there is $0<\beta<1$ such that $f(-\beta\epsilon_0,\cdots,-\beta\epsilon_0,\beta t_0)=\sigma.$ Thus $(-\beta\epsilon_0-\tau_\sigma,\cdots,-\beta\epsilon_0-\tau_\sigma,\beta t_0-\tau_\sigma)\in \Gamma_{\sigma,f}$ as required.
\end{proof}
Next we present functions that are of fully uniform ellipticity.
\begin{corollary}
\label{coro-n-1-equ}
Let $(\tilde{f},\tilde{\Gamma})$ be as defined in \eqref{n-1-equation}. Let $\sup_{\partial\Gamma}f<\sigma<\sup_\Gamma f$.
Suppose that $(f,\Gamma)$ satisfies \eqref{concave}, \eqref{elliptic-weak}, \eqref{key-123}, $\Gamma\neq\Gamma_n$
and
\begin{equation}
\label{unbound-2}
\begin{aligned}
f(t,\cdots,t, 0)>\sigma \mbox{ for some } t>0.
\end{aligned}
\end{equation}
Then $\tilde{f}$ is of fully uniform ellipticity when restricted to $\{\lambda\in \tilde{\Gamma }: \tilde{f}(\lambda)=\sigma\}$.
\end{corollary}
Corollary \ref{coro-n-1-equ} immediately implies the following:
\begin{proposition}
If \eqref{concave}, \eqref{elliptic-weak}, \eqref{neqgamma_n}, \eqref{con10} and \eqref{con11} hold, then equation \eqref{n-1-equ1} is of fully uniform ellipticity at any solution
satisfying \eqref{admissible-f1}.
\end{proposition}
This proposition confirms all the assumptions imposed in Theorem \ref{thm1-inter-3},
thereby obtaining Theorem \ref{thm1-inter}.
\section{Applications to Hessian
equations
}
\label{sec5}
Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold, possibly with boundary $\partial M$, $\bar M=M\cup \partial M$.
We consider
the Hessian equations
\begin{equation}
\label{hessianequ1-riemann}
\begin{aligned}
f(\lambda(\nabla^2 u+A))=\psi,
\end{aligned}
\end{equation}
where $\psi$ is a smooth function and $A$ is a smooth symmetric $(0,2)$-type tensor.
The second order estimate for \eqref{hessianequ1-riemann} on curved Riemannian manifolds was studied by
Guan \cite[Section 3]{Guan12a},
extending previous results in literature, see e.g. \cite{LiYY1990,Urbas2002}.
The gradient estimate is the remaining task in the study of Hessian equations.
However, it is rather hard to prove the gradient estimate for Hessian equations on curved Riemannian manifolds.
The gradient estimate was obtained in \cite{LiYY1990} under assumptions \eqref{t1-to-0},
$\lim_{|\lambda|\rightarrow +\infty}\sum_{i=1}^n f_i(\lambda)=+\infty$ and the requirement that the Riemannian manifold has nonnegative sectional curvature, and was later extended by Urbas \cite{Urbas2002},
who replaced these restrictions by \eqref{key2-yuan} and \eqref{positive-1}.
One may use Lemmas \ref{yuan-lemma1-weingarten} and \ref{yuan-lemma2-weingarten}
to improve their results.
In fact, Theorem \ref{coro1.6} and Proposition \ref{mainthm-2} allow us to derive the
gradient estimate for more general equations
when
\begin{equation}
\label{key3-yuan}
\sum_{i=1}^n f_i(\lambda)\lambda_i\geq -K_0\sum_{i=1}^n f_i(\lambda)
\mbox{ for some } K_0\geq0.
\end{equation}
If \eqref{key3-yuan} holds for any $\lambda\in \Gamma^{\underline{\psi},\overline{\psi}}$, then according to
Proposition \ref{mainthm-2} and \eqref{sumfi-
2},
we obtain
a more general inequality than \eqref{key2-yuan}
\begin{equation}
\begin{aligned}
f_i(\lambda) \geq \theta+\theta \sum_{j=1}^n f_j(\lambda) \quad\mbox{if } \lambda_i\leq -K_0, \mbox{ } \forall \lambda\in \Gamma^{\underline{\psi},\overline{\psi}}.
\end{aligned}
\end{equation}
As an application, one can follow a strategy analogous to that used in \cite{Urbas2002} to
derive a gradient bound for solutions to \eqref{hessianequ1-riemann} under the $\mathcal{C}$-subsolution assumption
\begin{equation}
\label{existenceofsubsolution2}
\begin{aligned}
\lim_{t\rightarrow +\infty}f(\lambda(\nabla^2\underline{u}+A)+t\mathrm{e}_i)>\psi
\mbox{ in } \bar M, \quad \forall 1\leq i\leq n
\end{aligned}
\end{equation}
where
$\mathrm{e}_i$ is the $i$-$\mathrm{th}$ standard basis vector of $\mathbb{R}^n$.
\begin{proposition}
\label{thm1-gradient}
In addition to \eqref{concave}, \eqref{elliptic-weak} and \eqref{key3-yuan},
we assume there is a $C^2$-smooth $\mathcal{C}$-subsolution $\underline{u}$.
Let $u\in C^3(M)\cap C^1(\bar M)$ be a solution to \eqref{hessianequ1-riemann} with $\lambda(\nabla^2 u+A)\in \Gamma$,
then
\begin{equation}
\begin{aligned}
\sup_{M}|\nabla u|\leq C(1+\sup_{\partial M}|\nabla u|), \nonumber
\end{aligned}
\end{equation}
where $C$ depends on $|\psi|_{C^1(\bar M)}$, $|\underline{u}|_{C^2(\bar M)}$ and other known data.
\end{proposition}
With gradient estimate at hand, as in \cite{Guan12a,yuan2021cvpde}, we can prove the following:
\begin{theorem}
Let $(M^n,g)$ be a compact Riemannian manifold with smooth boundary.
Let $\varphi\in C^\infty(\partial M)$, and let $\psi\in C^\infty(\bar M)$ be a function satisfying $\inf_M\psi>\sup_{\partial\Gamma}f$.
Suppose that there is an admissible function
$\underline{u}\in C^{3,1}(\bar M)$ satisfying $$f(\lambda(\nabla^2 \underline{u}+A))\geq \psi, \quad \underline{u}|_{\partial M }=\varphi.$$ In addition to \eqref{elliptic}, \eqref{concave}, we assume \eqref{key3-yuan} holds for $$\inf_M\psi\leq f(\lambda)\leq \sup_M f(\lambda(\nabla^2 \underline{u}+A)).$$
Then there exists a unique smooth admissible function $u$ solving \eqref{hessianequ1-riemann} with $u|_{\partial M}=\varphi$.
\end{theorem}
\begin{remark}
Condition \eqref{key3-yuan} for $K_0=0$ was also used in \cite{Guan12a} to derive the boundary estimate for the Dirichlet problem of
\eqref{hessianequ1-riemann}, and later in \cite{GSS14} (for $K_0\geq0$) to
study first initial-boundary value problems.
Our results
indicate that the technical assumptions $\mathrm{\bf (i)}$-$\mathrm{\bf (iii)}$ imposed in \cite[Theorem 1.10]{Guan12a}, as well as assumptions $\mathrm{\bf (i)}$-$\mathrm{\bf (iv)}$ in \cite[Theorem 1.9]{GSS14}, can be removed.
\end{remark}
\subsection*{Acknowledgements}
The author was supported by the National Natural Science Foundation of China under grant 11801587.
\begin{appendix}
\section{Some standard lemmas}
In this appendix we summarize some standard lemmas.
\begin{lemma}
\label{k-buchong2}
Let $f$ satisfy \eqref{concave} and \eqref{elliptic-weak}, then \eqref{sumfi-02} holds.
\end{lemma}
The above lemma has been used in \cite{GQY2018}.
Below we present another one.
\begin{lemma}
\label{k-buchong1}
Suppose $f$ satisfies \eqref{concave} and \eqref{elliptic-weak}.
Then for any $\sigma$
with $\sigma<\sup_\Gamma f$ and $\partial\Gamma^\sigma\neq \emptyset$,
there exists $c_\sigma \vec{\bf 1}\in \partial\Gamma^\sigma.$
\end{lemma}
\begin{proof}
For $\sigma<\sup_\Gamma f$, the level set $\partial\Gamma^\sigma$ (if $\partial\Gamma^\sigma\neq \emptyset$) is a
convex noncompact
hypersurface contained in $\Gamma$. Moreover, $\partial\Gamma^\sigma$ is symmetric with respect to the diagonal.
Let $\lambda^0\in \partial\Gamma^\sigma$ be the closest point to the origin. (Such a point exists, since $\partial\Gamma^\sigma$ is a closed set.) The idea is to prove that $\lambda^0$ is the point we are looking for.
Assume $\lambda^0$ does not lie on the diagonal. Then, by the Implicit Function Theorem together with the convexity and symmetry of $\partial\Gamma^\sigma$, one can choose $\lambda\in \partial\Gamma^\sigma$
whose distance to the origin is strictly smaller than that of $\lambda^0$, which is a contradiction.
\end{proof}
\section{Criterion for
$f$ satisfying \eqref{addistruc}}
\label{appendix1}
We summarize characterizations of $f$ when it satisfies \eqref{concave} and \eqref{addistruc}.
The following lemma was first proposed in
\cite{yuan2021-2}\renewcommand{\thefootnote}{\fnsymbol{footnote}}\footnote{The paper is extracted from
[arXiv:2203.03439] and the first parts of
[arXiv:2001.09238; arXiv:2106.14837].}
and further reformulated in
\cite{yuan-PUE-conformal}.
\begin{lemma}[\cite{yuan2021-2,yuan-PUE-conformal}]
\label{lemma3.4}
For $f$ satisfying \eqref{concave}, the following statements are equivalent.
\begin{itemize}
\item $f$ satisfies \eqref{addistruc}.
\item $\sum_{i=1}^n f_i(\lambda)\mu_i>0, \mbox{ } \forall \lambda, \mbox{ } \mu\in \Gamma. $
\item $f(\lambda+\mu)>f(\lambda), \mbox{ }\forall \lambda, \mbox{ } \mu\in \Gamma.$
\end{itemize}
\end{lemma}
\begin{corollary}[\cite{yuan-PUE-conformal}]
\label{coro3.2}
Assume \eqref{concave} and \eqref{addistruc} hold. Then we have \eqref{elliptic-weak} and
$\sum_{i=1}^n f_i(\lambda)>0$.
\end{corollary}
Inspired by the following key observation derived from \eqref{concave-1}
\begin{equation}
\label{010}
\begin{aligned}
\mbox{For any } \lambda, \mbox{ } \mu\in\Gamma, \mbox{ } \sum_{i=1}^n f_i(\lambda)\mu_i \geq \limsup_{t\rightarrow+\infty} f(t\mu)/t \nonumber
\end{aligned}\end{equation}
the author \cite{yuan2021-2} introduced the following two conditions:
\begin{equation}
\label{addistruc-2}
\begin{aligned}
\mbox{For any } \lambda\in\Gamma, \mbox{ }
\lim_{t\rightarrow+\infty} f(t\lambda)>-\infty,
\end{aligned}
\end{equation}
\begin{equation}
\label{addistruc-5}
\begin{aligned}
\mbox{For any } \lambda\in\Gamma, \mbox{ }
\limsup_{t\rightarrow+\infty} f(t\lambda)/t\geq0.
\end{aligned}
\end{equation}
Obviously, it leads to
\begin{lemma}
[\cite{yuan2021-2}]
\label{lemma-new-1}
Suppose $f$ satisfies \eqref{concave} and \eqref{addistruc-5}.
Then \begin{equation}
\label{addistruc-04}
\begin{aligned}
\sum_{i=1}^n f_i(\lambda)\mu_i\geq 0 \mbox{ for any } \lambda, \mbox{ } \mu\in\Gamma.
\end{aligned}
\end{equation}
In addition, $f_i(\lambda)\geq 0$ in $\Gamma$ for all $1\leq i\leq n$.
In particular it satisfies
\begin{equation}
\label{addistruc-3}
\begin{aligned}
\sum_{i=1}^n f_i(\lambda)\lambda_i\geq0, \quad \forall \lambda\in\Gamma.
\end{aligned}
\end{equation}
\end{lemma}
We have the following criteria for concave symmetric functions.
\begin{lemma}
\label{lemma-4b}
In the presence of \eqref{concave}, the following statements are equivalent.
\begin{itemize}
\item $f$ satisfies \eqref{addistruc-2}.
\item $f$ satisfies \eqref{addistruc-5}.
\item $f$ satisfies \eqref{addistruc-04}.
\item $f$ satisfies \eqref{addistruc-3}.
\end{itemize}
\end{lemma}
We can deduce the following lemma when \eqref{sumfi>0} holds.
\begin{lemma}
\label{lemma-new-2}
Suppose that \eqref{concave} and \eqref{sumfi>0} hold.
Then the following statements are equivalent to each other.
\begin{itemize}
\item $f$ satisfies \eqref{addistruc}.
\item $f$ satisfies \eqref{addistruc-2}.
\item $f$ satisfies \eqref{addistruc-5}.
\item $f$ satisfies \eqref{addistruc-04}.
\item $f$ satisfies \eqref{addistruc-3}.
\end{itemize}
\end{lemma}
\begin{proof}
From Lemmas \ref{lemma3.4} and \ref{lemma-4b},
it suffices to prove
\[\eqref{addistruc-5} \Rightarrow \eqref{addistruc}.\]
Fix $\lambda$, $\mu\in\Gamma$.
Note that $t\mu-\lambda- \vec{\bf 1} \in\Gamma$ for $t> t_{\mu,(\lambda+{\vec{\bf 1}})}>0$.
Together with \eqref{concave-1}, \eqref{addistruc-04} (which is available by Lemma \ref{lemma-4b}) and \eqref{sumfi>0},
we derive
\[f(t\mu)\geq f(\lambda) + \sum_{i=1}^n f_i(t\mu)>f(\lambda)\mbox{ for } t>t_{\mu,(\lambda+{\vec{\bf 1}})}.\]
\end{proof}
\begin{lemma}
[\cite{yuan2021-2}]
\label{lemma1-con-addi}
If $f$ satisfies \eqref{concave}, \eqref{elliptic} and \eqref{t1-to-0}, then it obeys \eqref{addistruc}.
\end{lemma}
\begin{remark}
Lemma \ref{lemma-new-2} was proved in \cite{yuan2021-2}
when $f$ satisfies \eqref{concave}-\eqref{elliptic}.
\end{remark}
\end{appendix}
\section{Introduction}
After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file.
Please follow the steps and style guidelines outlined below for submitting your author response.
The author rebuttal is optional and, following similar guidelines to previous CVPR conferences, is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers.
It is NOT intended to add new contributions (theorems, algorithms, experiments) that were absent in the original submission and NOT specifically requested by the reviewers.
You may optionally add a figure, graph, or proof to your rebuttal to better illustrate your answer to the reviewers' comments.
Per a passed 2018 PAMI-TC motion, reviewers should refrain from requesting significant additional experiments for the rebuttal or penalize for lack of additional experiments.
Authors should refrain from including new experimental results in the rebuttal, especially when not specifically requested to do so by the reviewers.
Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers.
Just like the original submission, the rebuttal must maintain anonymity and cannot include external links that reveal the author identity or circumvent the length restriction.
The rebuttal must comply with this template (the use of sections is not required, though it is recommended to structure the rebuttal for ease of reading).
\subsection{Response length}
Author responses must be no longer than 1 page in length including any references and figures.
Overlength responses will simply not be reviewed.
This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide.
Note that this \LaTeX\ guide already sets figure captions and references in a smaller font.
\section{Formatting your Response}
{\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.}
All text must be in a two-column format.
The total allowable size of the text area is $6\frac78$ inches (17.46 cm) wide by $8\frac78$ inches (22.54 cm) high.
Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them.
The top margin should begin 1 inch (2.54 cm) from the top edge of the page.
The bottom margin should be $1\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper;
for A4 paper, approximately $1\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the page.
Please number any displayed equations.
It is important for readers to be able to refer to any particular equation.
Wherever Times is specified, Times Roman may also be used.
Main text should be in 10-point Times, single-spaced.
Section headings should be in 10 or 12 point Times.
All paragraphs should be indented 1 pica (approx.~$\frac{1}{6}$ inch or 0.422 cm).
Figure and table captions should be 9-point Roman type as in \cref{fig:onecol}.
List and number all bibliographical references in 9-point Times, single-spaced,
at the end of your response.
When referenced in the text, enclose the citation number in square brackets, for example~\cite{Alpher05}.
Where appropriate, include the name(s) of editors of referenced books.
\begin{figure}[t]
\centering
\fbox{\rule{0pt}{0.5in} \rule{0.9\linewidth}{0pt}}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:onecol}
\end{figure}
To avoid ambiguities, it is best if the numbering for equations, figures, tables, and references in the author response does not overlap with that in the main paper (the reviewer may wonder if you talk about \cref{fig:onecol} in the author response or in the paper).
See \LaTeX\ template for a workaround.
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered.
Please ensure that any point you wish to make is resolvable in a printed copy of the response.
Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print.
Readers (and reviewers), even of an electronic copy, may choose to print your response in order to read it.
You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it is almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below:
{\small\begin{verbatim}
\usepackage{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.pdf}
\end{verbatim}
}
\section{Conclusion}
\label{sec:conclusion}
We present ViewSeg\xspace, an end-to-end approach which learns a 3D representation of a scene from merely 4 input views.
ViewSeg\xspace enables querying its learnt 3D representation with a previously unseen target viewpoint of a novel scene to predict semantics and depth from that view, without access to any visual information (\eg RGB) from the view.
We discuss our work's limitations with an extensive quantitative and qualitative analysis.
We believe that we present a very promising direction to learn 3D from 2D data with lots of potential for exciting future work.
Regarding ethical risks, we do not foresee any immediate dangers.
The datasets used in this work do not contain any humans or any other sensitive information.
\mypar{Acknowledgments}
We thank Shuaifeng Zhi for his help with Semantic-NeRF, Oleksandr Maksymets and Yili Zhao for their help with AI-Habitat, and Ang Cao, Chris Rockwell, Linyi Jin, Nilesh Kulkarni for helpful discussions.
\section{Introduction}
\label{sec:intro}
Humans can build a rich understanding of scenes from a handful of images. The pictures in Fig.~\ref{fig:teaser}, for instance, let us imagine how the objects would occlude, disocclude, and change shape as we walk around the room. This skill is useful in new environments, for example if one were at a friend's house, looking for a table to put a cup on. One can reason that a side table may be by the couch without first mapping the room or worrying about what color the side table is. As one walks into a room, one can readily sense floors behind objects and chair seats behind tables, etc. This ability is so intuitive that entire industries like hotels and real estate depend on persuading users with a few photos.
The goal of this paper is to give computers the same ability.
In AI, this skill allows autonomous agents to purposefully and efficiently interact with the scene and its objects, bypassing the expensive step of mapping.
AR/VR and graphics also benefit from 3D scene understanding.
To this end, we propose to learn a 3D representation that enables machines to recognize a scene by segmenting it into semantic categories from {\it novel views}.
Each novel view, provided in the form of camera coordinates, queries the learnt 3D representation to produce a semantic segmentation of the scene from that view {\it without access to the view's RGB image}, as we show in Fig.~\ref{fig:teaser}.
We aim to learn this 3D representation without onerous 3D supervision.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/teaser.pdf}
\vspace{-6mm}
\caption{We propose ViewSeg\xspace which takes as input (a) a few images of a novel scene, and recognizes the scene from novel viewpoints. The novel viewpoint, in the form of camera coordinates, queries (b) our learnt 3D representation to produce (c) semantic segmentations from the view {\it without access to the view's RGB image}. The view query additionally produces (c) depth. ViewSeg\xspace trains on hundreds of scenes using multi-view 2D annotations and no 3D supervision. Depth colormap: 0.1m \includegraphics[width=0.13\linewidth]{figures/depth_colormap.png} 20m.}
\label{fig:teaser}
\vspace{-5mm}
\end{figure}
Making progress on the problem in complex indoor scenes requires connecting three key areas of computer vision: semantic understanding (i.e., naming objects), 3D understanding, and novel view synthesis (NVS). This puts the problem out of reach of work in any one area. For instance, there have been substantial advances in NVS, including methods like NeRF~\cite{mildenhall2020nerf} or SynSin~\cite{wiles2020synsin}. These approaches perform well on NVS, including results in new scenes~\cite{yu2021pixelnerf,wiles2020synsin}, but are focused on only appearance. In addition to numerous semantic-specific details, recognition in novel viewpoints via direct appearance synthesis is suboptimal: one may be sure of the presence of a rug behind a couch, but unsure of its particular color. Similarly, there have been advances in learning to infer 3D properties of scenes from image cues~\cite{factored3dTulsiani18,Nie_2020_CVPR,Gkioxari2019}, or with differentiable rendering~\cite{kato2018neural,liu2019soft,chen2019learning,ravi2020pytorch3d} and other methods for bypassing the need for direct 3D supervision~\cite{3DRCNN_CVPR18,kanazawa2018learning,kulkarni2020acsm,ye2021shelf}.
However, these approaches do not connect to complex scene semantics; they primarily focus on single objects or small, less diverse 3D annotated datasets.
More importantly, we empirically show that merely using learned 3D to propagate scene semantics to the new view is insufficient.
We tackle the problem of predicting scene semantics from novel viewpoints, which we refer to as \emph{novel view semantic understanding}, and propose a new end-to-end model, ViewSeg\xspace.
ViewSeg\xspace fuses advances and insights from semantic segmentation, 3D understanding, and novel view synthesis with a number of task-specific modifications.
As input, our model takes a few posed RGB images from a previously unseen scene and a target pose (but {\it not} image). As output, it recognizes and segments objects from the target view by producing a semantic segmentation map. As an additional product, it also produces depth in the target view. During training, ViewSeg\xspace depends only on posed 2D images and semantic segmentations, and in particular does not use ground truth 3D annotations beyond camera pose.
Our experiments validate ViewSeg\xspace's contributions on two challenging datasets, Hypersim~\cite{roberts2020hypersim} and Replica~\cite{straub2019replica}. We substantially outperform alternate framings of the problem that build on the state-of-the-art: image-based NVS~\cite{yu2021pixelnerf} followed by semantic segmentation~\cite{chen2018encoder}, which tests whether NVS is sufficient for the task; and lifting semantic segmentations~\cite{chen2018encoder} to 3D and differentiably rendering, like~\cite{wiles2020synsin}, which tests the value of using implicit functions to tackle the problem.
Our ablations further quantify the value of our problem-specific design.
Among others, they reveal that ViewSeg\xspace trained for novel view semantic segmentation obtains more accurate depth predictions compared to a variant which trains without a semantic loss and image-based NVS~\cite{yu2021pixelnerf}, indicating that semantic understanding from novel viewpoints positively impacts geometry understanding.
Overall, our results demonstrate ViewSeg\xspace's ability to jointly capture the semantics and geometry of unseen scenes when tested on new, complex scenes with diverse layouts, object types, and shapes.
\begin{comment}
How well can we understand a scene from only a few views, like the ones shown in Fig.~\ref{fig:teaser}?
Now from a new viewpoint, \eg the target camera in Fig.~\ref{fig:teaser}, what objects can you see?
Humans have the ability to perceive scenes and objects from few images.
And our perception of our environment is not limited to the views we observe.
On the contrary, we build a 3D model of the scene anchored by sparse 2D observations.
For example, imagine walking into the living room of your friend's new house.
You don't need to densely scan the room to understand it; a few glimpses are enough.
And it is this ability of ours that allows us to purposefully navigate through the scene and interact with its objects, even when it is dark or we have our eyes closed.
For machines, reconstructing a scene from a novel viewpoint given few input images is challenging.
Novel view synthesis (NVS) predicts the scene's appearance in RGB space from novel cameras by capturing the underlying scene geometry, illumination and object materials.
For example, NeRF~\cite{mildenhall2020nerf} model radiance and density with a continuous volumetric function and learn a model per scene from hundreds of views,
and SynSin~\cite{wiles2020synsin} uses a latent 3D point cloud and generalizes to novel scenes from only one view.
Both NeRF~\cite{mildenhall2020nerf} and SynSin~\cite{wiles2020synsin} demonstrate that they can capture depth merely by reconstructing appearance.
However, they completely ignore semantics, \ie what objects are present and where they are located.
But, semantics are equally important to geometry;
a home robot should be able to predict free space or the distance of obstacles, but it should also understand what objects exist in its environment and recognize a human from a pillow or the walls.
To this end, we tackle the problem of \emph{novel view semantic understanding} and propose to jointly learn semantics and geometry from few RGB views.
From a novel viewpoint and without any visual observations from that view, our model recognizes the scene and its objects, as shown in Fig.~\ref{fig:teaser}.
Trained for appearance and semantic reconstruction, our model additionally predicts depth, also in Fig.~\ref{fig:teaser}.
Understanding 3D scenes from 2D views is a longstanding goal of computer vision.
Traditional techniques, such as Structure from Motion and Binocular Stereo~\cite{Hartley04}, lift scenes to 3D from hundreds of narrow baseline views but ignore semantics.
Learning based methods~\cite{gupta2014learning,factored3dTulsiani18,Nie_2020_CVPR,Gkioxari2019}
jointly tackle geometry and semantics but rely on expensive 3D annotations which are hard to collect at scale for a large variety of object and scene types.
Several efforts~\cite{3DRCNN_CVPR18,kanazawa2018learning,kulkarni2020acsm,ye2021shelf} bypass the need for 3D supervision and show impressive 3D object reconstructions but experiment on images of isolated objects or rely on object-specific priors, precluding their applicability to real-world multi-object scenes.
Differentiable rendering~\cite{kato2018neural,liu2019soft,chen2019learning,ravi2020pytorch3d} has emerged as a powerful tool to learn 3D shape from many views via 2D re-projections of silhouettes and textures.
But these methods demonstrate its effectiveness on single-object benchmarks like ShapeNet~\cite{shapenet}.
In this work, we tackle novel view semantic understanding on real-world scenes with many objects.
We propose a model which takes as input few posed RGB views of a novel scene and recognizes the scene and its objects by segmenting them from a new viewpoint.
We call our approach ViewSeg\xspace.
ViewSeg\xspace does not use 3D ground truth but learns from 2D object annotations via 2D re-projections in semantic and appearance space using differentiable rendering.
To solve for novel view semantic understanding, our approach, ViewSeg\xspace, reasons about the scene from a novel target view, similar to novel view synthesis (NVS).
Unlike NVS, ViewSeg\xspace shifts the focus from RGB synthesis to semantic understanding.
To this end, we pair an implicit 3D representation~\cite{mildenhall2020nerf} with advances in 2D scene recognition.
In 2D, semantic segmentation is the task of segmenting an input image into objects, \eg chair and table, and stuff, \eg wall and floor.
ViewSeg\xspace also segments the scene from the target view but with a crucial difference: it does not have access to the target RGB image.
We experiment on two challenging datasets, Hypersim~\cite{roberts2020hypersim} and Replica~\cite{straub2019replica}.
We prove the value of predicting semantics from novel viewpoints end-to-end by significantly outperforming a two-stage solution: NVS followed by image-based semantic segmentation.
We demonstrate our model's ability to jointly capture semantics and geometry of unseen scenes and show compelling results on novel complex scenes with diverse layouts, object types and shapes.
\end{comment}
\section{Approach}
\label{sec:approach}
We tackle novel view semantic understanding end-to-end with a new model, ViewSeg\xspace.
ViewSeg\xspace takes as input RGB images from N \emph{source} views and segments objects and stuff from a novel \emph{target} viewpoint without access to the target image.
We pair semantic segmentation with an implicit 3D representation to learn semantics and geometry from hundreds of scenes and thousands of source-target pairs.
An overview of our approach is shown in Fig.~\ref{fig:overview}.
Our model consists of a 2D semantic segmentation module which embeds each source view to a higher dimensional semantic space.
An implicit 3D representation~\cite{mildenhall2020nerf} samples features from the output of the segmentation module and predicts radiance, density and a distribution over semantic categories at each spatial 3D location with an MLP.
The 3D predictions are projected to the target view to produce the segmentation from that view via volumetric rendering.
\subsection{Semantic Segmentation Module}
The role of the semantic segmentation module is to project each source RGB view to a learnt feature space, which will subsequently be used to make 3D semantic predictions.
Our 2D segmentation backbone takes as input an image $I$ and outputs a spatial map $M$ of the same spatial resolution, $b^\textrm{seg}: I^{H\times W \times3} \rightarrow M^{H \times W \times K}$, using a convolutional neural network (CNN). Here, $K$ is the dimension of the feature space ($K=256$).
We build on the state-of-the-art for single-view semantic segmentation and follow the encoder-decoder DeepLabv3+~\cite{chen2018encoder,chen2017rethinking} architecture which processes an image
by downsampling and then upsampling it to its original resolution with a sequence of convolutions.
We remove the final layer, which predicts class probabilities, and use the output from the penultimate layer.
We initialize our segmentation module by pre-training on ADE20k~\cite{zhou2019semantic}, a dataset of over 20k images with semantic annotations.
We empirically show the impact of the network architecture and pre-training in our experiments (Sec.~\ref{sec:experiments}).
\subsection{Semantic 3D Representation}
To recognize a scene from novel viewpoints, we learn a 3D representation which predicts the semantic class for each 3D location.
To achieve this we learn a function $f$ which maps semantic features from our segmentation module to distributions over semantic categories conditioned on the 3D coordinates of each spatial location.
Assume N source views $\{I_j\}_{j=1}^{N}$ and corresponding cameras $\{\pi_j\}_{j=1}^{N}$.
For each view we extract the semantic maps $\{M_j\}_{j=1}^{N}$ with our 2D segmentation module, or $M_j = b^\textrm{seg}(I_j)$.
We project every 3D point $\xB$ to the $j$-th view with the corresponding camera transformation, $\pi_j(\mathbf{x})$, and then sample the $K$-dimensional feature vector from $M_j$ from the projected 2D location.
This yields a semantic feature from the $j$-th image denoted as $\phi^\textrm{seg}_j(\mathbf{x}) = M_j(\pi_j(\mathbf{x}))$.
Following NeRF~\cite{mildenhall2020nerf} and PixelNeRF~\cite{yu2021pixelnerf}, $f$ takes as input a positional embedding of the 3D coordinates of $\mathbf{x}$, $\gamma(\mathbf{x})$, and the viewing direction $\mathbf{d}$.
We additionally feed the semantic embeddings $\{\phi^\textrm{seg}_j\}_{j=1}^{N}$.
As output, $f$ produces
\begin{equation}
(\cB, \, \sigma, \, \sB) = f\bigl( \gamma(\mathbf{x}), \, \mathbf{d}, \, \phi^\textrm{seg}_0(\mathbf{x}), \, ..., \, \phi^\textrm{seg}_{N-1}(\mathbf{x}) \bigr)
\end{equation}
where $\cB \in \mathbb{R}^3$ is the RGB color, $\sigma \in \mathbb{R}$ is the density, and $\sB \in \mathbb{R}^{|\mathcal{C}|}$ is the distribution over semantic classes $\mathcal{C}$.
We model $f$ as a fully-connected ResNet~\cite{he2016deep}, similar to PixelNeRF~\cite{yu2021pixelnerf}.
The positional embedding of the 3D location and the direction are concatenated to each semantic embedding $\phi^\textrm{seg}_j$, and each is passed through 3 residual blocks with 512 hidden units.
The outputs are subsequently aggregated using average pooling and used to predict the final outputs of $f$ via two branches: one predicts the semantics $\sB$, the other the color $\cB$ and density $\sigma$.
Each branch consists of two residual blocks, each with 512 hidden units.
Further details on the network architecture are provided in the Supplementary.
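For concreteness, a minimal PyTorch sketch of $f$ consistent with the description above is shown below; the layer names, positional-embedding sizes, and number of classes are illustrative guesses rather than the exact released architecture.
\begin{verbatim}
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # simple fully-connected residual block
    def __init__(self, dim):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return x + self.fc2(self.act(self.fc1(x)))

class ImplicitSemanticField(nn.Module):
    # f(gamma(x), d, {phi_j}) -> (color c, density sigma, class logits s)
    def __init__(self, feat_dim=256, pos_dim=63, dir_dim=27, hidden=512, num_classes=40):
        super().__init__()
        self.inp = nn.Linear(pos_dim + dir_dim + feat_dim, hidden)
        self.trunk = nn.Sequential(*[ResBlock(hidden) for _ in range(3)])
        self.sem_branch = nn.Sequential(ResBlock(hidden), ResBlock(hidden),
                                        nn.Linear(hidden, num_classes))
        self.rgb_branch = nn.Sequential(ResBlock(hidden), ResBlock(hidden),
                                        nn.Linear(hidden, 4))   # RGB + density
    def forward(self, pos_emb, dir_emb, view_feats):
        # pos_emb: (P, pos_dim), dir_emb: (P, dir_dim), view_feats: (N, P, feat_dim)
        N = view_feats.shape[0]
        x = torch.cat([pos_emb.expand(N, -1, -1),
                       dir_emb.expand(N, -1, -1), view_feats], dim=-1)
        h = self.trunk(self.inp(x))          # per-view features (N, P, hidden)
        h = h.mean(dim=0)                    # average-pool across source views
        s = self.sem_branch(h)               # class logits
        rgb_sigma = self.rgb_branch(h)
        c = torch.sigmoid(rgb_sigma[..., :3])
        sigma = torch.relu(rgb_sigma[..., 3])
        return c, sigma, s
\end{verbatim}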
\mypar{Predicting Semantics.}
Rendering the semantic predictions, $s$, from a given viewpoint gives the semantic segmentation of the scene from that view.
Following NeRF~\cite{mildenhall2020nerf}, we accumulate predictions on rays, $\mathbf{r}(t) = \mathbf{o} + t \cdot \mathbf{d}$, originating at the camera center $\mathbf{o}$ with direction $\mathbf{d}$,
\begin{equation}
\hat{S}(\mathbf{r}) = \int_{t_n}^{t_f} T(t) \sigma(t) \sB(t) dt
\label{eq:semantic}
\end{equation}
where $T(t) = \exp(- \int_{t_n}^{t} \sigma(s) ds)$ is the accumulated transmittance along the ray, and $t_n$ and $t_f$ are near and far sampling bounds, which are hyperparameters.
The values of the sampling bounds $(t_n, t_f)$ are crucial for good performance.
In the original NeRF~\cite{mildenhall2020nerf} method, $(t_n, t_f)$ are manually set to tightly bound the scene.
PixelNeRF~\cite{yu2021pixelnerf} uses manually selected parameters \emph{for each object category}, \ie different values for chairs, different values for cars, \etc.
In Semantic-NeRF~\cite{ZhiICCV2021}, the values are selected for the Replica rooms, which vary little in size.
In the datasets we experiment on in Sec.~\ref{sec:experiments}, scene scale varies drastically from human living spaces with regular depth extents (\eg living rooms) to industrial facilities (\eg warehouses), lofts or churches with large far fields.
With the goal of scene generalization in mind, we set $(t_n, t_f)$ globally, regardless of the true near/far bounds of each scene we encounter.
This more realistic setting makes the problem harder: our model needs to predict the right density values for a large range of depth fields, reasoning about occupancy within each scene but also about the depth extent of the scene as a whole.
Replacing the semantic predictions $\sB$ with the RGB predictions $\cB$ in Eq.~\ref{eq:semantic} produces the RGB view $\hat{C}(\mathbf{r})$ from the target viewpoint, as in~\cite{mildenhall2020nerf}.
While photometric reconstruction is not our goal, we use $\hat{C}$ during learning and show that it helps capture the scene's geometry more accurately.
\mypar{Predicting Depth.}
In addition to the semantic segmentation $\hat{S}$ and the RGB reconstruction $\hat{C}$ from a target viewpoint, we also predict the pixel-wise depth of the scene from that view, as in~\cite{deng2021depth} by computing
\begin{equation}
\hat{D}(\mathbf{r}) = \int_{t_n}^{t_f} T(t) \sigma(t) t dt.
\label{eq:depth}
\end{equation}
We use the depth output $\hat{D}$ only during evaluation.
By comparing single-view depth ($\hat{D}$) and semantic segmentation ($\hat{S}$) from many novel viewpoints, we measure our model's ability to capture geometry and semantics.
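In practice, both quantities are approximated with a standard NeRF-style quadrature over ray samples. The sketch below (simplified, with our own variable names; not our exact implementation) illustrates how $\hat{S}$, $\hat{C}$ and $\hat{D}$ are accumulated from per-sample predictions along a single ray.
\begin{verbatim}
import torch

def render_ray_outputs(sigma, sem_logits, rgb, t_vals):
    # sigma: (P,), sem_logits: (P, C), rgb: (P, 3), t_vals: (P,) sorted sample depths
    delta = torch.cat([t_vals[1:] - t_vals[:-1],
                       torch.full_like(t_vals[:1], 1e10)])        # last interval open
    alpha = 1.0 - torch.exp(-sigma * delta)                       # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:1]),
                                     1.0 - alpha[:-1]]), dim=0)   # transmittance T
    w = trans * alpha                                             # rendering weights
    sem = (w[:, None] * torch.softmax(sem_logits, dim=-1)).sum(0)   # \hat{S}(r)
    color = (w[:, None] * rgb).sum(0)                               # \hat{C}(r)
    depth = (w * t_vals).sum(0)                                     # \hat{D}(r)
    return sem, color, depth
\end{verbatim}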
\subsection{Learning Objective}
\label{sec:optimization}
Our primary objective is to segment the scene from the target view. We jointly train the segmentation module $b^\textrm{seg}$ and implicit 3D function $f$ to directly solve this task. We also find that auxiliary photometric and source-view losses are crucial for performance. Our objectives require RGB images and 2D semantics from various views in the scene as well as poses to perform re-projection.
Since our goal is to predict a semantic segmentation in the target view, our primary objective is a per-pixel cross-entropy loss between the true class labels $S$ and predicted class distribution $\hat{S}$,
\begin{equation}
L^S_{\textrm{target}} = - \sum_{\mathbf{r} \in \mathbf{R}} \sum_{j=1}^{|\mathcal{C}|} S^j(\mathbf{r}) \log \hat{S}^j (\mathbf{r})
\end{equation}
where $\mathcal{C}$ is the set of semantic classes and $\mathbf{R}$ is the set of rays in the target view. Here, $S^j(\rB)$ is the $\{0,1\}$ true label for the $j$-th class at the intersection of ray $\rB$ and the image.
In addition to this, we minimize auxiliary losses that improve performance on our primary task. The first is a photometric loss on RGB images, namely the squared L$_2$ distance between the prediction $\hat{C}$ and actual image $C$, or
\begin{equation}
L^P_{\textrm{target}} = \sum_{\mathbf{r} \in \mathbf{R}} \left|\left| \hat{C}(\mathbf{r}) - C(\mathbf{r}) \right|\right|^2_2
\end{equation}
where $C(\mathbf{r})$ is the true RGB color at the intersection of ray $\rB$ and the image.
Finally, in addition to standard losses on the target view~\cite{mildenhall2020nerf,yu2021pixelnerf,ZhiICCV2021}, we find it is important to apply losses on the source views.
Specifically, we create $L_{\textrm{source}}^S$ and $L_{\textrm{source}}^P$ that are the semantic and photometric losses, respectively, applied to rays from the source views. These losses help enforce consistency with the input views.
Our final objective is given by
\begin{equation}
L = L^P_{\textrm{target}} + L^P_{\textrm{source}} + \lambda \cdot (L^S_{\textrm{target}} + L^S_{\textrm{source}})
\label{eq:objective}
\end{equation}
where $\lambda$ balances the semantic losses against the photometric losses.
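For clarity, the full objective can be written compactly as in the sketch below; the names are illustrative (rendered outputs and labels are assumed to be gathered per ray) and this is not our exact training code.
\begin{verbatim}
# Sketch: Eq. (objective); we use lambda = 0.04 in our experiments.
import torch
import torch.nn.functional as F

def total_loss(rgb_pred_t, rgb_true_t, sem_pred_t, sem_true_t,
               rgb_pred_s, rgb_true_s, sem_pred_s, sem_true_s, lam=0.04):
    """*_t: target-view rays, *_s: source-view rays.
    sem_pred_*: (rays, classes) probabilities; sem_true_*: (rays,) labels."""
    l_p_t = ((rgb_pred_t - rgb_true_t) ** 2).sum(dim=-1).sum()
    l_p_s = ((rgb_pred_s - rgb_true_s) ** 2).sum(dim=-1).sum()
    # per-pixel cross-entropy on the rendered class distributions
    l_s_t = F.nll_loss(torch.log(sem_pred_t + 1e-10), sem_true_t, reduction='sum')
    l_s_s = F.nll_loss(torch.log(sem_pred_s + 1e-10), sem_true_s, reduction='sum')
    return l_p_t + l_p_s + lam * (l_s_t + l_s_s)
\end{verbatim}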
\section{Qualitative Results}
Figure~\ref{fig:hypersim_more} shows more qualitative results on Hypersim.
We draw a few interesting observations.
Overall our model is able to segment objects and stuff from novel viewpoints and capture the underlying 3D geometry as seen in the predicted depth maps.
In addition, our model is able to generalize to completely unseen parts of a scene.
For example, consider the 4$^\textrm{th}$ example in Figure~\ref{fig:hypersim_more} of a museum hall.
Here, the four input images show the left side of the hall while the novel view queries the right side, which is not captured by the inputs.
As seen from our RGB prediction, our model struggles to reconstruct that side of the scene in appearance space.
It does much better in semantic space, by correctly placing the wall and the ceiling and by extending the floor.
The predicted depth also shows that the model can reason about the geometry of the scene as the right wall starts close to the camera and extends backward consistent with the overall structure of the room.
Though both depth and semantic predictions are not perfect, they are evidence that our model has learnt to reason about 3D semantics.
We believe this is attributed to the scene priors our model has captured after being trained on hundreds of diverse scenes.
The 8$^\textrm{th}$ (last) example in Figure~\ref{fig:hypersim_more} leads to a similar conclusion.
Our model correctly predicts the \emph{pillows} in the target viewpoint but additionally predicts a sofa which is sensibly placed relative to the location and extent of the predicted pillows.
The sofa prediction, though not in the ground truth, is a reasonable one and is likely driven by the scene priors our model has captured during training; a line of pillows usually exists on a sofa.
Finally, the examples in Figure~\ref{fig:hypersim_more} and Figure 3 in the main paper also reveal our work's limitations.
Our model does not make pixel-perfect predictions and often misses parts of objects or misplaces them by a few pixels in the target view.
It is likely that more training data would lead to significantly better predictions.
Figure~\ref{fig:replica_more} shows more qualitative results on Replica.
Last but not least, we provide video animations of our predictions in the supplementary folder.
These complement the static visualizations in the PDF submission and better demonstrate our model's ability to make predictions from novel viewpoints on novel scenes.
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{figures/viewseg_hypersim_suppl.png}
\vspace{-2mm}
\caption{More predictions on Hypersim.
For each example, we show the 4 input RGB views (left), the ground truth RGB, semantic and depth maps for the novel target view (middle) and ViewSeg\xspace's predictions (right).
Our model does not have access to the true observations from the target view at test time. See our video animations.
Depth: 0.1m \includegraphics[width=0.07\linewidth]{figures/depth_colormap.png} 20m.}
\label{fig:hypersim_more}
\vspace{-3mm}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{figures/viewseg_replica_suppl.png}
\caption{More predictions on Replica.
For each example, we show the 4 input RGB views (left), the ground truth RGB, semantic and depth maps for the novel target view (middle) and ViewSeg\xspace's predictions (right).
Our model does not have access to the true observations from the target view at test time. See our video animations.
Depth: 0.1m \includegraphics[width=0.07\linewidth]{figures/depth_colormap.png} 20m.}
\label{fig:replica_more}
\vspace{-2mm}
\end{figure*}
\section{Network Architecture}
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{figures/architecture_supp.pdf}
\caption{Detailed architecture of our semantic 3D representation $f$. Each cuboid is a linear layer where the number around it represents the input dimension. The output dimension is always 128 except the last layer of RGB and semantic prediction. Before the mean aggregation, the network takes inputs from each source view but weights are shared.}
\label{fig:architecture_supp}
\vspace{-2mm}
\end{figure*}
The detailed architecture of $f$ is illustrated in Figure \ref{fig:architecture_supp}.
It takes as input a positional embedding of the 3D coordinates of $\mathbf{x}$, $\gamma(\mathbf{x})$, the viewing direction $\mathbf{d}$ and the semantic embeddings $\{\phi^\textrm{seg}_j\}_{j=1}^{N}$.
As output, $f$ produces
\begin{equation}
(\cB, \, \sigma, \, \sB) = f\bigl( \gamma(\mathbf{x}), \, \mathbf{d}, \, \phi^\textrm{seg}_0(\mathbf{x}), \, ..., \, \phi^\textrm{seg}_{N-1}(\mathbf{x}) \bigr)
\end{equation}
where $\cB \in \mathbb{R}^3$ is the RGB color, $\sigma \in \mathbb{R}$ is the occupancy, and $\sB \in \mathbb{R}^{|\mathcal{C}|}$ is the distribution over semantic classes $\mathcal{C}$.
Hypersim~\cite{roberts2020hypersim} provides annotations for 40 semantic classes.
We discard \emph{\{otherstructure, otherfurniture, otherprop\}} and thus $|\mathcal{C}| = 37$ for both Hypersim~\cite{roberts2020hypersim} and Replica~\cite{straub2019replica}.
We largely follow PixelNeRF~\cite{yu2021pixelnerf} for the design of our network.
We deviate from PixelNeRF and use 128 instead of 512 for the dimension of hidden layers (as in NeRF~\cite{mildenhall2020nerf}) for a more compact network.
The dimension of the linear layer which inputs $\phi^\textrm{seg}_{j}(\mathbf{x})$ is set to 256 to match the dimension of semantic features from DeepLabv3+~\cite{chen2018encoder}.
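To make the wiring concrete, the PyTorch-style sketch below mirrors the structure of Figure~\ref{fig:architecture_supp} at a high level.
The number of trunk layers, the positional-embedding dimension and the handling of the viewing direction are simplified here and should be read as illustrative rather than as our exact implementation.
\begin{verbatim}
# Sketch: implicit function f with shared per-view weights and mean aggregation.
import torch
import torch.nn as nn

class SemanticField(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=3, feat_dim=256, hidden=128, num_classes=37):
        super().__init__()
        self.encode_pos = nn.Linear(pos_dim, hidden)      # positional embedding gamma(x)
        self.encode_feat = nn.Linear(feat_dim, hidden)    # per-view semantic feature phi_seg(x)
        self.trunk = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)            # occupancy sigma
        self.rgb_head = nn.Sequential(nn.Linear(hidden + dir_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 3))
        self.sem_head = nn.Linear(hidden, num_classes)    # semantic logits s

    def forward(self, gamma_x, d, phi_seg):
        # gamma_x: (P, pos_dim), d: (P, dir_dim), phi_seg: (num_views, P, feat_dim)
        h = self.encode_pos(gamma_x) + self.encode_feat(phi_seg)   # shared weights per view
        h = self.trunk(h).mean(dim=0)                              # mean aggregation over views
        sigma = self.sigma_head(h).squeeze(-1)
        rgb = torch.sigmoid(self.rgb_head(torch.cat([h, d], dim=-1)))
        sem = self.sem_head(h)           # softmax is applied during volume rendering
        return rgb, sigma, sem
\end{verbatim}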
\section{Training Details}
\mypar{Pretraining the Semantic Segmentation Module.}
We first pretrain DeepLabv3+~\cite{chen2018encoder} on ADE20k~\cite{zhou2019semantic}, which has 20,210 images for training and 2,000 images for validation.
We implement DeepLabv3+ in PyTorch with Detectron2~\cite{wu2019detectron2}.
We train on the ADE20k training set for 160k iterations with a batch size of 16 across 8 Tesla V100 GPUs.
The model is initialized using ImageNet weights \cite{deng2009imagenet}.
We optimize with SGD and a learning rate of 1e-2.
During training, we crop each input image to 512$\times$512.
We remove the final layer, which predicts class probabilities, and use the output from the penultimate layer as our semantic encoder.
We do not freeze the model when training ViewSeg\xspace, allowing finetuning on Hypersim semantic categories.
\mypar{Training Details on Hypersim.}
We implement ViewSeg\xspace in PyTorch3D~\cite{ravi2020pytorch3d} and Detectron2~\cite{wu2019detectron2}.
We initialize the semantic segmentation module with ADE20k pretrained weights.
We train on the training set for 13 epochs with a batch size of 32 across 32 Tesla V100 GPUs.
The input and render resolution are set to 1024$\times$768.
We optimize with Adam and a learning rate of 5e-4.
We follow the PixelNeRF~\cite{yu2021pixelnerf} strategy for ray sampling: We sample 64 points per ray in the coarse pass and 128 points per ray in the fine pass;
we sample 1024 rays per image.
\mypar{Training Details on Replica.}
We finetune our model on the Replica training set~\cite{straub2019replica}.
Replica has 18 scenes.
In practice, we find that Replica does not provide room-level annotations, so in the multi-room apartment scenes our sampled source and target views can fall in different rooms.
Hence, we exclude these apartment scenes from our data.
We split the remaining 15 scenes into a train/val split of 12/3 scenes.
We use the same hyperparameters as Hypersim to finetune our model on Replica.
\section{Noisy Camera Experiment}
In our experiments, we assume known camera poses during both training and evaluation.
We perform an additional ablation assuming \emph{noisy} cameras during both training and testing.
During evaluation, the source-view cameras are noisy but not the target camera, as we wish to compare to the target view ground truth.
We add noise to the cameras by perturbing the rotation matrix with random angles sampled from $[-10^\circ, 10^\circ]$ about all three axes ($X$, $Y$ \& $Z$).
This results in significant camera noise and stress tests our method under these conditions.
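A minimal sketch of this perturbation is given below, assuming a $3\times 3$ world-to-camera rotation matrix $R$ and uniformly sampled Euler angles; the exact sampling details in our pipeline may differ.
\begin{verbatim}
# Sketch: perturb a source-view rotation with random angles in [-10, 10] degrees.
import numpy as np
from scipy.spatial.transform import Rotation

def perturb_rotation(R, max_deg=10.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    angles = rng.uniform(-max_deg, max_deg, size=3)               # degrees about X, Y, Z
    noise = Rotation.from_euler('xyz', angles, degrees=True).as_matrix()
    return noise @ R                                              # noisy rotation matrix
\end{verbatim}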
Table~\ref{tab:replica_noisy_camera} shows results on the noisy Replica test set.
The 1$^\textrm{st}$ row shows the performance of our ViewSeg\xspace pre-trained on Hypersim and without any finetuning.
The 2$^\textrm{nd}$ row shows the performance of our model finetuned on the noisy Replica training set.
We observe that performance for both model variants is worse compared to having perfect cameras, as is expected.
Performance improves after training with camera noise, suggesting that our ViewSeg\xspace is able to generalize better when trained with noise in cameras.
Figure~\ref{fig:replica_noisy_camera} shows qualitative results on the noisy Replica test set.
The predicted RGB targets are significantly worse compared to the perfect camera scenario, which suggests that RGB prediction with imperfect cameras is very challenging.
However, the semantic predictions are much better compared to their RGB counterparts.
This shows that our approach is able to capture scene priors and generalize to new scenes even under imperfect conditions.
\begin{figure*}
\centering
\includegraphics[width=0.99\linewidth]{figures/viewseg_replica_noisy.png}
\caption{Predictions on Replica with noisy cameras.
For each example, we show the 4 input RGB views (left), the ground truth RGB, semantic and depth maps for the novel target view (middle) and ViewSeg\xspace's predictions (right).
Our model does not have access to the true observations from the target view at test time. Depth: 0.1m \includegraphics[width=0.07\linewidth]{figures/depth_colormap.png} 20m.}
\label{fig:replica_noisy_camera}
\vspace{-2mm}
\end{figure*}
\begin{table*}
\centering
\scalebox{0.86}{
\begin{tabular}{L{.13\linewidth}|S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}|D{.05\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.08\linewidth}D{.09\linewidth}D{.09\linewidth}}
Model & mIoU & IoU$^\textrm{T}$ & IoU$^\textrm{S}$ & fwIoU & pACC & mACC & L$_1$ ($\downarrow$) & Rel ($\downarrow$) & Rel$^\textrm{T}$ ($\downarrow$) & Rel$^\textrm{S}$ ($\downarrow$) & $\delta < 1.25$ & $\delta < 1.25^2$ \\
\Xhline{2\arrayrulewidth}
ViewSeg\xspace noft & 8.57 & 35.2 & 61.3 & 40.3 & 18.3 & 57.9 & 1.10 & 0.253 & 0.240 & 0.268 & 0.579 & 0.840\\
ViewSeg\xspace & 14.1 & 40.1 & 62.8 & 42.8 & 24.6 & 60.6 & 0.887 & 0.208 & 0.232 & 0.180 & 0.631 & 0.888\\
\hline
\end{tabular}
}
\vspace{-2mm}
\caption{Performance for semantic segmentation (\textcolor{llblue}{blue}) and depth (\textcolor{llgreen}{green}) on the Replica~\cite{straub2019replica} test set with noisy cameras.}
\label{tab:replica_noisy_camera}
\vspace{-3mm}
\end{table*}
\section{Evaluation}
We report the complete set of depth metrics for all the tables in the main submission.
The comparison with PixelNeRF and CloudSeg is in Table~\ref{tab:supp_hypersim_comp}.
The ablation of different loss terms is in Table~\ref{tab:supp_hypersim_loss_ablations}.
The comparison of different backbones is in Table~\ref{tab:supp_hypersim_backbone_ablations}.
The input study is in Table~\ref{tab:supp_hypersim_views_ablations}.
Our experiments on the Replica dataset are in Table~\ref{tab:supp_replica_comp}.
\begin{table*}[h!]
\centering
\scalebox{0.47}{
\begin{tabular}{L{.1\linewidth}|S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}|D{.05\linewidth}D{.06\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.08\linewidth}D{.09\linewidth}D{.09\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}}
Model & mIoU & IoU$^\textrm{T}$ & IoU$^\textrm{S}$ & fwIoU & pACC & mACC & L$_1$ ($\downarrow$) & L$_1^\textrm{T}$ ($\downarrow$) & L$_1^\textrm{S}$ ($\downarrow$) & Rel ($\downarrow$) & Rel$^\textrm{T}$ ($\downarrow$) & Rel$^\textrm{S}$ ($\downarrow$) & $\delta < 1.25$ & $\delta^\textrm{T} < 1.25$ & $\delta^\textrm{S} < 1.25$ & $\delta < 1.25^2$ & $\delta^\textrm{T} < 1.25^2$ & $\delta^\textrm{S} < 1.25^2$ & $\delta < 1.25^3$ & $\delta^\textrm{T} < 1.25^3$ & $\delta^\textrm{S} < 1.25^3$\\
\Xhline{2\arrayrulewidth}
PixelNeRF++ & 1.58 & 14.9 & 47.9 & 17.9 & 3.63 & 35.9 & 2.80 & 2.69 & 2.90 & 0.746 & 0.856 & 0.653 & 0.300 & 0.276 & 0.319 & 0.531 & 0.500 & 0.557 & 0.689 & 0.663 & 0.712\\
CloudSeg\xspace & 0.46 & 29.6 & 4.42 & 1.80 & 3.31 & 3.25 & 3.81 & 3.77 & 3.83 & 0.856 & 0.997 & 0.737 & 0.145 & 0.105 & 0.178 & 0.277 & 0.211 & 0.332 & 0.389 & 0.314 & 0.451 \\
ViewSeg\xspace & \bf{17.1} & \bf{33.2} & \bf{58.9} & \bf{44.8} & \bf{23.9} & \bf{62.2} & \bf{2.29} & \bf{2.18} & \bf{2.38} & \bf{0.646} & \bf{0.721} & \bf{0.584} & \bf{0.409} & \bf{0.393} & \bf{0.421} & \bf{0.656} & \bf{0.633} & \bf{0.676} & \bf{0.794} & \bf{0.772} & \bf{0.812} \\
\hline
Oracle & 40.0 & 58.1 & 71.3 & 66.6 & 52.1 & 79.1 & 0.96 & 1.10 & 0.83 & 0.235 & 0.317 & 0.163 & 0.731 & 0.651 & 0.800 & 0.898 & 0.848 & 0.942 & 0.954 & 0.925 & 0.978 \\
\end{tabular}
}
\vspace{-3mm}
\caption{Extended version of Table 1 in the main paper.
}
\label{tab:supp_hypersim_comp}
\vspace{-3mm}
\end{table*}
\begin{table*}[h!]
\centering
\scalebox{0.45}{
\begin{tabular}{L{.18\linewidth}|S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}|D{.05\linewidth}D{.06\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.08\linewidth}D{.09\linewidth}D{.09\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}}
ViewSeg\xspace loss & mIoU & IoU$^\textrm{T}$ & IoU$^\textrm{S}$ & fwIoU & pACC & mACC & L$_1$ ($\downarrow$) & L$_1^\textrm{T}$ ($\downarrow$) & L$_1^\textrm{S}$ ($\downarrow$) & Rel ($\downarrow$) & Rel$^\textrm{T}$ ($\downarrow$) & Rel$^\textrm{S}$ ($\downarrow$) & $\delta < 1.25$ & $\delta^\textrm{T} < 1.25$ & $\delta^\textrm{S} < 1.25$ & $\delta < 1.25^2$ & $\delta^\textrm{T} < 1.25^2$ & $\delta^\textrm{S} < 1.25^2$ & $\delta < 1.25^3$ & $\delta^\textrm{T} < 1.25^3$ & $\delta^\textrm{S} < 1.25^3$\\
\Xhline{2\arrayrulewidth}
\emph{w/o} photometric loss & 16.9 & 30.8 & 58.7 & 44.8 & 22.7 & 62.5 & 2.49 & 2.35 & 2.61 & 0.677 & 0.750 & 0.615 & 0.359 & 0.347 & 0.363 & 0.611 & 0.594 & 0.625 & 0.764 & 0.744 & 0.780\\
\emph{w/o} semantic loss & - & - & - & - & - & - & 2.58 & 2.52 & 2.63 & 0.787 & 0.919 & 0.678 & 0.345 & 0.317 & 0.369 & 0.587 & 0.548 & 0.621 & 0.740 & 0.704 & 0.770\\
\emph{w/o} source view loss & 14.3 & 28.2 & 57.9 & 28.2 & 19.3 & 61.1 & 2.37 & 2.27 & 2.45 & 0.683 & 0.764 & 0.615 & 0.397 & 0.378 & 0.413 & 0.649 & 0.623 & 0.670 & 0.785 & 0.761 & 0.806\\
\emph{w/o} viewing dir & 16.0 & 33.1 & \bf{59.2} & \bf{44.9} & 21.5 & 62.1 & 2.53 & 2.38 & 2.65 & 0.708 & 0.783 & 0.646 & 0.354 & 0.351 & 0.356 & 0.602 & 0.593 & 0.610 & 0.759 & 0.744 & 0.772 \\
final & \bf{17.1} & \bf{33.2} & 58.9 & 44.8 & \bf{23.9} & \bf{62.2} & \bf{2.29} & \bf{2.18} & \bf{2.38} & \bf{0.646} & \bf{0.721} & \bf{0.584} & \bf{0.409} & \bf{0.393} & \bf{0.421} & \bf{0.656} & \bf{0.633} & \bf{0.676} & \bf{0.794} & \bf{0.772} & \bf{0.812} \\
\end{tabular}
}
\vspace{-3mm}
\caption{Extended version of Table 2 in the main paper.
}
\label{tab:supp_hypersim_loss_ablations}
\vspace{-2mm}
\end{table*}
\begin{table*}[h!]
\centering
\scalebox{0.44}{
\begin{tabular}{L{.22\linewidth}|S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}|D{.05\linewidth}D{.06\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.08\linewidth}D{.09\linewidth}D{.09\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}}
ViewSeg\xspace backbone & mIoU & IoU$^\textrm{T}$ & IoU$^\textrm{S}$ & fwIoU & pACC & mACC & L$_1$ ($\downarrow$) & L$_1^\textrm{T}$ ($\downarrow$) & L$_1^\textrm{S}$ ($\downarrow$) & Rel ($\downarrow$) & Rel$^\textrm{T}$ ($\downarrow$) & Rel$^\textrm{S}$ ($\downarrow$) & $\delta < 1.25$ & $\delta^\textrm{T} < 1.25$ & $\delta^\textrm{S} < 1.25$ & $\delta < 1.25^2$ & $\delta^\textrm{T} < 1.25^2$ & $\delta^\textrm{S} < 1.25^2$ & $\delta < 1.25^3$ & $\delta^\textrm{T} < 1.25^3$ & $\delta^\textrm{S} < 1.25^3$\\
\Xhline{2\arrayrulewidth}
DLv3+~\cite{chen2018encoder} + ADE20k~\cite{zhou2019semantic} & \bf{17.1} & \bf{33.2} & 58.9 & 44.8 & \bf{23.9} & 62.2 & 2.29 & 2.18 & 2.38 & 0.645 & 0.721 & 0.584 & 0.409 & 0.393 & 0.421 & 0.656 & 0.633 & 0.676 & 0.794 & 0.772 & 0.812 \\
DLv3+~\cite{chen2018encoder} + IN~\cite{deng2009imagenet} & 16.3 & \bf{33.2} & \bf{59.2} & \bf{45.2} & 22.0 & \bf{62.5} & \bf{2.28} & \bf{2.17} & \bf{2.36} & \bf{0.614} & \bf{0.682} & \bf{0.559} & \bf{0.415} & \bf{0.400} & \bf{0.427} & \bf{0.663} & \bf{0.640} & \bf{0.682} & \bf{0.799} & \bf{0.776} & \bf{0.818} \\
ResNet34~\cite{he2016deep} + IN~\cite{deng2009imagenet} & 7.45 & 21.7 & 55.9 & 37.1 & 11.2 & 56.1 & 2.67 & 2.51 & 2.81 & 0.712 & 0.815 & 0.626 & 0.320 & 0.304 & 0.333 & 0.562 & 0.541 & 0.580 & 0.720 & 0.702 & 0.736\\
\end{tabular}
}
\vspace{-2mm}
\caption{Extended version of Table 3 in the main paper.
}
\label{tab:supp_hypersim_backbone_ablations}
\vspace{-4mm}
\end{table*}
\begin{table*}
\centering
\scalebox{0.47}{
\begin{tabular}{L{.1\linewidth}|S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}|D{.05\linewidth}D{.06\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.08\linewidth}D{.09\linewidth}D{.09\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}}
ViewSeg\xspace & mIoU & IoU$^\textrm{T}$ & IoU$^\textrm{S}$ & fwIoU & pACC & mACC & L$_1$ ($\downarrow$) & L$_1^\textrm{T}$ ($\downarrow$) & L$_1^\textrm{S}$ ($\downarrow$) & Rel ($\downarrow$) & Rel$^\textrm{T}$ ($\downarrow$) & Rel$^\textrm{S}$ ($\downarrow$) & $\delta < 1.25$ & $\delta^\textrm{T} < 1.25$ & $\delta^\textrm{S} < 1.25$ & $\delta < 1.25^2$ & $\delta^\textrm{T} < 1.25^2$ & $\delta^\textrm{S} < 1.25^2$ & $\delta < 1.25^3$ & $\delta^\textrm{T} < 1.25^3$ & $\delta^\textrm{S} < 1.25^3$\\
\Xhline{2\arrayrulewidth}
\emph{w/} 4 views & \bf{17.1} & \bf{33.2} & \bf{58.9} & \bf{44.8} & \bf{23.9} & \bf{62.2} & \bf{2.29} & \bf{2.18} & \bf{2.38} & \bf{0.646} & \bf{0.721} & \bf{0.584} & \bf{0.408} & \bf{0.393} & \bf{0.421} & \bf{0.656} & \bf{0.633} & \bf{0.676} & \bf{0.794} & \bf{0.772} & \bf{0.812}\\
\emph{w/} 3 views & 15.5 & 31.3 & 58.7 & 43.9 & 20.8 & 61.5 & 2.39 & 2.25 & 2.49 & 0.652 & 0.730 & 0.587 & 0.387 & 0.376 & 0.395 & 0.634 & 0.617 & 0.648 & 0.777 & 0.759 & 0.793\\
\emph{w/} 2 views & 13.6 & 27.4 & 57.7 & 41.9 & 18.2 & 60.2 & 2.57 & 2.49 & 2.64 & 0.765 & 0.878 & 0.672 & 0.363 & 0.339 & 0.383 & 0.605 & 0.574 & 0.633 & 0.751 & 0.721 & 0.776\\
\emph{w/} 1 view & 11.6 & 24.9 & 56.5 & 39.7 & 15.8 & 57.9 & 2.62 & 2.52 & 2.70 & 0.734 & 0.828 & 0.657 & 0.332 & 0.322 & 0.339 & 0.562 & 0.541 & 0.580 & 0.710 & 0.686 & 0.730\\
\end{tabular}
}
\vspace{-2mm}
\caption{Extended version of Table 4 in the main paper.}
\label{tab:supp_hypersim_views_ablations}
\end{table*}
\begin{table*}
\centering
\scalebox{0.46}{
\begin{tabular}{L{.15\linewidth}|S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}S{.05\linewidth}|D{.05\linewidth}D{.06\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.07\linewidth}D{.08\linewidth}D{.09\linewidth}D{.09\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}D{.1\linewidth}D{.1\linewidth}D{.09\linewidth}}
ViewSeg\xspace & mIoU & IoU$^\textrm{T}$ & IoU$^\textrm{S}$ & fwIoU & pACC & mACC & L$_1$ ($\downarrow$) & L$_1^\textrm{T}$ ($\downarrow$) & L$_1^\textrm{S}$ ($\downarrow$) & Rel ($\downarrow$) & Rel$^\textrm{T}$ ($\downarrow$) & Rel$^\textrm{S}$ ($\downarrow$) & $\delta < 1.25$ & $\delta^\textrm{T} < 1.25$ & $\delta^\textrm{S} < 1.25$ & $\delta < 1.25^2$ & $\delta^\textrm{T} < 1.25^2$ & $\delta^\textrm{S} < 1.25^2$ & $\delta < 1.25^3$ & $\delta^\textrm{T} < 1.25^3$ & $\delta^\textrm{S} < 1.25^3$\\
\Xhline{2\arrayrulewidth}
ViewSeg\xspace noft & 13.2 & 44.8 & 56.0 & 51.4 & 27.1 & 66.8 & 0.982 & 0.851 & 1.138 & 0.222 & 0.194 & 0.254 & 0.623 & 0.687 & 0.546 & 0.880 & 0.905 & 0.850 & 0.968 & 0.974 & 0.961 \\
ViewSeg\xspace & 30.2 & 56.2 & 62.8 & 62.3 & 48.4 & 75.6 & 0.550 & 0.510 & 0.597 & 0.130 & 0.130 & 0.130 & 0.851 & 0.857 & 0.844 & 0.961 & 0.953 & 0.972 & 0.986 & 0.980 & 0.992\\
\hline
Oracle & 56.2 & 76.8 & 78.0 & 90.1 & 79.4 & 93.8 & 0.226 & 0.230 & 0.220 & 0.058 & 0.065 & 0.050 & 0.976 & 0.965 & 0.991 & 0.998 & 0.996 & 0.999 & 1.000 & 1.000 & 1.000\\
\end{tabular}
}
\vspace{-2mm}
\caption{Extended version of Table 5 in the main paper.
}
\label{tab:supp_replica_comp}
\vspace{-3mm}
\end{table*}
\section{Related Work}
\label{sec:related}
For the task of novel view semantic understanding,
we draw from 2D scene understanding, novel view synthesis and 3D learning to recognize scenes from novel viewpoints.
\mypar{Semantic Segmentation.}
Segmenting objects and stuff from images is extensively researched.
Initial efforts apply Bayesian classifiers on local features~\cite{konishi2000statistical} or perform grouping on low-level cues~\cite{shi2000normalized}.
Others~\cite{carreira2012semantic,dai2015convolutional} score bottom-up mask proposals~\cite{arbelaez2014multiscale,carreira2011cpmc}.
With the advent of deep learning,
FCNs~\cite{long2015fully} perform per-pixel segmentation with a CNN.
DeepLab~\cite{chen2017rethinking} use atrous convolutions and an encoder-decoder architecture~\cite{chen2018encoder} to handle scale and resolution.
Regarding multi-view semantic segmentation,
\cite{kundu2016feature} improve the temporal consistency of semantic segmentation in videos by linking frames with optical flow and learned feature similarity.
\cite{mccormac2017semanticfusion} map semantic segmentations from RGBD inputs on 3D reconstructions from SLAM.
\cite{he2017std2p} fuse predictions from video frames using super-pixels and optical flow.
\cite{luc2017predicting} learn scene dynamics to predict semantic segmentations of future
frames given several past frames.
\mypar{Novel View Synthesis.}
Novel view synthesis is a popular topic of research in computer vision and graphics.
\cite{flynn2019deepview,srinivasan2019pushing,srinivasan2017learning,tulsiani2018layer,xu2019deep,zhou2018stereo} show great results synthesizing views from two or more narrow baseline images.
Implicit voxel representations have been used to fit a scene from many scene views~\cite{lombardi2019neural,shin20193d,sitzmann2019deepvoxels}.
Recently, NeRF~\cite{mildenhall2020nerf} learn a continuous volumetric scene function which emits density and radiance at spatial locations and show impressive results when fitted on a single scene with hundreds of views.
We extend NeRF to emit a distribution over semantic categories at each 3D location.
Semantic-NeRF~\cite{ZhiICCV2021} also predicts semantic classes.
We differ from~\cite{ZhiICCV2021} as we generalize to novel scenes from sparse input views instead of in-place interpolation within a single scene from hundreds of views.
NeRF extensions~\cite{henzler2021unsupervised,reizenstein21co3d}, such as PixelNeRF~\cite{yu2021pixelnerf}, generalize to novel scenes from few input views with the help of learnt CNNs but show results on single-object benchmarks for RGB synthesis.
We differ from~\cite{yu2021pixelnerf} by carefully pairing a geometry-aware model with state-of-the-art scene recognition~\cite{chen2018encoder} and experiment on realistic multi-object scenes.
\mypar{3D Reconstruction from Images.}
Scene reconstruction from multiple views is traditionally tackled with classical binocular stereo~\cite{Hartley04, scharstein2002taxonomy} or with the help of shape priors~\cite{Bao2013,blanz1999,dame2013,hane2014}.
Modern techniques learn disparity from image pairs~\cite{kendall2017end}, estimate correspondences with contrastive learning~\cite{schmidt2017self}, perform multi-view stereopsis via differentiable ray projection~\cite{kar2017learning} or learn to reconstruct scenes while optimizing for cameras~\cite{qian2020associative3d,jin2021planar}.
Differentiable rendering~\cite{loper2014opendr,kato2018neural,liu2019soft,chen2019learning,li2018differentiable,nimier2019mitsuba,ravi2020pytorch3d} allows gradients to flow to 3D scenes via 2D re-projections.
\cite{kato2018neural,liu2019soft,chen2019learning,ravi2020pytorch3d} reconstruct single objects from a single view via rendering from 2 or more views during training.
We also use differentiable rendering to learn 3D via 2D re-projections in semantic space.
\mypar{Depth Estimation from Images.}
Recent methods train networks to predict depth from videos~\cite{zhou2017unsupervised,luo2020consistent,chen2019qanet} or 3D supervision~\cite{eigen2015predicting,chen2016single,Li18,yin2021learning,ranftl2020towards}.
We do not use depth supervision but predict depth from novel views via training for semantic and RGB reconstruction from sparse inputs.
\section{Experiments}
\label{sec:experiments}
We experiment on Hypersim~\cite{roberts2020hypersim} and Replica~\cite{straub2019replica}.
Both provide posed views of complex scenes with over 30 object types and under varying conditions of occlusion and lighting.
At test time, we evaluate on novel scenes not seen during training.
Due to its large size, we treat Hypersim as our main dataset where we run an extensive quantitative analysis.
We then show generalization to the smaller Replica.
\mypar{Metrics.}
We report novel view metrics for semantics and geometry.
For a novel view of a test scene, we project the semantic predictions (Eq.~\ref{eq:semantic}) and depth (Eq.~\ref{eq:depth}) and compare to the ground truth semantic and depth maps, respectively.
Ideally, we would also evaluate directly in 3D, which requires access to full 3D ground truth.
However, 3D ground truth is not publicly available for Hypersim and is generally hard to collect.
Thus, we treat novel view metrics as proxy metrics for 3D semantic segmentation and depth estimation.
For semantic comparisons, we report semantic segmentation metrics~\cite{caesar2018cvpr} implemented in Detectron2~\cite{wu2019detectron2}:
\textbf{mIoU} is the intersection-over-union (IoU) averaged across classes,
\textbf{IoU$^\textrm{T}$} and \textbf{IoU$^\textrm{S}$} report IoU after merging all thing (object) classes and all stuff classes (wall, floor, ceiling), respectively.
\textbf{fwIoU} is the per-class IoU weighted by the pixel-level frequency of each class,
\textbf{pACC} is the percentage of correctly labelled pixels and
\textbf{mACC} is the pixel accuracy averaged across classes.
For all, performance is in \% and higher is better.
For depth comparisons, we report depth metrics following~\cite{Eigen14}:
\textbf{L$_1$} is the per-pixel average L$_1$ distance between ground truth and predicted depth,
\textbf{Rel} is the L$_1$ distance normalized by the true depth value,
\textbf{Rel$^\textrm{T}$} and \textbf{Rel$^\textrm{S}$} is the Rel metric for all things and stuff, respectively.
$\bm{\delta < \tau}$ is the percentage of pixels with predicted depth within $[\frac{1}{\tau}, \tau] \times$ the true depth.
L$_1$ is in meters and $\delta < \tau$ is in \%.
For $\delta < \tau$ metrics, higher is better.
For all other, lower is better ($\downarrow$).
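For reference, the depth metrics can be computed as in the following sketch (an illustrative helper, evaluated over pixels with valid ground-truth depth):
\begin{verbatim}
# Sketch: per-image depth metrics (L1, Rel, and delta-threshold accuracies).
import numpy as np

def depth_metrics(pred, gt, taus=(1.25, 1.25**2, 1.25**3)):
    valid = gt > 0
    pred, gt = pred[valid], gt[valid]
    l1 = np.abs(pred - gt).mean()                      # meters
    rel = (np.abs(pred - gt) / gt).mean()
    ratio = np.maximum(pred / gt, gt / pred)           # pred in [1/tau, tau] x gt  <=>  ratio < tau
    deltas = [100.0 * (ratio < t).mean() for t in taus]
    return l1, rel, deltas
\end{verbatim}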
\subsection{Experiments on Hypersim}
Hypersim~\cite{roberts2020hypersim} is a dataset of 461 complex scenes.
Camera trajectories across scenes result in 77,400 images with camera poses, masks for 40 semantic classes~\cite{Silberman12}, along with true depth maps.
Hypersim contains on average 50 objects per image, making it a very challenging dataset.
\mypar{Dataset.}
For each scene, we create source-target pairs from the available views.
Each image is labelled as target and is paired with an image from a different viewpoint if: (1) the view frustums intersect by no less than 10\%; (2) the camera translation is greater than 0.5m; and (3) the camera rotation is at least $30^\circ$.
This ensures that source and target views are from different camera viewpoints and broadly depict the same parts of the scene but without large overlap.
We follow the original Hypersim split, which splits train/val/test to 365/46/50 disjoint scenes, respectively.
Overall, there are 120k/14k/14k pairs in train/val/test, respectively.
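A sketch of this pairing filter is shown below; \texttt{frustum\_overlap} is assumed to be precomputed (e.g.\ by intersecting the two view frustums), and the helper names are illustrative rather than taken from our code.
\begin{verbatim}
# Sketch: keep a source/target pair only if all three criteria hold.
import numpy as np

def valid_pair(R_src, t_src, R_tgt, t_tgt, frustum_overlap):
    translation = np.linalg.norm(t_src - t_tgt)                  # camera translation in meters
    cos_angle = (np.trace(R_src.T @ R_tgt) - 1.0) / 2.0          # relative rotation angle
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return frustum_overlap >= 0.10 and translation > 0.5 and angle >= 30.0
\end{verbatim}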
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{figures/viewseg_hypersim.png}
\vspace{-3mm}
\caption{Predictions on Hypersim.
For each example, we show the 4 input RGB views (left), the ground truth RGB, semantic and depth maps for the novel target view (middle) and ViewSeg\xspace's predictions (right).
RGB synthesis is not our goal, but we show the predicted RGB for completeness. Our model does not have access to the true observations from the target view at test time. Depth: 0.1m \includegraphics[width=0.07\linewidth]{figures/depth_colormap.png} 20m.}
\label{fig:hypersim_preds}
\vspace{-2mm}
\end{figure*}
\input{tables/hypersim_comparisons.tex}
\mypar{Training details.}
We implement ViewSeg\xspace in PyTorch with Detectron2~\cite{wu2019detectron2} and PyTorch3D~\cite{ravi2020pytorch3d}.
We train on the Hypersim training set for 13 epochs with a batch size of 32 across 32 Tesla V100 GPUs.
The input and render resolution are set to 1024$\times$768, maintaining the size of the original dataset.
We optimize with Adam~\cite{kingma2014adam} and a learning rate of 5e-4.
We follow the PixelNeRF~\cite{yu2021pixelnerf} strategy for ray sampling: We sample 64 points per ray in the coarse pass and 128 points per ray in the fine pass.
In addition to the target view, we randomly sample rays on each source view and additionally minimize the source view loss, as we describe in Sec.~\ref{sec:optimization}.
We set $t_n = 0.1$m, $t_f=20$m in Eq.~\ref{eq:semantic} \&~\ref{eq:depth} and $\lambda = 0.04$ in Eq.~\ref{eq:objective}.
More details in the Supplementary.
\mypar{Baselines.}
In addition to extensive ablations that reveal which components of our method are most important, we compare with multiple baselines and oracle methods to provide context for our method.
Our baselines aim to test alternate strategies, including inferring the true RGB image and then predicting the pixel classes as well as lifting a predicted semantic segmentation map to 3D and re-projecting.
To provide context, we report a {\bf Target View Oracle} that has access to the true target view's image.
The target RGB image fundamentally resolves many ambiguities in 3D about what is where, and is not available to our method.
Instead, our method is tasked with predicting segmentations and depth from new viewpoints \emph{without the target RGB images}.
Our oracle applies appropriate supervised models directly on the true target RGB.
For semantic segmentation, we use a model~\cite{chen2018encoder} that is identical to ours pre-trained on ADE20k~\cite{zhou2019semantic} and finetuned on all images of the Hypersim training set.
For depth, we use the model from~\cite{yin2021learning}, which predicts normalized depth.
We obtain metric depth by aligning with the optimal shift and scale following \cite{ranftl2020towards,yin2021learning}.
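Concretely, the alignment solves a least-squares problem for a global scale and shift, as sketched below (an illustrative helper, not the exact code of \cite{ranftl2020towards,yin2021learning}):
\begin{verbatim}
# Sketch: least-squares scale-and-shift alignment of normalized depth d to ground truth.
import numpy as np

def align_scale_shift(d, gt):
    valid = gt > 0
    A = np.stack([d[valid], np.ones(valid.sum())], axis=1)   # columns: [d, 1]
    s, t = np.linalg.lstsq(A, gt[valid], rcond=None)[0]      # argmin ||s*d + t - gt||^2
    return s * d + t                                         # metric depth estimate
\end{verbatim}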
Our first baseline, denoted {\bf PixelNeRF++}, tests the importance of the end-to-end nature of our approach by performing a two stage process: we use the novel view synthesis (NVS) method of PixelNeRF~\cite{yu2021pixelnerf} to infer the RGB of the target view and then apply an image-based model for semantic segmentation.
To ensure a fair comparison, PixelNeRF is trained on the Hypersim training set.
We use the segmentation model we trained for the oracle to predict semantic segmentations.
Depth is predicted with Eq.~\ref{eq:depth}.
Our second baseline, named {\bf CloudSeg\xspace}, tests the importance of an implicit representation by comparing with an explicit 3D point cloud representation.
Inspired by SynSin~\cite{wiles2020synsin}, we train a semantic segmentation backbone similar to our ViewSeg\xspace, along with a depth model, from~\cite{yin2021learning}, to lift each source view to a 3D point cloud with per point class probabilities.
A differentiable point cloud renderer~\cite{ravi2020pytorch3d} projects the point clouds from the source images to the target view to produce a semantic and a depth map.
CloudSeg\xspace is trained on the Hypersim training set and uses the same 2D supervision as our ViewSeg\xspace.
\mypar{Results.}
Table~\ref{tab:hypersim_comp} compares our ViewSeg\xspace, PixelNeRF++ and CloudSeg\xspace with 4 source views and the oracle on Hypersim val.
We observe that PixelNeRF++, which predicts the target RGB view and then applies an image-based model for semantic prediction, performs worse than ViewSeg\xspace.
This is explained by the low quality RGB predictions, shown in Fig.~\ref{fig:hypersim_comp}.
Predicting high fidelity RGB of novel complex scenes from only 4 input views is still difficult for NVS models, suggesting that a two-stage solution for semantic segmentation will not perform competitively.
Indeed, ViewSeg\xspace significantly outperforms PixelNeRF++ showing the importance of learning semantics end-to-end.
In addition to semantics, ViewSeg\xspace outperforms PixelNeRF++ for depth.
This suggests that learning semantics jointly has a positive impact on geometry as well.
Finally, CloudSeg\xspace has a hard time predicting semantics and geometry.
This is likely attributable to the wide-baseline source and target views in our task, which cause explicit 3D representations to produce holes and erroneous predictions in the rendered target output.
In the datasets explored by SynSin~\cite{wiles2020synsin}, the camera transform between source and target views is significantly narrower than in our task, where novel viewpoints correspond to wider camera transforms, as shown in Fig.~\ref{fig:hypersim_preds}.
Fig.~\ref{fig:hypersim_preds} shows ViewSeg\xspace's predictions on Hypersim val.
We show the 4 source views (left), the ground truth target RGB, semantic and depth map (middle) and ViewSeg\xspace's predictions from the target viewpoint (right).
Note that ViewSeg\xspace does not have access to ground truth target observations and only receives the 4 images along with camera coordinates for the source and the target viewpoints.
Examples in Fig.~\ref{fig:hypersim_preds} are of diverse scenes (restaurant, bedroom, kitchen, living room) with many objects (chair, table, counter, cabinet, window, blinds, lamp, picture, floor mat, \etc).
We observe that the predicted RGB is of poor quality, showing that NVS struggles in complex scenes with only a few input views.
RGB synthesis is not our goal.
We aim to predict the scene semantics and Fig.~\ref{fig:hypersim_preds} shows that our model achieves this.
ViewSeg\xspace detects stuff (floor, wall, ceiling) well and predicts object segments for the right object types, even for diverse target views.
Our depth predictions show that ViewSeg\xspace captures the scene's geometry, even though it was not trained with any 3D supervision.
Fig.~\ref{fig:hypersim_comp} compares ViewSeg\xspace to PixelNeRF++.
Semantic segmentation from predicted RGB results in bad predictions, as shown in the PixelNeRF++ column.
Fig.~\ref{fig:results_3d} shows examples of semantic 3D reconstructions.
\input{tables/hypersim_loss.tex}
\input{tables/hypersim_segmentation.tex}
\input{tables/hypersim_numberofviews.tex}
\mypar{Ablations and Input Study.}
Table~\ref{tab:hypersim_loss_ablations} ablates various terms in our objective.
For reference, the performance of our ViewSeg\xspace trained with 4 source views and with the final objective (Eq.~\ref{eq:objective}) is shown in the last row.
When we remove the photometric loss, L$^\textrm{P}$, semantic performance remains roughly the same but depth performance drops ($-20$cm in L$_1$), which proves that appearance helps capture scene geometry.
When we remove the semantic loss, L$^\textrm{S}$, and train solely with a photometric loss, we observe a drop in depth ($-29$cm in L$_1$).
This suggests that semantics helps geometry; we made a similar observation when comparing to PixelNeRF++ in Table~\ref{tab:hypersim_comp}.
When training without source view losses both semantic and depth performance drop, with semantic performance deteriorating the most ($-2.8$\% in mIoU).
This confirms our insight that enforcing consistency with the source views improves learning.
Finally, when we remove the viewing direction from the model's input, depth performance suffers the most ($-24$cm in L$_1$).
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/viewseg_pixelnerf.pdf}
\vspace{-6mm}
\caption{Comparison of ViewSeg\xspace and PixelNeRF++. For each example we show the 4 RGB inputs (1$^\textrm{st}$-2$^\textrm{nd}$ col.), the true RGB and semantic map from the target view (3$^\textrm{rd}$ col.), the RGB and semantic prediction by PixelNeRF++ (4$^\textrm{th}$ col.) and the RGB and semantic prediction by our ViewSeg\xspace (5$^\textrm{th}$ col.).}
\label{fig:hypersim_comp}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{figures/viewseg_3d.pdf}
\vspace{-5mm}
\caption{3D semantic reconstructions on Replica (top two) and Hypersim (bottom). We show the 4 RGB inputs (1$^\textrm{st}$-2$^\textrm{nd}$ col.), the true RGB and semantic map from the novel view (3$^\textrm{rd}$ col.), and two views of the 3D semantic reconstructions (4$^\textrm{th}$-5$^\textrm{th}$ col.).}
\label{fig:results_3d}
\vspace{-4mm}
\end{figure}
Table~\ref{tab:hypersim_backbone_ablations} compares different backbones for the 2D segmentation module.
We compare DeepLabv3+ (DLv3+)~\cite{chen2018encoder} pre-trained on ImageNet~\cite{deng2009imagenet} and ADE20k~\cite{zhou2019semantic} and ResNet34~\cite{he2016deep} pre-trained on ImageNet~\cite{deng2009imagenet}.
The latter is used in PixelNeRF~\cite{yu2021pixelnerf}.
DLv3+ significantly boosts performance for both semantics and depth while pre-training on ADE20k slightly adds to the final performance.
Table~\ref{tab:hypersim_views_ablations} compares ViewSeg\xspace with varying number of source views.
We observe that more views improve both semantic segmentation and depth.
More than 4 views could lead to further improvements but substantially increase memory and time requirements during training.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.99\linewidth]{figures/viewseg_replica.pdf}
\vspace{-2mm}
\caption{Results on Replica test. We show the 4 input views (left), the ground truth RGB, semantic and depth maps from the novel target viewpoint (middle) and ViewSeg\xspace's predictions (right). Depth colormap: 0.1m \includegraphics[width=0.07\linewidth]{figures/depth_colormap.png} 20m.}
\label{fig:results_replica}
\vspace{-2mm}
\end{figure*}
\input{tables/replica_comparison.tex}
\subsection{Generalization to Replica}
We experiment on the Replica dataset~\cite{straub2019replica} which contains real-world scenes such as living rooms and offices.
Scenes are complex with many object types and instances in various layouts.
We show generalization by applying ViewSeg\xspace pre-trained on Hypersim and then further fine-tune it on Replica to better fit to Replica's statistics.
\mypar{Dataset.}
We use AI-Habitat \cite{savva2019habitat} to extract multiple views per scene.
For each view we collect the RGB image, semantic labels and depth.
For each Replica scene, we simulate an agent circling around the center of the 3D scene and render the observations.
Note that this is unlike Hypersim~\cite{roberts2020hypersim}, where camera trajectories are extracted by the authors a priori.
We use the same camera intrinsics and resolution as Hypersim:
the horizontal field of view is 60$^\circ$ and the image resolution is 1024$\times$768.
Finally, we map the 88 semantic classes from Replica to NYUv3-13, following~\cite{ZhiICCV2021,dai2017scannet}.
Our dataset consists of 12/3 scenes for train/test, respectively, resulting in 360/90 source-target pairs.
Note that this is 330$\times$ smaller than Hypersim.
Yet, we show compelling results on Replica by pre-training ViewSeg\xspace on Hypersim.
\mypar{Results.}
Table~\ref{tab:replica_comp} reports the performance of our ViewSeg\xspace, trained on Hypersim, before fine-tuning (denoted as noft) and after fine-tuning, as well as a {\it Target View Oracle}.
The oracle fine-tunes the supervised semantic segmentation model on images from our Replica dataset and finds the optimal depth scale and shift for the test set.
We observe that ViewSeg\xspace's performance improves significantly when fine-tuning on Replica for both semantic segmentation and depth across all metrics.
It is not surprising that fine-tuning on Replica improves performance as the scenes across the two datasets vary in both object appearance and geometry.
We also observe that performance is significantly higher than on Hypersim (Table~\ref{tab:hypersim_comp}).
Again, this is not a surprise, as Hypersim contains far more diverse and challenging scenes.
However, the trends across the two datasets remain.
Fig.~\ref{fig:results_replica} shows predictions on Replica by ViewSeg\xspace.
We show the 4 RGB inputs (left), the ground truth RGB, semantic and depth map for the novel target viewpoint (middle) and ViewSeg\xspace's predictions (right).
Fig.~\ref{fig:results_3d} shows 3D semantic reconstructions on two Replica test scenes.
\section{\uppercase{Introduction}}
\label{sec:introduction}
\noindent Many segmentation challenges have been undertaken recently, showing the need for automated models in clinical settings \cite{maier2017isles,winzeck2018isles,bakas2018identifying}. Along with strong predictive power, these challenges stress the importance of fast inference, as lesions can quickly spread. For instance, ischemic stroke lesions cause increasing tissue death within hours of onset, requiring reperfusion therapies around this time. The stroke progresses through acute, sub-acute and chronic stages within days. Gliomas, the most common type of malignant brain tumour, grow at a rate that depends on their grade. As the tumour gets larger, symptoms often worsen, reinforcing the need to monitor lesion growth.
A simple, yet powerful, summary statistic is the lesion volume, or when the brain is represented as labelled voxels, the lesion label count. In a clinical study, \cite{alexander2010correlating} showed that lesion volume is a significant covariate for understanding ischemic stroke deficits after the initial onset. In application, lesion volume has generally been a dependable factor in the prognosis of ischemic stroke \cite{merino2007lesion,rivers2006acute} and multiple sclerosis \cite{zivadinov2012abnormal,bagnato2011lesions}. The objectivity of counts enables straightforward inference: a higher lesion label count means the lesion has grown.
In comparison to segmenting entire 3D medical images, directly estimating the number of lesion voxels in the brain from raw features should require fewer computational resources, since it is no longer necessary to produce detailed information about the lesion's appearance. Nonetheless, in a study by \cite{erskine2005resolution} comparing the effects of different magnetic resonance imaging scanners on volume estimation, lesion volume was estimated with a computer-assisted segmentation tool. Other methods for estimating lesion volume favored a geometric approach, wherein the lesion's surface area per slice is calculated and the estimate is derived by summing across slices \cite{park2013semi,filippi1995resolution}. In contrast, the output of our proposed statistical direct lesion counting model is a single non-negative integer that does not require sophisticated viewing software or significant memory usage.
CNNs have shown promising results in lesion segmentation, as in the works of \cite{kamnitsas2017efficient}, \cite{havaei2017brain} and \cite{ronneberger2015u}, due to their ability to produce useful features from
\begin{figure*}
\centering
\includegraphics[scale=0.8]{PRCountNet.png}
\caption{3D architecture employed for counting lesions. The input tensor is obtained by stacking patches from the patient's brain MRI over $4$ different modalities. After applying convolution and pooling operations, the final output is a real number.} \label{fig:PRCountNet}
\end{figure*}
\noindent large visual spatial contexts combined with efficient iterative patch-based training and dense inference. The output of the CNN is often interpreted as the parameter of a conditional distribution. For instance, in \cite{kamnitsas2017efficient,havaei2017brain}, the output at each voxel is the parameter of a conditional Bernoulli distribution. The Poisson distribution is well known for modelling counts over time and space, and in particular has been applied to modelling the count of multiple sclerosis lesions over time \cite{altman2005application,albert1994time}. For this reason, we propose that the lesion label count, or equivalently the lesion volume, within a predefined patch follows a Poisson distribution conditional on the patch features. The CNNs of \cite{kamnitsas2017efficient} and \cite{havaei2017brain} use hundreds of thousands of parameters for segmentation. Using CNNs, coupled with good distributional assumptions, should allow for smaller architectures and faster convergence on the counting task.
One prior related work by \cite{dubost2017gp} used a 3D CNN, similar to U-Net \cite{ronneberger2015u}, to predict the global lesion label count, and produced a segmentation at test time by removing a global pooling layer. A drawback of estimating global lesion label counts is the need for more patient brain images, since an entire brain serves as a single sample. In their study, training was performed on $1,289$ 3D PD-weighted MRI scans, whereas some challenges provide limited training instances \cite{maier2017isles}. Another challenge with global lesion counts is efficiently producing scalar outputs from large 3D inputs, which would require additional preprocessing and transformations. Estimating counts in patches can help in these situations.
This paper is organized as follows: Section 2 describes the methods, Section 3 presents the experiments and results, Section 4 outlines future work, and a discussion follows in Section 5.
\section{\uppercase{Methods}}
\subsection{Architecture}
\noindent The proposed network is shown in Figure \ref{fig:PRCountNet}. As input, it stacks $25 \times 25 \times 25$ patches from each MR sequence, runs 3 layers of convolution and max pooling with kernel sizes $3 \times 3 \times 3$ and $2 \times 2 \times 2$ respectively, followed by a final convolution of size $16 \times 16 \times 16$ that outputs one real number. In addition to the convolution and max pooling operations, a Leaky ReLU nonlinearity was used. It is important to note that the number of output activations at each layer is kept small to reduce the total number of parameters. The patch size is not only a property of the training features but is also intertwined with the task, since it delineates the region over which the lesion label count is estimated.
\paragraph{Training}: A block diagram of the methodology is shown in Figure \ref{fig:Block}. In accordance with the notation of Figure \ref{fig:Block}, the architecture associates one real number $N$ with each input tensor. Then, the model can be formulated as $c \,|\, X \sim \mathrm{Pois}(e^{N(X,\Theta)})$, where $c$ is the lesion label count over the patch, $X$ are the input features used in the architecture, and $\Theta$ are the parameters of the
\begin{figure*}
\centering
\includegraphics[scale=0.6]{BLOCK.png}
\caption{Block diagram representation of the CNN-based Poisson Regression model. The predicted count is obtained by flooring the estimated conditional Poisson parameter ($\lambda$).} \label{fig:Block}
\end{figure*}
\noindent architecture. In order to train the parameters from observed counts $c_i$ (assumed to be Poisson distributed with rate $\lambda_i$), mini-batch gradient descent with batch size $b$ is used to minimize the average negative log-likelihood, $-\frac{1}{b}\sum_{i=1}^{b}\log\left(\frac{\lambda_i^{c_i}e^{-\lambda_i}}{c_i!}\right)$, plus additional L1 and L2 regularization terms to prevent overfitting. The samples in each mini-batch were chosen to have non-zero lesion counts by requiring the central voxel to be a lesion voxel. Since training only samples non-zero counts, the model will not be efficient at predicting counts for completely randomly sampled patches, for which zero counts are more frequent. A possible workaround is a zero-inflated Poisson (ZIP) model \cite{lambert1992zero}, which is suggested for a future study.
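The training objective can be written compactly as in the sketch below. The sketch uses PyTorch for brevity, whereas the actual implementation uses TensorFlow (see the next subsection); the function and variable names are illustrative.
\begin{verbatim}
# Sketch: Poisson negative log-likelihood with L1/L2 regularization.
import torch

def poisson_nll(log_rate, counts):
    # -log p(c | lambda) with lambda = exp(log_rate); counts must be a float tensor
    return (torch.exp(log_rate) - counts * log_rate + torch.lgamma(counts + 1.0)).mean()

def training_loss(log_rate, counts, params, l1=1e-8, l2=1e-6):
    reg = sum(l1 * p.abs().sum() + l2 * (p ** 2).sum() for p in params)
    return poisson_nll(log_rate, counts) + reg

# At prediction time, floor(exp(log_rate)) gives the integer count estimate.
\end{verbatim}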
\subsection{Implementation Details}
\noindent The open-source software TensorFlow was used to implement the model \cite{abadi2016tensorflow}. Non-zero counts were sampled in mini-batches of size $10$. Weights were initialized from a Gaussian with mean $0$ and standard deviation $0.001$, while biases were initialized to $0$. Moreover, the Adam optimizer was used with an initial learning rate of $10^{-4}$, and training was stopped when the average cost over $1,000$ iterations increased. This always happened within $15,000$ iterations. In comparison, the segmentation CNN of \cite{kamnitsas2017efficient} has default training configurations set to $70,000$ iterations, demonstrating the quick learning ability of direct counting models. L1 and L2 regularization were used, with coefficients set to $10^{-8}$ and $10^{-6}$ respectively. In addition to regularization, dropout was employed on all hidden layers at a rate of $0.5$. At prediction time, the mean Poisson rate was floored to provide an integer estimate. Table~\ref{tab1} summarizes the implementation details.
\begin{table}[H]
\centering
\caption{Numerical summary of implementation details.}
\label{tab1}
\begin{tabular}{|c|c|}
\hline
\bfseries Implementation Detail & \bfseries Value\\
\hline
Batch size ($b$) & $10$\\
\hline
Kernel initialization mean(std.) & $0(0.001)$ \\
\hline
Learning rate& $10^{-4}$\\
\hline
L1, L2 coefficients & $10^{-8}, 10^{-6}$ \\
\hline
Dropout & $0.5$\\
\hline
\end{tabular}
\end{table}
\section{Experiments and Results}
\subsection{Dataset}
\noindent The architecture and model were trained and evaluated on the ISLES2015 (SISS) training data, which consists of $28$ patients with sub-acute ischemic stroke. All data come from the same clinical center, which provided $4$ MR sequences for each patient: FLAIR, DWI, T1 and T1-contrast. Images are of size $230 \times 230 \times 153/154$, are skull-stripped and have isotropic $1\,\mathrm{mm}^3$ voxel resolution. Sub-acute ischemic stroke lesions have large variation in size. For instance, in the dataset, the smallest lesion consists of $106$ voxels, and the largest consists of $233,547$ voxels. From the $28$ brains, $20$ were randomly selected to form the training set, and the remaining $8$ formed the validation set. This selection was carried out once, and the same split was used across all experiments. It should be acknowledged that the choice of the training and validation split will have an effect on performance due to the aforementioned variability in lesion count.
\subsection{Model Performance}
\noindent To evaluate the performance of the architecture, several metrics are calculated on $10,000$ patches sampled to have a non-zero lesion label count from the validation set. Sampling was done by first selecting the validation brain and then selecting a patch from the brain. Estimated counts that surpassed the possible count in the predefined patch size were adjusted to predict the maximum possible count. In the experiment applied to the ISLES2015 (SISS) data, the mean absolute error (MAE) rounded up to the nearest integer was computed to be $1,458$. In addition, over the same samples the average estimated count to true count ratio was computed to be $1.15$. Finally, the mean relative error (MRE) was $0.42$ for the ischemic stroke lesions. True patch lesion label counts vary from a few hundred to $15,000$, indicating a promising initial result. Figure \ref{fig:RPlot} plots the estimated and true counts for 200 samples.
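The reported quantities correspond to the following simple computations over the sampled validation patches (an illustrative sketch, not the evaluation script itself):
\begin{verbatim}
# Sketch: patch-level evaluation metrics over sampled validation patches.
import numpy as np

def count_metrics(pred, true):
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    mae = int(np.ceil(np.abs(pred - true).mean()))     # MAE, rounded up to the nearest integer
    ratio = (pred / true).mean()                       # average estimated-to-true count ratio
    mre = (np.abs(pred - true) / true).mean()          # mean relative error
    return mae, ratio, mre
\end{verbatim}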
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{PredVTrue.png}
\caption{Plot of true count and estimated count for 200 lesion patch samples. Coefficient of multiple correlation (Pearson's correlation coefficient between predicted and actual values) of $R = 0.81$.} \label{fig:RPlot}
\end{figure}
\subsection{Predicting count order}
\noindent The second experiment was to order pairs of patches by lesion label count, which can be applied in a clinical setting to compare lesion images over time and assess growth or decay. Given any two image patches containing lesions, the goal is to evaluate the model's ability to identify the image with the larger (equivalently smaller) lesion volume using the proposed estimation. In the experiment, $10,000$ pairs of non-zero count patches are sampled from the validation fold. A sample is counted as correct if the predicted counts preserve the same order as the true counts. Running the experiment on the ISLES2015 (SISS) data gives a correct order prediction for $86\%$ of samples, demonstrating good ordering capabilities. Figure \ref{pair} shows an accurately predicted sample.
\begin{figure}[H]
\centering
\includegraphics[scale=0.76]{Pairwise.png}
\caption{Example of predicting count order, where the red square outline represents the middle slice of the $25^3$ patch. The true counts from left to right are: $356$ and $5297$. The predicted counts from left to right are: $501$ and $5640$. In this case, the model accurately identifies the left image as having the smaller lesion volume. }
\label{pair}
\end{figure}
\section{\uppercase{Future Work}}
\subsection{Extension to Arbitrary Patches}
\noindent The analysis and modelling undertaken in the previous sections were done on patches that contain lesion voxels. That is, the patches were sampled so as to contain lesion voxels. Although it was shown that this has the potential for modelling lesion growth or decay by first estimating lesion volume, relaxing this restriction would allow the prediction of counts in arbitrary patches, which more frequently have zero counts. Due to the unbalanced nature of the data, one proposition for a future study is to combine a zero-inflated Poisson model, which is known to account for excess zero counts, with CNNs.
\subsection{Location Detection}
Being able to predict counts in arbitrary patches can form the basis for
lesion location detection. A possible algorithm could be to randomly sample patches from one brain, predict their counts, and record the central voxel
position for the patch with the maximum predicted count. The voxel position returned by the
algorithm should identify an area of significant lesion presence in the brain,
assuming the model is well-tuned. Though simple, the algorithm is theoretically
stable when applied on a single lesion, since applying true counts in place of predictions will always return
the location of a lesion provided a lesion is present.
Rather than recording just the maximum predicted count from the given sample,
another option would be to order the predicted counts for samples drawn from one brain. Counts above a pre-defined quantile, along with their central voxel positions, would then delineate the lesion.
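A minimal sketch of the sampling-based variant described above is given below; the helper names are hypothetical, and \texttt{predict\_count} stands for the trained patch-count model.
\begin{verbatim}
# Sketch: locate a lesion by the central voxel of the highest-count sampled patch.
import numpy as np

def detect_lesion_center(volume, predict_count, num_samples=500, patch=25, rng=None):
    """volume: (X, Y, Z, modalities) array; predict_count: patch -> estimated count."""
    rng = np.random.default_rng() if rng is None else rng
    half = patch // 2
    best_count, best_center = -1.0, None
    for _ in range(num_samples):
        c = [int(rng.integers(half, s - half)) for s in volume.shape[:3]]  # central voxel
        p = volume[c[0]-half:c[0]+half+1, c[1]-half:c[1]+half+1, c[2]-half:c[2]+half+1]
        n = predict_count(p)
        if n > best_count:
            best_count, best_center = n, tuple(c)
    return best_center, best_count
\end{verbatim}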
\subsection{Model Selection for Segmentation}
The size of a lesion is one of the factors that can allow some configurations of a segmentation algorithm to perform better than others \cite{kamnitsas2017efficient}. Smaller lesions, often found in sub-acute ischemic stroke, are usually harder to segment and produce lower Dice coefficients \cite{havaei2017brain} than larger lesions, due to a relatively higher number of false positives compared to true positives. For CNNs, configurations mainly pertain to the architecture employed. Given that some architectures may segment smaller lesions better, one option is to select the architecture employed for a brain based on an estimate of the global lesion label count in that brain.
\section{\uppercase{Discussion}}
Direct counting models might be useful in clinical settings; potential applications include, but are not limited to, monitoring and locating lesions. They can also aid segmentation by selecting configurations of segmentation algorithms based on an estimate of global lesion size obtained from raw data, and by reducing the required segmentation area through lesion detection. Further developments are needed to predict patch counts, including improving the accuracy of non-zero count predictions, accounting for highly imbalanced zero counts, developing sampling-based algorithms for lesion location detection, and providing aggregate patch measures to predict the global lesion count. This will increase the effectiveness and broaden the applications of direct counting models.
\bibliographystyle{apalike}
\section{Introduction}
Only two extragalactic sources have been confirmed to emit
gamma-rays
above 300\,GeV: Mkn~421 (\cite{pun92}, \cite{mac95}, \cite{pet96})
and Mkn~501 (\cite{qui96}, \cite{aha97}).
In addition, an as yet unconfirmed detection has been made of
1ES~2344+514 (\cite{cat97a}). All three have redshifts less than 0.05
and are, based on their values of log~($F_X$/$F_r$), classified
as X-ray--selected BL~Lacs (XBLs). Observations of more than 30
candidates of other source types over a range of redshifts have not yielded
any other extragalactic TeV emitters (e.g. \cite{ker95}).
From theoretical considerations Stecker et al. (1996) have proposed
that low redshift XBLs may be the only extragalactic gamma-ray sources
observable at TeV energies.
The CANGAROO data set, containing data collected over more than 5 years,
includes observations of four
low redshift active galactic nuclei (AGN) of
the BL~Lacertae class (PKS0521$-$365, EXO\,0423.4$-$0840,
PKS2005$-$489 and PKS2316$-$423). In this paper we present the results
of searches for TeV emission from these objects.
A comparison of on-source and off-source regions of sky is used
to search for an excess of gamma-ray--like
events from these sources over a typically two-week observing period.
The detection of a large flare from the BL~Lac Mkn~421 by the Whipple
Collaboration (\cite{gai96}) has encouraged us to
analyze the data
with particular emphasis on shorter timescale flare searches. The Whipple
result has shown that at TeV energies, BL~Lacs are capable of
extremely energetic flares on timescales of less than 1~day.
Preliminary analysis of the Mkn~421 gamma-ray spectrum indicates that
photon energies extend beyond 5\,TeV (\cite{mce97}). Similar strong,
short duration flares at TeV energies have also been reported from
Mkn~501, showing gamma-ray
emission extending to energies of at least 10\,TeV (\cite{aha97,qui97}).
The lack of high energy turnover in the
observed spectrum implies that the
interaction of the gamma-rays with the
cosmic infra-red background is at the lower end of expectations
(cf \cite{ste92}). The results from Mkn~421 and Mkn~501
indicate that the CANGAROO 3.8\,m telescope, with a relatively high
gamma-ray energy threshold, is capable of detecting such extragalactic
sources of TeV gamma-rays.
\section{The CANGAROO imaging atmospheric \v Cerenkov telescope}
The CANGAROO 3.8\,m imaging telescope is located near Woomera, South Australia
(longitude $137^{\circ}47'$E, latitude $31^{\circ}06'$S, 160\,m a.s.l).
The reflector is a single 3.8\,m diameter parabolic dish with $f$/1.0 optics.
The imaging camera consists of a square-packed array of 10mm $\times$ 10mm
Hamamatsu R2248 photomultiplier tubes. The camera originally contained
224 photomultiplier tubes, and was increased to 256 pixels
(16 $\times$ 16 square
array) in May 1995. The tube centers are separated by $0.18^{\circ}$, giving
a total field of view (side-to-side) of $\sim 3.0^{\circ}$. The photo-cathode of
each tube subtends $0.12^{\circ} \times 0.12^{\circ}$, giving a
photo-cathode coverage of about 40\%
of the field of view.
For a more detailed description of the 3.8\,m
telescope see \cite{har93}.
In the current configuration an event trigger is generated when a
sufficiently large number of tubes (3$\sim$5) exceed their
discriminator threshold.
Individual tube discriminator levels are believed to be around
4 photo-electrons. Under these triggering conditions the current gamma-ray
energy threshold of the 3.8\,m telescope is estimated to be $\sim$1.5\,TeV.
Prior
to mirror re-coating in November 1996 (a period which includes all data presented
in this paper), the energy threshold was somewhat higher. Using Monte Carlo
simulations we estimate that this energy threshold (as defined by the
peak of the differential energy spectrum) was 2.5\,TeV. When
calculating integral fluxes and flux limits we define our threshold as 2\,TeV.
This figure is obtained by re-binning the lower energy gamma-rays to
produce an integral spectrum defined by a single power law with a sharp
cutoff at the threshold energy.
Since starting observations in 1991 the CANGAROO 3.8\,m telescope has been
used to observe a number of galactic and extragalactic candidate TeV
gamma-ray sources (\cite{rob97}). From these observations we have
evidence for gamma-ray emission from three galactic sources --- PSR1706$-$44
(\cite{kif95}), the Crab nebula (\cite{tan94}) and
the Vela SNR (\cite{yos97}).
As an indication of the sensitivity of the 3.8\,m telescope
to extragalactic sources we can use Monte Carlo simulations to
estimate the response of the telescope to fluxes observed from
Mkn~421 and Mkn~501.
Assuming that the gamma-ray emission from Mkn~421 and Mkn~501 extends
up to 10\,TeV
the integral fluxes above 2\,TeV are as follows.
For Mkn~421:\\
F($>$2{\rm TeV}) $\sim 2.2 \times 10^{-12}$ photons ${\rm cm}^{-2}{\rm s}^{-1}$\\
(adopting the average 1996 Whipple flux with an assumed integral spectral index
of $-$1.56, \cite{mce97}.)
For Mkn~501:\\
F($>$2{\rm TeV}) $\sim 6.5 \times 10^{-12}$ photons ${\rm cm}^{-2}{\rm s}^{-1}$\\
(adopting the March 1997 HEGRA flux assuming an integral photon spectral
index of $-$1.49, Aharonian et al. 1997.)
If any of the sources examined in this paper were
capable of providing a steady
flux at the level of Mkn~421 or Mkn~501 it would
be detectable by the CANGAROO 3.8\,m telescope, albeit at low significance.
Observations by the Whipple group
of Mkn~421 have shown that it is capable of extremely
energetic flares on timescales of hours to days. A flare similar to
that seen on 7 May 1996 by the Whipple telescope (\cite{mce97})
with a flux of
F($>$250GeV) $\sim 5.7 \times 10^{-10}$ photons ${\rm cm}^{-2}{\rm s}^{-1}$
(F($>$2TeV) $\sim 3.8 \times 10^{-11}$ photons ${\rm cm}^{-2}{\rm s}^{-1}$
for a $-$1.56 spectral index and maximum photon energy of 10\,TeV) lasting
two hours would be easily detectable by CANGAROO at a significance of
around $7 \sigma$.
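As an aside, the conversions between integral fluxes at different threshold energies used above follow from the assumed power law. The sketch below implements such a rescaling for an integral spectrum of index $\Gamma$ truncated at a maximum photon energy; the numbers shown are placeholders only, and the precise values quoted in the text additionally depend on the spectral and instrumental assumptions described above.
\begin{verbatim}
def rescale_integral_flux(F_ref, E_ref, E_new, gamma, E_max=None):
    """Scale an integral flux F(>E_ref) to F(>E_new), assuming
    F(>E) ~ E**(-gamma), optionally truncated at E_max."""
    if E_max is None:
        return F_ref * (E_new / E_ref) ** (-gamma)
    num = E_new ** (-gamma) - E_max ** (-gamma)
    den = E_ref ** (-gamma) - E_max ** (-gamma)
    return F_ref * num / den

# Illustrative only: a >250 GeV flux rescaled to a >2 TeV threshold
# for an integral index of 1.56 and a 10 TeV maximum photon energy.
F_2TeV = rescale_integral_flux(5.7e-10, 0.25, 2.0, 1.56, E_max=10.0)
\end{verbatim}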
\section{Data sample}
When selecting AGN as observation targets for the 3.8\,m
telescope we have used a number of criteria including the
proximity of the source and measurements from X-ray
and gamma-ray satellite experiments.
As mentioned earlier,
\cite{ste96} have suggested that nearby XBLs
are the most promising sources of detectable TeV gamma-ray emission.
Multi-frequency studies of blazar emission (\cite{sam96}) suggest
that XBLs may be more compact and have higher electron energies
and stronger magnetic fields than
their radio selected counterparts.
The current status of
ground based TeV observations adds support to this belief.
We present here the results of observations on two XBLs and
two radio selected BL~Lacs (RBLs).
While RBLs are perhaps less promising as TeV sources, well
placed upper limits in the TeV range could help to confirm fundamental
differences between XBLs and RBLs. The four sources reported in this
paper are described in the following sub-sections.
\subsection{PKS0521$-$365}
PKS0521$-$365
(RA$_{J2000}$ = $05^{h}23^{m}2.0$,
Dec$_{J2000}$ =$-36^{\circ}$23'2'', z = 0.055)
is a radio-selected BL~Lac. It was first detected as a strong radio
source more than 30 years ago (\cite{bol64}).
The spectral energy distribution of PKS0521$-$365 shows a double
peaked structure that is typical of blazars (\cite{pia96}). The first
peak in the energy distribution occurs in the far IR ($10^{13}-10^{14}$Hz)
and is assumed to be from synchrotron emission from the electrons in the jet.
The second component, possibly from
IC emission from the same electron population,
peaks at around 100MeV.
PKS0521$-$365 was viewed by the EGRET experiment on the CGRO during
1992 from May 14 to June 4 (\cite{lin95}).
A point source, consistent with the position
of PKS0521$-$365 was detected at a statistical significance of 4.3 $\sigma$,
with an integral source flux above 100MeV of $(1.8 \pm 0.5) \times 10^{-7}$
photons ${\rm cm}^{-2}{\rm s}^{-1}$. The photon spectrum from the EGRET
observation can be fitted with a single power law \\
$dN/dE = (1.85 \pm 1.14) \times 10^{-8} (E/1 {\rm GeV})^{-2.16 \pm 0.36}$
photons ${\rm cm}^{-2}{\rm s}^{-1}{\rm GeV}^{-1}$\\
{\noindent}The hardness of the EGRET photon spectrum and the proximity of the
source make PKS 0521-365 a candidate source for detectable levels
of TeV gamma-ray emission.
The CANGAROO data set for this object consists of 52 on/off pairs of
observations with a total of 89 hours of on-source and 84 hours of
off-source data.
\subsection{EXO\,0423.4$-$0840}
EXO\,0423.4$-$0840
(RA$_{J2000}$ = $04^{h}25^{m}50.7$,
Dec$_{J2000}$ =$-08^{\circ}$33'43'', z = 0.039)
was reported as a serendipitous discovery as part of the high galactic
latitude survey in the 0.05--2.0\,keV range by EXOSAT (\cite{gio91}).
Associating the source with the galaxy, MCG --01--12--005
(incorrectly given in \cite{gio91} as MCG +01--12--005)
and noting the high X-ray luminosity ($>10^{43}$ ergs ${\rm s}^{-1}$)
of the source, Giommi et al. (1991) proposed the source as a candidate
BL~Lac object.
This would make the source
the third closest such object known (after Mkn~421 and Mkn~501),
and the closest in the southern hemisphere.
We note, however, that \cite{kir90} determine a
redshift of 0.0392 for the source they designate IRAS~04235-0840B,
which they identify with the HEAO source 1H~0422-086 and classify as
a type 2 Seyfert. Clearly, higher resolution X-ray studies are
required to firmly establish the identity of this source.
We have observed EXO\,0423.4$-$0840 during
October 1996 obtaining a total raw
data set comprising 20 hours of on-source and 17 hours of off-source data.
\subsection{PKS2005$-$489}
PKS2005$-$489
(RA$_{J2000}$ = $20^{h}09^{m}25.4$,
Dec$_{J2000}$ = $-48^{\circ}$49'54'', z=0.071)
is an XBL. X-ray measurements of PKS2005$-$489 by EXOSAT show extremely
large flux variations on timescales of hours (\cite{gio90}).
Initially reported as a marginal EGRET detection (\cite{mon95}),
a more accurate background estimation decreased the
significance below the level required for inclusion in the Second
EGRET Catalog (\cite{tom95}).
However the fluxes above 1\,GeV are more significant than those
above 100\,MeV suggesting that the source,
in a part of the
sky that has received relatively poor EGRET exposure,
may indeed be detectable at higher energies (\cite{lin97}).
We have observed PKS2005$-$489 during August of 1993 and during
August/September
1994 obtaining 41 hours of on-source and 38 hours of off-source data.
\subsection{PKS2316$-$423}
PKS2316$-$423
(RA$_{J2000}$ = $23^{h}19^{m}05.8$,
Dec$_{J2000}$ = $-42^{\circ}$06'48'', z= 0.055)
is a radio selected BL~Lac object. Assuming a magnetic field
strength of $B \leq 10^{-3}$G, the emission from radio through to X-ray
wavelengths is consistent with synchrotron radiation from electrons with
$E > 10^{13}$eV (\cite{cra94}). PKS2316$-$423 is not detected
by the EGRET telescope on the CGRO.
The CANGAROO 3.8\,m telescope observed PKS2316$-$423 during July 1996 for
a total of 26 hours of on-source data and 25 hours of off-source data.
\section{Image analysis and Monte Carlo simulations}
Prior to image analysis a data integrity check is performed to
test for the presence of cloud or electronics problems. For a subset
of observations, tracking calibration is tested by monitoring changes
in single-fold rates as local stars rotate through the field of view.
Using this method the pointing direction can be inferred to an
accuracy of around $0.05^{\circ}$.
For each event trigger the tubes associated with the image are
selected using the following criteria:
(i) The TDC of the tube must indicate that the tube has exceeded
its discrimination threshold within 50\,ns around the trigger time of the
event, and
(ii) The ADC signal in the tube must be at least 1 standard
deviation above the RMS
of background noise for that tube.
An image is considered suitable for parameterization if it contains more
than 4 selected tubes, and if the total signal for all tubes in the image
exceeds 200 ADC counts (around 20 p.e.). About 25\% of raw images
are rejected by these two selection conditions. Surviving images are
parameterized after \cite{hil85}, with the gamma-ray domains for
our observations being:
\\
\\
$0.5^{\circ} <$ Distance $< 1.1^{\circ}$\\
$0.01^{\circ} <$ Width $< 0.08^{\circ}$\\
$0.1^{\circ}<$ Length $< 0.4^{\circ}$\\
$0.4 <$ Concentration $< 0.9$\\
alpha $< 10^{\circ}$\\
The cumulative percentages of events passing each selection condition
are given in table~\ref{cuts}. The
numbers shown are based on image cuts applied to
Monte Carlo gamma-rays and protons. The efficiency of the cuts for the
Monte Carlo protons is consistent with that seen for real data.
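As an illustration only, applying these domain cuts to a set of parameterized images amounts to a series of simple interval tests; the column names below are assumed for the sketch, and the thresholds merely transcribe the domains listed above.
\begin{verbatim}
# Gamma-ray domain cuts transcribed from the text
# (all in degrees, except the dimensionless concentration).
CUTS = {
    "distance":      (0.5, 1.1),
    "width":         (0.01, 0.08),
    "length":        (0.1, 0.4),
    "concentration": (0.4, 0.9),
}
ALPHA_MAX = 10.0  # degrees

def is_gamma_like(image):
    """image: dict of parameters for one cleaned, parameterized image."""
    in_domain = all(lo < image[key] < hi for key, (lo, hi) in CUTS.items())
    return in_domain and image["alpha"] < ALPHA_MAX
\end{verbatim}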
\begin{table}
\begin{center}
\begin{tabular}{ccc}
\hline
& & \\
Cut & Gammas & Protons \\
& & \\ \hline
Raw data & 100\% & 100\% \\
ADC sum$>$200 & 90.0 & 77.5 \\
Distance & 65.4 & 44.6 \\
Width & 54.8 & 20.4 \\
Length & 54.0 & 16.5 \\
Concentration & 40.3 & 10.9 \\
alpha & 38.8 & 1.3 \\
\hline
\end{tabular}
\end{center}
\caption{Cumulative percentages of events passing gamma-ray
selection cuts.\label{cuts}}
\end{table}
To estimate fluxes and upper limits for our data we use an exposure
calculation based on a detailed Monte Carlo simulation of the response
of our telescope to gamma-ray initiated EAS. Simulations have been
based on the MOCCA simulation package (\cite{hil82}) which models all relevant
particle production processes and includes atmospheric absorption effects
for the \v Cerenkov photons that are produced. The energies of simulated
gamma-ray primaries were selected from a power law with integral exponent
$-$1.4, with a minimum primary energy of 500\,GeV. Core distances were
selected from an appropriate distribution out to a limiting core
distance of 250\,m. Simulated gamma-ray images were then subjected to the
same selection criteria as the real data. Fluxes and upper limits are
calculated by comparing measured gamma-ray excess rates to those predicted
by the simulations. If we assume that our telescope model is
correct, this method of flux estimation has only two free parameters ---
the source spectral index and source cutoff energy.
For total flux calculations the estimate of cutoff energy is not
critical --- the generally steep nature
of gamma-ray source spectral indices ensures that the bulk of the
flux is around the threshold energy.
\section{Results}
\begin{figure}
\picplace{65mm}
\special{psfile=flux.eps hoffset=-23 hscale=55 vscale=55}
\caption{Scatter plot of night by
night $2\sigma$ flux limits for each of the
sources (indicated by crosses). Also shown are the $2\sigma$ flux limits
for the total data set for each source (stars). The two dashed lines
show the $>$2TeV fluxes from Mkn~421 (left) and Mkn~501 (right) (see text
for details).}
\label{flux}
\end{figure}
The total data set for each source has been tested for the presence
of gamma-ray signals.
The significance of gamma-ray excesses has been calculated using
a method based on that of \cite{li83}:
\begin{displaymath}
S = \sqrt{2} \left\{ N_{on}
{\rm ln} \left[\frac{1+\beta}{\beta}
\left( \frac{N_{on}}{N_{on}+N_{off}} \right) \right] \right.
\end{displaymath}
\begin{equation}
\ \ \ \ \ \ \ \ \ \ \ \left. + N_{off} {\rm ln}
\left[ (1+\beta)
\left( \frac{N_{off}}{N_{on}+N_{off}} \right) \right] \right\}^{1/2}
\end{equation}
\noindent where S is the statistical significance and
$\beta$ is the ratio of events in the on-source observation to
those in the off-source observation in the range
$30^{\circ} < {\rm alpha} < 90^{\circ}$ (where alpha is the image parameter
describing image orientation). The values of $N_{on}$ and $N_{off}$
are calculated from the gamma-ray selected (alpha $< 10^{\circ}$)
data for the on and off-source observations respectively.
This method tends to slightly
overestimate the significances because it does not
account for the statistical uncertainty
in calculating the value of $\beta$. Based on Monte Carlo simulations
we estimate that this effect is small --- and less than
the typical systematic
differences caused by the variations in parameter distributions between
on and off-source observations.
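For reference, the significance formula above can be transcribed directly; in this sketch $\beta$ is assumed to have already been computed from the $30^{\circ} < {\rm alpha} < 90^{\circ}$ background region as described in the text.
\begin{verbatim}
import numpy as np

def excess_significance(n_on, n_off, beta):
    """Significance of an on-source excess, following the formula
    based on Li & Ma (1983) given in the text."""
    term_on = n_on * np.log((1.0 + beta) / beta
                            * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + beta)
                              * n_off / (n_on + n_off))
    return np.sqrt(2.0) * np.sqrt(term_on + term_off)
\end{verbatim}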
In the total data set for each source no significant excess is seen for
those events in the gamma-ray domain.
The calculated excesses are
$-0.27 \sigma$ (PKS0521$-$365),
$-0.99 \sigma$ (EXO\,0423.4$-$0840),
$-0.91 \sigma$ (PKS2005$-$489) and
$+0.22 \sigma$ (PKS2316$-$423).
Upper limits to steady emission have
been calculated after \cite{pro84} and are shown in the scatter plot
in Fig.~\ref{flux}.
\begin{table}
\begin{center}
\begin{tabular}{cccccccc}
\hline
\multicolumn{2}{c}{PKS0521} & \multicolumn{2}{c}{PKS2005} &
\multicolumn{2}{c}{PKS2316} & \multicolumn{2}{c}{EXO0423} \\
MJD & F & MJD & F & MJD & F & MJD & F \\ \hline
9655.7 & 2.9 & 9220.5 & 4.5 & 10304.8 & 4.8 & 10367.7 & 2.1 \\
9663.7 & 6.5 & 9221.6 & 5.3 & 10305.7 & 2.7 & 10368.7 & 1.3 \\
9684.6 & 6.7 & 9222.6 & 7.1 & 10306.7 & 3.1 & 10369.7 & 3.1 \\
9685.7 & 3.9 & 9576.7 & 4.9 & 10307.8 & 2.4 & 10370.7 & 3.4 \\
9686.7 & 3.1 & 9577.7 & 4.8 & 10308.7 & 1.8 & & \\
9687.7 & 4.8 & 9597.5 & 7.0 & 10311.5 & 3.6 & & \\
9688.7 & 3.7 & 9599.5 & 2.6 & & & & \\
9690.7 & 4.6 & 9600.5 & 3.5 & & & & \\
9691.7 & 7.1 & 9601.5 & 5.9 & & & & \\
9713.6 & 4.9 & 9602.5 & 6.7 & & & & \\
9714.7 & 10.0 & 9604.6 & 8.9 & & & & \\
9715.6 & 7.2 & 9605.6 & 4.1 & & & & \\
9717.6 & 4.8 & & & & & & \\
9718.6 & 8.2 & & & & & & \\
9719.6 & 3.0 & & & & & & \\
9720.5 & 4.2 & & & & & & \\
9722.6 & 6.6 & & & & & & \\
\hline
\end{tabular}
\end{center}
\caption{The 2$\sigma$ flux upper limits, F($>$2TeV)
($\times 10^{-12}$ photons cm$^{-2}$ s$^{-1}$) and approximate
MJD (-40000) for each on-source observation.\label{obs}}
\end{table}
We have also searched our data set for gamma-ray emission on a night by
night basis. In general our observations of a source consist of a long
(several hours) on-source run, with a similar length off-source
run, offset in RA to provide the same coverage of azimuth and zenith.
The flare search has been performed by calculating the on-source excess
for each pair of on/off observations each night.
In cases where there
is no matching off-source run, an equivalent off-source run from another
nearby night is used.
Figure~\ref{sig} shows the distribution of on-source significances
for all sources. There is no evidence for gamma-ray flares on
the timescale of $\sim$ 1 night for any of the sources.
The most significant nightly excess (from PKS0521-365) has a nominal
significance of $3.7 \sigma$ but after allowing for the number of
searches performed this significance is reduced to less than
$3\sigma$.
The upper limits to
gamma-ray emission for these observations are shown in Fig.~\ref{flux}.
We have also included, in table~\ref{obs}, a list of
upper limits to gamma-ray emission for each individual observation.
\begin{figure}
\picplace{60mm}
\special{psfile=sig.eps hoffset=-5 hscale=50 vscale=50}
\caption{Distribution of the significances of night by night
excesses for all sources.}
\label{sig}
\end{figure}
\section{Discussion}
The interpretation of upper limits from BL~Lacs is difficult, because at
the present time there is no detailed model to predict TeV fluxes from
the different classes of BL~Lacs.
\cite{ste96} have
attempted to predict TeV fluxes from a number of nearby XBLs by assuming
that all XBLs have similar spectral characteristics to the known TeV
emitter Mkn~421. \cite{mac95} argue that the observed X-ray/TeV gamma-ray
flux increases from Mkn~421 during flares indicate
that the TeV gamma-rays are produced
by a synchrotron self-Compton (SSC) mechanism.
\cite{ste96} note that the Compton emission from the
electrons in the jet has a similar spectrum to the synchrotron
component, but upshifted by the square of the
maximum Lorentz factor (estimated to be $\sim 10^{4.5}$ for Mkn~421)
of the electrons. For Mkn~421 and PKS2155$-$304
(both XBLs detected by EGRET) the luminosities in the Compton and
synchrotron regions are nearly equal. Assuming that other XBLs
also have $L_{C}/L_{syn} \sim 1$, \cite{ste96} derive the following
relationship
between X-ray and TeV source fluxes :\\
$\nu_{TeV}F_{TeV} \sim \nu_{x}F_{x}$\\
{\noindent}where $\nu$ is the frequency and $F$ the flux in each energy band.
Using an assumed $E^{-2.2}$ spectral index for all XBLs, and absorption
of TeV gamma-rays in the intergalactic infra-red based on an average of
Models 1 and 2 from \cite{ste97}, they estimate TeV fluxes above 1\,TeV
from a number of nearby XBLs based on EXOSAT X-ray flux measurements.
It should be noted that a more recent paper (\cite{ste98}) concludes that
the absorption of TeV gamma-rays in the infra-red background is
overestimated in \cite{ste97}. Allowing for this, the predicted flux
for PKS2005$-$489 above 1\,TeV is
F($>1 {\rm TeV})=1.3 \times 10^{-12}$ photons ${\rm cm}^{-2}{\rm s}^{-1}$.
Above 2\,TeV (the threshold of CANGAROO in this analysis) this flux would
be F($>2{\rm TeV})=0.34 \times 10^{-12}$ photons ${\rm cm}^{-2}{\rm s}^{-1}$,
below the flux sensitivity of the measurement made in this paper.
The photon spectral index assumed in \cite{ste96}
is now incompatible with recent measurements of Mkn~421 and Mkn~501.
It is also possible that the SSC mechanism is not primarily
responsible for
the TeV gamma-ray emission and a number of other mechanisms have
been suggested (\cite{man93,der94} and references therein).
Of the other predictions of \cite{ste96},
\cite{ker95} derive
an upper limit to the $>$0.3~TeV emission
for the XBL 1ES\,1727+502 that is above the calculated flux.
It is also worth noting that a deep exposure of the RBL BL~Lacertae
with the Whipple telescope did not yield a detection
(\cite{cat97b}).
Current and future observations of a range of
BL~Lacs by ground based \v Cerenkov telescopes should help to clarify
the mechanisms for the production of high energy photons in these sources.
\section{Conclusion}
Analysis of CANGAROO data shows no evidence for long-term or short-term
emission
of gamma-rays above 2\,TeV from the
BL~Lacs PKS0521$-$365, EXO\,0423.4$-$0840,
PKS2005$-$489 and PKS2316$-$423. The $2\sigma$ upper
limits to steady emission are
$1.0 \times 10^{-12} {\rm cm}^{-2}{\rm s}^{-1}$ (PKS0521$-$365),
$1.1 \times 10^{-12} {\rm cm}^{-2}{\rm s}^{-1}$ (EXO\,0423.4$-$0840),
$1.1 \times 10^{-12} {\rm cm}^{-2}{\rm s}^{-1}$ (PKS2005$-$489), and
$1.2 \times 10^{-12} {\rm cm}^{-2}{\rm s}^{-1}$ (PKS2316$-$423).
For the XBL PKS2005$-$489 the flux limits presented in this paper
do not constrain the TeV emission levels predicted by the simple model
of \cite{ste96}.
\begin{acknowledgements}
The authors would like to thank O. de Jager for helpful comments.
This work is supported by a Grant-in-Aid in Scientific Research from the
Japan Ministry of Education, Science and Culture, and also by the
Australian Research Council and the International Science and
Technology Program. MDR acknowledges the receipt of a JSPS fellowship from
the Japan Ministry of Education, Science, Sport and Culture.
\end{acknowledgements}
\section{Introduction}
Galaxies cluster not only in physical space, but in color space as well \citep{Strateva+01,Bell+04}. The advent of CCD technology revealed a strong dichotomy in galaxy colors: a tightly-packed red sequence (RS; predominantly quiescent, passively evolving ellipticals) and a broader blue cloud (BC; predominantly active, star-forming spiral galaxies) \citep{Bower_Lucey_Ellis_1992,Schawinski+14}. Galaxies that fall between the RS and BC populate the `green valley' (GV).
\iffalse
\footnote{
Historical note: It was ca. 2007 when the terms BC \& GV gained mainstream traction. \citet{Faber+07} mentions ``blue cloud'' \& `valley' (though it was called a `valley' at least as early as 2004 \citep{Weiner+05}); a few months later, first ``green valley'' references in \citet{Wyder+07, Schiminovich+07}.
} \fi
Astrophysically, the RS serves as an imperfect proxy for selecting galaxies with low specific star formation rate (sSFR).
Star formation decays naturally with age: stellar populations older than roughly $1~{\rm Gyr}$ become almost uniformly red, implying that the reddest galaxies have essentially no star formation \citep{CG10}.
Due to the Gaussian random nature of $\Lambda$CDM initial conditions, the densities of peaks on different scales are coupled, such that the earliest forming galaxies reside in regions destined to host clusters of galaxies \citep{Springel+05}. As a result, clusters naturally contain an older galaxy population than the field. Other dynamical processes that can shut down star formation are also enhanced in proto-cluster environments.
Major mergers between galaxies cause rapid morphological and chromatic shifts from blue spirals towards red ellipticals. Effects such as ram-pressure stripping and AGN feedback blow away gas from high density regions, rapidly diminishing star formation---or {\it quenching}---the galaxy \citep{Schawinski+14}.
Galaxy clusters are natural hotbeds for merging, ram-pressure stripping, and AGN feedback as well, so they serve as ideal nodes at which to find quenched galaxies.
The distribution of sSFR is skew-lognormal, with a peak of blue active star-forming galaxies at ${\rm sSFR} \sim 10^{-10}~{\rm yr}^{-1}$ at low redshift (the galactic main sequence) and a tail towards lower sSFR \citep{Wetzel+12,Eales+18}. This form suggests that the sSFR frequency distribution could be modeled as a dual Gaussian mixture.
Further strengthening this duality, the scatter in photometric color decreases drastically as sSFR decreases, such that galaxies with ${\rm sSFR} \lesssim 10^{-11.3}~{\rm yr}^{-1}$ share approximately the same color \citep{Eales+17}, thus creating an exceptionally narrow distribution of colors for quiescent galaxies. These factors combined then produce a dual Gaussian in photometric color \citep[see e.g.][]{Baldry+04,Hao+09}: a narrow component for the low-sSFR RS and a wider component for the high-sSFR BC.
Since the red sequence is particularly strong in clusters, it serves as a strong key for galaxy cluster selection.
Identification of clusters by their RS was first proposed by \citet{Gladders&Yee2000}. Since galaxies redden with age, ignoring galaxies bluer than a given cluster's RS removes essentially all galaxies at lower redshifts, efficiently reducing foreground contributions.
The maxBCG algorithm \citep{Koester+07} further improved cluster selection, using a hard $\pm 2 \, \sigma_{\rm RS}$ cut in photometric color to select clusters. The algorithm defines richness, $N_{200}$, as the count of red galaxies within an estimated virial radius, $R_{200}$. This count of virialized galaxies within the cluster serves as a halo mass proxy \citep{Rozo+09_constraining}.
\citet{Rozo+09_improvement} developed an improved richness estimate $\lambda$: the sum of RS membership probabilities for a given cluster, which included a more nuanced cutoff radius and a Gaussian color filter.
More recent algorithms and surveys have extended the RS's use for cluster cosmology.
To further improve the red/blue galaxy distinction, \citet{Hao+09} developed a single-color error-corrected Gaussian Mixture Model (ECGMM) in color-magnitude space. As compared to a typical GMM, their ECGMM accounted for photometric errors contributing to the scatter. Again, they selected RS galaxies within a hard $\pm 2 \sigma_{\rm RS}$ cut.
Around this time, the first results from the SpARCS survey \citep[the \textit{Spitzer} Adaptation of the Red-sequence Cluster Survey;][]{Wilson+09,Muzzin+09} produced hundreds of $z>1$ cluster candidates using a selection method similar to that of \citet{Gladders&Yee2000}.
Later, \citet{Rykoff+14} designed the redMaPPer algorithm, which selects RS galaxies in a multi-color + magnitude space, giving a redshift-continuous, multi-color update to richness.
Building on similar methodology, \citet{Rozo+16} introduced the redMaGiC algorithm to select luminous red galaxies, estimating galactic redshifts with high accuracy.
These methods serve as a basis for DES cluster finding in cosmological analyses \citep{Rykoff2016DESSVredmapper, Abbott+20}, with the richness indicator $\lambda$ serving as a mass proxy.
We present {\bf Red Dragon}: a multivariate Gaussian mixture model to select the RS along with other galactic populations. Red Dragon gives a consistent RS definition and continuous red fraction across redshift, characterizing well the underlying photometric distribution of galaxies.
Red Dragon follows the historical trend of moving from quantized (e.g. binary) classifications towards continuous, probabilistic definitions.
Where once galaxies were purely classified as ellipticals or spirals,
continuous morphological parameters now allow for more precise morphological characterization of galaxies \citep{Conselice_2014}.
Similarly, the RS has historically been selected as a hard cut in color--magnitude \citep[e.g.][]{Hao+09} or color--color space \citep[e.g.][]{Whitaker+12}, but these cuts lack the nuanced information available from the full multi-color space of 4+ band surveys.
Red Dragon now offers a probabilistic and smooth RS definition across redshift.
We begin in section~\ref{sec:motives} by introducing the datasets used in this analysis, along with an extended discussion of motivations for our method.
Section~\ref{sec:methods} then details the truth labels used to quantify goodness of RS fit and expounds technical features of the algorithm, such as the optimal number of Gaussian components to use.
Section~\ref{sec:results} presents several results of this method.
Finally, section~\ref{sec:conclude} summarizes the method and discusses future applications.
\section{Data \& Motivations} \label{sec:motives}
In this section, we introduce the three datasets used in this work (\S\ref{sec:datasets}) and then explain the chief motivations for developing this algorithm: the redshift drift of the 4000~\AA\ break (\S\ref{sec:z_drift}) and the information to be gained from multi-color analysis (\S\ref{sec:beyond}).
\subsection{Datasets} \label{sec:datasets}
\begin{table}\centering
\caption{
The various datasets used in this analysis.
The baryon pasting algorithm ADDGALS created Buzzard's synthetic galaxy catalog, assigning SDSS-like galaxies to an underlying N-body simulation.
}
\begin{tabular}{ccccc}
\hline
Dataset & Type & Redshift & sSFR & $N_{\rm gal}$ \\
\hline
SDSS/low-$z$ & Observation &
$0.1 \pm 0.005$ & Yes & 44 452 \\
SDSS/mid-$z$ & Observation &
$(.3, .5)$ & Yes & 90 609 \\
TNG300-1 & Hydro sim &
$0.1$ & Yes & 62 230 \\
Buzzard & Synthetic &
$[0.05,0.84]$ & No & 91 004 552 \\
\hline
\end{tabular}
\label{tab:datasets}
\end{table}
We analyze galaxies from the three datasets listed in table~\ref{tab:datasets}: local observed galaxies from SDSS \citep{Szalay+02}, galaxies at low redshift produced by the hydrodynamic simulation IllustrisTNG \citep[][\href{https://www.tng-project.org/data/docs/specifications/\#sec5k}{Model C}, \href{https://www.tng-project.org/files/TNG300-1_StellarPhot/}{observed frame}] {Nelson+18}, and a wide redshift sample from the Buzzard Flock synthetic galaxy catalog \citep{DeRose+19,DeRose+21}. All galaxy samples are luminosity limited such that $L_i(z) > 0.2 \, L_{*,i}(z)$ using the $i$-band characteristic luminosity as a function of redshift, $L_{*,i}(z)$ as defined in \citet{Rykoff+14}. These samples offer complementary tests of Red Dragon's ability to identify the quiescent galaxies of the RS.
SDSS galaxies were selected from a spectroscopic sample.
For the low-redshift $z = 0.1 \pm .005$ sample\footnote{SDSS/low-$z$ sample extracted from SDSS \href{http://skyserver.sdss.org/dr16/en/tools/search/sql.aspx}{SkyServer} with \href{https://pastebin.com/HeBmQYqz}{this SQL script}.}, redshift errors were typically $\lesssim 10^{-4}$. We limit summed photometric error to be below $0.3$ to exclude galaxies with poor photometry. Specific star formation rates were calculated using methods from \citet{Conroy+09} and are employed as a truth label to test against Red Dragon's selection of quenched galaxies.
The other SDSS galaxy sample\footnote{SDSS/mid-$z$ sample extracted from SDSS \href{http://skyserver.sdss.org/dr16/en/tools/search/sql.aspx}{SkyServer} with \href{https://pastebin.com/aAMGY6TW}{this SQL script.}}, spanning redshifts $0.3$ to $0.5$, tests Red Dragon's ability to smoothly select red galaxies as the 4000~\AA\ break crosses filters. This sample has typical redshift error $\lesssim 10^{-3}$, and our required redshift error of less than 0.05 excludes fewer than one in $10^4$ galaxies.
We also use the Illustris TNG300-1 cosmological hydrodynamic simulation at redshift $z = 0.1$ (with synthesized SDSS photometry).
These galaxies have a truth label for sSFR derived from each galaxy's star formation history. As noted in section~\ref{sec:SDSS_limits_of_hard_cuts} and as illustrated in appendix~\ref{apx:SDSS_vs_TNG_hist}, the distributions of galaxy colors do not match well those of SDSS, making application to this sample more a test of Red Dragon's robustness than of its RS selection capacity.
The Buzzard synthetic galaxy catalog is a wide-area galaxy sample that extends the redshift range of our analysis to $z = 0.84$.
To create the Buzzard galaxy catalog, the {\sc ADDGALS} algorithm \citep{BW08,Wechsler+21} populated lightcone outputs of N-body simulations with galaxies. The empirical method introduces galaxy bias using a local dark matter density measure, and colors are applied using templates tuned to SDSS and other observed galaxy samples. The method reproduces well the magnitude counts and two-point clustering of galaxies at $z < 1$, but massive clusters are somewhat underpopulated as compared to observations \citep{DeRose+21, Wechsler+21}.
\begin{figure*}\centering
\includegraphics [width=\linewidth] {figs/bi_panel}
\caption{
SDSS low-redshift galaxy sample ($z = 0.1 \pm 0.005$) plotted in CC space.
The color $u-r$ gives a relatively clean measurement of the strength of the 4000~\AA\ break, measuring current SFR.
The color $r-i$ measures the post-break slope, an indicator of dust content (somewhat degenerate with age and metallicity).
{\bf Left:} colored by log specific star formation rate, with values below roughly -11 corresponding to the quenched population.
{\bf Right:} colored by $g -2r +i$, a pseudo finite difference second derivative near the 4k\AA\ break, corresponding roughly with metallicity.
}
\label{fig:bi_panel}
\end{figure*}
\subsection{Redshift drift of 4000~\AA\ break} \label{sec:z_drift}
Our main motivation in creating the Red Dragon algorithm originated in the redshift drift of
``the 4000~\AA\ break'': a sharp drop in spectral intensity at short wavelengths and the primary effect of quenching on a galaxy's spectrum.
This break has two main sources.
Though only certain wavelengths larger than 3645~\AA\ can be absorbed by excited ($n=2$) Hydrogen, any wavelength shorter than that will fully ionize an electron. This asymptote of the Balmer series at 3645~\AA\ results in a sharp drop of intensity towards shorter wavelengths \citep{Mihalas_1967}.
Meanwhile, stellar production of metals results in a blanket of line absorption, reddening the spectrum around 4000~\AA~\citep{Worthey_1994}.
These two effects conspire to cause a strong suppression of emission at wavelengths shorter than $3800 \pm 200$~\AA.
\begin{table}\centering
\caption{
Approximate redshift ranges over which each band will measure 4k\AA\ rest wavelengths, given for both \href{https://www.sdss.org/instruments/camera/\#Filters}{SDSS} \citep{Doi+10} and \href{http://data.darkenergysurvey.org/aux/releasenotes/DESDMrelease.html}{DES} \citep{Abbott+18} photometries.
}
\label{tab:4kA_break}
\begin{tabular}{lll}
\hline
band & $z_{\rm break,SDSS}$ & $z_{\rm break,DES}$ \\
\hline
g & $[ 0.0, 0.36)$ & $[ 0.0, 0.38)$ \\
r & $[ 0.36, 0.71)$ & $[ 0.38, 0.78)$ \\
i & $[ 0.71, 1.06)$ & $[ 0.78, 1.13)$ \\
z & $[ 1.06, 1.37)$ & $[ 1.13, 1.50)$ \\
Y & N/A & $[ 1.38, 1.55)$ \\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:4kA_break} shows approximate redshifts at which a rest frame wavelength of 4000~\AA\ is observed in each observational band for both SDSS and DES. The difference in magnitude of the bands {\it surrounding} the band in which the break resides gives the cleanest measure of D4000 (the ratio of intensity on either side of the break), which gives an excellent estimate of sSFR. The optimal photometric color for RS selection thus changes with redshift.
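The ranges in Table~\ref{tab:4kA_break} follow directly from the filter boundaries: the break is observed in a band whenever $\lambda_{\rm lo} < 4000\,{\rm \AA}\,(1+z) < \lambda_{\rm hi}$. The sketch below reproduces the SDSS column of the table using approximate boundary wavelengths assumed for illustration (not the exact filter definitions).
\begin{verbatim}
# Approximate SDSS band boundaries in Angstroms (assumed values).
SDSS_BANDS = {"g": (4000, 5450), "r": (5450, 6850),
              "i": (6850, 8250), "z": (8250, 9500)}

def break_redshift_range(lam_lo, lam_hi, lam_break=4000.0):
    """Redshift interval over which a rest-frame feature at
    lam_break falls between lam_lo and lam_hi."""
    return max(lam_lo / lam_break - 1.0, 0.0), lam_hi / lam_break - 1.0

for band, (lo, hi) in SDSS_BANDS.items():
    z_lo, z_hi = break_redshift_range(lo, hi)
    print(f"{band}: [{z_lo:.2f}, {z_hi:.2f})")
\end{verbatim}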
If a RS selector uses only one color at a time, with discrete jumps in photometric color at certain transition redshifts, these hard transitions can result in an $\mathcal{O}(10\%)$ shift in red fraction, $f_R(z)$ \citep[up to $\sim 16\%$; see e.g.][]{Nishizawa+18}.
This jolt in red fraction would echo in single-color richness estimates based on a count of bright red galaxies in a cluster. Two identical clusters on either side of a redshift transition could then have significantly different $\lambda_{\rm col}$ values, introducing non-trivial systematic errors in halo mass--richness scaling relations.
Evolving a multi-color Gaussian mixture across redshift smoothly defines the RS, obviating the discontinuities caused by color swapping.
Taking all colors into account simultaneously allows for a continuous and consistent RS out to high redshifts.
\subsection{Beyond the 4000~\AA\ break} \label{sec:beyond}
Though a galaxy's quenched status primarily manifests through the strength of D4000,
other astrophysical factors such as age, dust, or metallicity separate RS from BC photometrically.
(For a summary of main effects of galaxy properties on optical spectra, see Figure~\ref{fig:cartoon_gal_spec}.)
Though D4000 can be estimated using a single photometric color, multi-color analysis serves to better distinguish the RS from the BC.
Figure~\ref{fig:bi_panel} illustrates this for the low-$z$ SDSS galaxy sample. While the horizontal axis $(u-r)$ correlates highly with D4000, the vertical axis $(r-i)$ gives a degenerate measure of dust content and other properties. The left panel colors points by specific star-formation rate (where $\log_{10} {\rm sSFR} \sim -11$ separates quenched from star-forming galaxies) while the right panel colors points by $(g-2r+i)$. The latter visibly correlates with the former.
This composite feature of $(g - 2r + i)$ acts as a pseudo second derivative, approximating here the spectrum curvature near the 4000~\AA\ break. While ordinary single-color (a vertical line on this plot) or CC selection (an angled line on this plot) would be ignorant of such information, the curvature information clearly correlates with sSFR and would aid in selecting the quenched population. Even a perfectly positioned hard line cut would be inherently limited in selecting quenched galaxies (see Figure~\ref{fig:bACC_CM_CC_GM}).
A multi-color Gaussian mixture simply includes such curvature terms using the primary color space (i.e. differences between neighboring bands; see equation~\ref{eqn:primary_SDSS}). Here we have $(u-r) = (u-g) + (g-r)$ and $(g - 2r + i) = (g-r) - (r-i)$, showing up as $\pm 45^\circ$ directions in the multi-dimensional primary color space.
Furthermore, the populations overlap in both CM and CC spaces, limiting the power of hard cut selection. In contrast, Gaussian mixtures are {\it designed} to model such overlapping populations, making them a natural tool to consider in selecting the RS and BC.
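Both composite features discussed above are simple linear combinations of the primary colors, so no extra dimensions are needed to capture them; a brief illustration with placeholder values:
\begin{verbatim}
import numpy as np

# Primary SDSS colors [u-g, g-r, r-i, i-z] for one galaxy (example values).
c = np.array([1.6, 0.8, 0.4, 0.3])

u_minus_r = c[0] + c[1]   # (u-g) + (g-r) = u - r
curvature = c[1] - c[2]   # (g-r) - (r-i) = g - 2r + i
\end{verbatim}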
\iffalse
Thus we see that in addition to measuring D4000, information from other colors further differentiates RS from BC.
\wm{ [remnants of \S2.4; could delete completely:]
Since there's significant overlap between galaxy populations, hard cuts in single color (CM) or double color (CC) spaces are ill-suited for separating components.
Figure~\ref{fig:bi_panel} shows qualitatively that even a perfectly angled cut would remain ignorant of information gained from other bands.
Though Gaussian mixtures cannot perfectly model the photometric distribution of galaxies, GMs by nature estimate population membership probabilities for overlapping samples, making them a nice fit for the problem.
}
Optimal selection of the RS should extend beyond a single color (CM) or double color (CC) selection.
\fi
\iffalse
\subsection{Population overlap} \label{sec:mismatch}
In color space, the photometric distribution of galaxies well approximates a Gaussian mixture. The common RS selection method of hard cuts (i.e. drawing a line in CM or CC spaces to separate RS from BC) in contrast invokes significant systematic errors, as it fails to model the overlap between populations.
For the low-redshift SDSS sample, selecting galaxies via Gaussian mixtures results in $\delta \equiv \left| \mu_{\rm RS} - \mu_{\rm BC} \right| / \sqrt{{\sigma_{\rm RS}}^2 + {\sigma_{\rm BC}}^2}$ being in $[0.4,1.6]$ for the various possible colors, implying significant overlap between the populations.
For Buzzard, across redshift range $z|[.05,.70]$, Red Dragon found $\delta|[1,3]$, again implying non-trivial overlap between RS and BC.
Though hard cut selections can technically give continuous membership probabilities $P_{\rm RS}$, such probabilities would solely come from photometric error, rather than reflecting the overlapping nature of the model. This would yield more certain probabilities (i.e. values closer to zero or one), resulting in falsely skewed (and more discrete) red fractions and richness estimates. More careful probability estimates would better characterize the red sequence population.
\fi
In order to combat $f_R(z)$ discontinuities, move beyond hard cuts in photometry, and better select the photometry-space population of RS galaxies, we present the Red Dragon algorithm.
\section{Methods} \label{sec:methods}
Red Dragon is a novel method for calculating red sequence membership probabilities $P_{\rm RS}$. In its most general construction, a Red Dragon RS selector uses a Gaussian mixture in multi-color space to select populations of galaxies (RS, BC, and optionally additional components).
In \S\ref{sec:construction}, we outline the algorithm, including the sequence of operations and the relevant likelihood function.
Considerations when applying the algorithm are presented in \S\ref{sec:considerations}, discussing choices such as the optimal number of colors or model components.
\subsection{Algorithm Construction} \label{sec:construction}
Here we give an overview of the algorithm (\S\ref{sec:overview}), introduce the core likelihood function for Red Dragon (\S\ref{sec:llh}), and detail interpolation of GMM parameters across redshift (\S\ref{sec:interpolation}).
\subsubsection{Overview of algorithm} \label{sec:overview}
\begin{figure}\centering
\begin{tikzpicture}[node distance=1.5cm]
\node (input) [io] {Input Data: \\ $z$, $\delta_z$; $\vec{m}$, $\vec{\delta}_m$};
\node (lbl1) [vertex, left of=input, xshift=-1cm] {1};
\node (z_sel) [process, below of=input] {Probabilistic Redshift Selection};
\node (lbl2) [vertex, left of=z_sel, xshift=-1cm] {2};
\node (GMM) [process, below of=z_sel] {GMM};
\node (lbl3) [vertex, left of=GMM, xshift=-1cm] {3};
\node (output) [io, below of=GMM] {Output~Data: $\{ \vec{\theta}_\alpha \}$ (per~$z$-bin)};
\node (lbl4) [vertex, left of=output, xshift=-1cm] {4};
\draw [arrow] (input) -- (z_sel);
\draw [arrow] (z_sel) -- (GMM);
\draw [arrow] (GMM) -- (output);
\node (match) [process, below of=output] {Component Matching};
\node (lbl5) [vertex, left of=match, xshift=-1cm] {5};
\node (interp) [process, below of=match] {Interpolate};
\node (lbl6) [vertex, left of=interp, xshift=-1cm] {6};
\node (RD) [io, below of=interp] {Red Dragon: $\vec\theta_\alpha(z)$, $P_{\rm mem}$};
\node (lbl7) [vertex, left of=RD, xshift=-1cm] {7};
\draw [arrow] (output) -- (match);
\draw [arrow] (match) -- (interp);
\draw [arrow] (interp) -- (RD);
\draw [arrow,gray] (RD) -| ([xshift=.75cm]RD.east) |- node[anchor=south,rotate=270,xshift=3cm]{re-initialize components} (GMM);
\node (lbl8) [vertex, right of=match, xshift=+1.5cm] {8};
\end{tikzpicture}
\caption{
Work flow for RD algorithm. The process has two main portions: (1--4) moving from input data to creating GMM parameterizations $\{ \vec\theta_\alpha \}_i$ at each redshift slice $i$ and (4--7) interlinking components across redshift from the disarranged collection $\{ \vec\theta_\alpha \}$ to parameterize redshift-continuous GMM components, $\vec\theta_\alpha(z)$, and thereby membership probabilities $P_{{\rm mem},\alpha}(z,\vec m, \vec \sigma_m)$.
The grey arrow (8) indicates that after a dragon is created, it may be used to initialize components in fitting the GMM.
This is especially useful for outlier redshift bins that did not fit like their neighbors.
By default, the algorithm does a single pass (1--7) fitting sparsely with the \texttt{sklearn} GMM, then uses that fit to inform initial conditions for the \texttt{pyGMMis} GMM, which gives the second pass (4--7) for a final fitting.
}
\label{fig:flowchart}
\end{figure}
Broadly speaking, there are two stages to the Red Dragon algorithm, as illustrated in Figure~\ref{fig:flowchart}.
In the first stage, it segments the data into discrete redshift shells\footnote{
Though we describe fits as functions of redshift here, any secondary variable may be used (given a thin enough redshift extent for the data), such as stellar mass or a single photometric band.
} and finds GMM parameterizations for each slice.
In the second stage, it matches components across redshift bins and interpolates parameters. This results in continuous and consistent definition of the RS and BC (and optionally further components).
\paragraph*{First stage: redshift-discrete fits.}
Red Dragon reads in photometry and redshift information as supplied by the user (see the values in Figure~\ref{fig:flowchart}, box~1).
Input redshift estimates $z$ and errors $\delta_z$ allow for binned analysis of redshift evolution.
From input magnitudes $\vec m$ and magnitude errors $\vec\delta_m$, Red Dragon calculates colors $\vec c$ and the corresponding noise covariance matrix $\Delta$ (see equation~\ref{eqn:noise}).
This set of input variables are then sent to a GMM to find fit parameters $\vec \theta_\alpha$ in each redshift bin. See section~\ref{sec:llh} for details on the likelihood model and fitting process.
At this point, the algorithm has saved as output data $\{ \vec\theta_\alpha \}_i$, i.e. at each redshift bin $i$ and for each component $\alpha$, it saves Gaussian mixture parameters $\vec\theta$ (see table~\ref{tab:parameters}).
These components are unordered at this point, so the $\alpha=0$ component in redshift bin $i=1$ may not correspond to the $\alpha=0$ component in redshift bin $i=2$.
This set of Gaussian parameterizations must be linked across redshift continuously to avoid spurious rapid changes of classification over small redshift ranges.
\paragraph*{Second stage: redshift-continuous fits.}
Using the calculated set of Gaussian mixtures, $\{ \vec\theta_\alpha \}_i$, the algorithm now matches similar Gaussian components across redshift bins.
While distinguishing continuous components across redshift is relatively simple for two-component models ($K=2$), matching for even three components ($K=3$) can be challenging.
(Does the wide, redder portion of the BC connect to the narrower but similar in color component in the adjacent redshift bin, or does it connect to the component with similar scatter despite its significantly bluer color?)
Despite these challenges, the matching process can be largely automated, as discussed in appendix~\ref{apx:continuity}.
After successful matching across redshift bins, the now linked set of parameters $\{ \vec\theta_{\alpha'} \}_i$ can then be interpolated across redshift, giving the continuous parameterizations $\vec\theta_\alpha(z)$. These continuously evolving Gaussian components can then yield component membership probabilities for a galaxy at a given redshift with given photometry for each component $\alpha$: $P_{\rm mem,\alpha} (z, \vec{m}, \vec{\delta}_m)$.
This interpolated parameterization now presents the user with a trained dragon, smoothly characterizing each of the populations across redshift.
\subsubsection{Likelihood model} \label{sec:llh}
\begin{table}\centering
\begin{tabular}{clcl}
\hline
$\theta_\alpha$ & model parameters &
$x_j$ & galaxy data \\
\hline
$w_\alpha$ & weight (where $\sum_\alpha w_\alpha = 1$) &
$\vec c_j$ & color \\
$\vec \mu_\alpha$ & mean color &
$\vec \delta_j$ & errors on galaxy colors \\
$\Sigma_\alpha$ & intrinsic covariance &
$\Delta_j$ & noise covariance \\
\hline
\end{tabular}
\caption{
Summary of variables in likelihood model (see equations~\eqref{eqn:L_ij} and~\eqref{eqn:RD6C}).
Left column shows GMM model parameters $\theta_\alpha$ characterizing component $\alpha$ of $K$; right column shows input data for galaxy $j$ of $N_{\rm gal}$.
Galaxy colors, with their respective errors and noise covariance, are calculated from input magnitudes $\vec m \pm \vec\delta_m$ within redshift bins (determined by galaxy $z \pm \delta_z$).
}
\label{tab:parameters}
\end{table}
Red Dragon employs a multi-dimensional Gaussian mixture model which accounts for photometric errors; parameters of the Gaussian mixture evolve with redshift to give continuous characterizations of each GMM component.
A Gaussian mixture of $K$ components in a single color $c$ has a set of parameters $\theta$ that constitute the model. For each component $\alpha$, this parameter set includes the component weight $w_\alpha$, mean color $\mu_\alpha$, and the intrinsic population scatter $\sigma_\alpha$ (see table~\ref{tab:parameters} for a summary of parameters used). The weights are normalized such that $\sum_\alpha w_\alpha = 1$. For a set of $N_{\rm gal}$ galaxies with input colors $\{ c_j \}$ and color errors $\{ \delta_j \}$, the model parameters maximize the likelihood
\begin{equation} \label{eqn:H09}
\mathcal{L} ( \theta \big| x ) = \prod_{j=1}^{N_{\rm gal}} \left\{ \sum_{\alpha=1}^K \frac{w_\alpha}{\sqrt{2\pi ({\sigma_\alpha}^2 + {\delta_j}^2)}} \exp\left[ - \frac12 \frac{(c_j-\mu_\alpha)^2}{{\sigma_\alpha}^2 + {\delta_j}^2} \right] \right\} .
\end{equation}
This type of error-corrected Gaussian mixture model (ECGMM) was introduced by \citet{Hao+09} with SDSS $g-r$ as the color classifier.
Expanding this model into an $N$-dimensional color space requires that we employ for each component $\alpha$ an intrinsic color covariance matrix $\Sigma_\alpha$. The errors must then be handled as a noise covariance matrix $\Delta_j$ for each galaxy.
Consider the DES four-band optical $griz$ photometry used in Buzzard, with input magnitudes $\vec m = [m_g, \, m_r, \, m_i, \, m_z]$. We define a vector of {\bf primary colors} based on neighboring photometric bands:
\begin{equation}\label{eqn:primarycolor}
\vec c = [g-r, \, r-i, \, i-z].
\end{equation}
Colors are derived from magnitudes by the matrix operation $\vec c = A \, \vec m$, where the transform matrix is
\begin{equation}
A = \begin{bmatrix}
1 & -1 & 0 & 0 \\
0 & 1 & -1 & 0 \\
0 & 0 & 1 & -1
\end{bmatrix}.
\end{equation}
We assume that the photometric errors of each galaxy are determined independently in each band, and so take them to be uncorrelated. The magnitude error covariance matrix $M_j$ for each galaxy is then diagonal.
Transformed to the space of primary colors (equation~\eqref{eqn:primarycolor}), the noise covariance of galaxy $j$ is then
\begin{equation} \label{eqn:noise}
\Delta_j = A \, M_j \, A^\top =
\begin{bmatrix}
{\delta_g}^2 + {\delta_r}^2 & -{\delta_r}^2 & 0 \\
-{\delta_r}^2 & {\delta_r}^2 + {\delta_i}^2 & -{\delta_i}^2 \\
0 & -{\delta_i}^2 & {\delta_i}^2 + {\delta_z}^2
\end{bmatrix}_j
\end{equation}
where $\delta_x$ above refer to the photometric error of band~$x$ for galaxy $j$.
Note that for this matrix to be non-singular, the selection of colors must be linearly independent (e.g. one cannot use each of $g-r$, $r-i$, and $g-i$ in an error-inclusive model). For symmetry, simplicity,
and to avoid singularity, we employ the set of primary colors.
The derivation of $\Delta_j$ is similar for SDSS photometry (which includes $u$ band). The primary color vector for $ugriz$ is then
\begin{equation}\label{eqn:primary_SDSS}
\vec c = [u-g, \, g-r, \, r-i, \, i-z]
\end{equation}
and the corresponding $\Delta_j$ matrices come from a straightforward extension of the above matrices.
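A short sketch of this construction for an arbitrary number of bands, assuming (as in the text) uncorrelated magnitude errors, is:
\begin{verbatim}
import numpy as np

def primary_color_transform(n_bands):
    """Matrix A mapping n_bands magnitudes to n_bands - 1
    primary colors (differences of neighboring bands)."""
    A = np.zeros((n_bands - 1, n_bands))
    for k in range(n_bands - 1):
        A[k, k], A[k, k + 1] = 1.0, -1.0
    return A

def color_noise_covariance(mag_err):
    """Noise covariance Delta_j = A M_j A^T for one galaxy,
    where M_j = diag(mag_err**2)."""
    A = primary_color_transform(len(mag_err))
    M = np.diag(np.asarray(mag_err) ** 2)
    return A @ M @ A.T

# e.g. DES griz errors -> 3x3 covariance in (g-r, r-i, i-z)
Delta_j = color_noise_covariance([0.02, 0.01, 0.01, 0.03])
\end{verbatim}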
The likelihood of this error-cognizant $N$-dimensional Gaussian mixture model is then
\begin{equation} \label{eqn:RD6C}
\mathcal{L} ( \theta \big| x) =
\prod_{j=1}^{N_{\rm gal}} \sum_{\alpha=1}^K
\mathcal{L}_\alpha ( \theta_\alpha \big| x_j)
\end{equation}
where the likelihood for each galaxy $j$, component $\alpha$, is
\begin{equation}
\begin{aligned} \label{eqn:L_ij}
\mathcal{L}_\alpha ( \theta_\alpha \big| x_j) = \,
& \frac{w_\alpha}{\sqrt{(2\pi)^N \left| \Sigma_\alpha + \Delta_j \right|}} \\
& \times \exp
\left[
-\frac12
(\vec c_j - \vec \mu_\alpha)^\mathrm{T}
(\Sigma_\alpha + \Delta_j)^{-1}
(\vec c_j - \vec\mu_\alpha)
\right] .
\end{aligned}
\end{equation}
Here, $x_j$ includes all primary colors $\vec c_j$ as well as the noise covariance matrix $\Delta_j$ for each galaxy (see table~\ref{tab:parameters}).
At individual redshift slices, we use the error-inclusive Gaussian Mixture package \href{https://github.com/pmelchior/pygmmis} {pyGMMis} \citep{Melchior_Goulding_2018} to find best-fit parameters $\theta_\alpha$ for each component $\alpha$.
Without a reasonable input for a first guess at parameters, pyGMMis sometimes struggles to properly characterize populations. To provide a rough first guess, we first sparsely fit the data using \texttt{sklearn}'s error ignorant \href{https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html}{GaussianMixture} package \citep{sklearn}.
This extremely quick fit gives a rough initial guess to the fit parameters, yielding better results than running \texttt{pyGMMis} blind.
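To make the model concrete, a minimal evaluation of equations~\eqref{eqn:RD6C} and~\eqref{eqn:L_ij} for fixed parameters is sketched below (the fitting itself is delegated to the packages named above; this sketch only evaluates the likelihood).
\begin{verbatim}
import numpy as np

def component_likelihood(c, Delta, w, mu, Sigma):
    """L_alpha(theta_alpha | x_j): one galaxy (colors c, noise
    covariance Delta) under one weighted Gaussian component."""
    C = Sigma + Delta
    d = c - mu
    norm = w / np.sqrt((2.0 * np.pi) ** len(c) * np.linalg.det(C))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(C, d))

def mixture_likelihood(c, Delta, components):
    """Sum over components; each component is a (w, mu, Sigma) tuple."""
    return sum(component_likelihood(c, Delta, w, mu, S)
               for w, mu, S in components)
\end{verbatim}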
\subsubsection{Fit interpolation} \label{sec:interpolation}
Red Dragon interpolates best-fit parameters across redshift bins, continuously defining populations.
After fitting weights, the normalization $\sum_\alpha w_\alpha(z) = 1$ is re-enforced.
To interpolate the covariance matrix, log variances are interpolated first, followed by interpolating the correlations (enforcing $|\rho| \leq 1$), which together then provide a better fit than purely fitting the covariance matrix all at once (which could result in unphysical negative variances).
Fitting is linear by default (with flat endpoint extrapolation), but other methods such as smoothed spline interpolation \citep[SciPy:][]{2020SciPy-NMeth} or kernel-localized linear regression \citep[KLLR:][] {Farahi+18, Anbajagane+20} are available to give smoother fits.
These redshift-continuous fits can then predict, for each individual galaxy, its membership likelihood for each component.
The probability that galaxy $j$ is a member of GMM component $\alpha$ is
\begin{equation}
P_\alpha(x_j) = \frac{ \mathcal{L}_\alpha (\theta_\alpha \big| x_j) } { \sum_\beta \mathcal{L}_\beta ( \theta_\beta \big| x_j)}.
\end{equation}
A two-component model would then have red sequence membership probability $P_{\rm RS} = \mathcal{L}_{\rm RS} / (\mathcal{L}_{\rm RS} + \mathcal{L}_{\rm BC})$.
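In terms of the likelihood sketch above, this normalization is a few lines: the component parameters are evaluated at the galaxy's redshift via the interpolated $\vec\theta_\alpha(z)$, and the component likelihoods are normalized over all components.
\begin{verbatim}
import numpy as np

def membership_probabilities(c, Delta, components):
    """P_alpha for each component; components is a list of
    (w, mu, Sigma) tuples evaluated at the galaxy's redshift.
    Relies on component_likelihood from the sketch above."""
    L = np.array([component_likelihood(c, Delta, w, mu, S)
                  for w, mu, S in components])
    return L / L.sum()

# Two-component example (placeholder parameter tuples):
# P_RS, P_BC = membership_probabilities(c_j, Delta_j, [theta_RS, theta_BC])
\end{verbatim}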
This parameterization results in a redshift-continuous definition of the red sequence over large redshift spans, without the jumps or transitions incurred by single- or double-color RS selection. Its more objective definition of the RS characterizes the nuances of galaxy multi-color space better than hard cuts do.
\subsection{Algorithm Considerations} \label{sec:considerations}
In this section, we
define accuracy in selecting the quenched population for this analysis (\S\ref{sec:bACC}),
detail the accuracy gains from added bands (\S\ref{sec:optimal_bands}),
discuss the optimal count of Gaussian components (\S\ref{sec:elment_count}),
and discuss whether Gaussian features must be allowed to run with magnitude to accurately select the quiescent population (\S\ref{sec:magnitude_run}).
\subsubsection{Balanced Accuracy} \label{sec:bACC}
To quantify goodness of fit for the RS, we use the binary classification measure of `balanced accuracy,' comparing RS members selected by Red Dragon to the quenched population. We convert the Red Dragon red component probability to a binary RS classifier by the condition $P_{\rm RS} > 0.5$ and define quenched galaxies using a threshold in specific star formation rate as a function of redshift
\begin{equation}\label{eqn:quenched}
\log_{10} \left( {\rm sSFR} \cdot {\rm yr} \right) < -11 + z
\end{equation}
\citep[adapted from][for our mass and redshift ranges]{Moustakas+13}.
A more complicated determination of a truth label for the RS could include measures of stellar mass, dust, metallicity, and age as metrics to aid in separating RS and BC (see section~\ref{sec:beyond}). For example, one could simply add these into a GM or other machine learning structure along with the colors, giving the structure more information to aid in the separation.
While such a model may serve as a more accurate truth label to test against, our benchmark hard cut in sSFR defines a straightforward underlying truth in the photometric distribution of galaxies; its strong correlation with idealized galaxy characterization gives it value in discriminating between RS selectors.
Balanced Accuracy (also BA or bACC) takes the average of sensitivity and specificity, i.e. the true positive rate TPR$\equiv$TP/(TP+FN) and the true negative rate TNR$\equiv$TN/(TN+FP).
This compensates for unequal population ratios:
the relative weight between RS and BC varies significantly across redshift and magnitude, so bACC equally represents selection accuracy between the two populations.\footnote{
Note that a score of 50\% would be earned by a worthless test categorizing all as either solely positive or negative.
}
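A minimal sketch of this scoring, assuming arrays of RS membership probabilities and $\log_{10}({\rm sSFR}\cdot{\rm yr})$ values (the function names are ours; \texttt{sklearn.metrics.balanced\_accuracy\_score} provides an equivalent off-the-shelf implementation):
\begin{verbatim}
import numpy as np

def quenched(log_ssfr_yr, z):
    # quenched label from equation (eqn:quenched): log10(sSFR yr) < -11 + z
    return log_ssfr_yr < -11 + z

def balanced_accuracy(rs_selected, is_quenched):
    # bACC = (TPR + TNR) / 2, with 'quenched' as the positive class
    tp = np.sum(rs_selected & is_quenched)
    fn = np.sum(~rs_selected & is_quenched)
    tn = np.sum(~rs_selected & ~is_quenched)
    fp = np.sum(rs_selected & ~is_quenched)
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

# RS membership becomes a binary classifier via P_RS > 0.5:
# bacc = balanced_accuracy(P_RS > 0.5, quenched(log_ssfr, z))
\end{verbatim}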
We caution the reader that achieving 100\% accuracy is not only practically impossible (without overfitting), but also not quite ideal. Since the sSFR distributions of RS and BC overlap, a hard cut in sSFR to score selection would mischaracterize a set fraction of galaxies from each. More nuanced selection of the RS and BC (defined from the more complicated definition above) would then have an accuracy at some value below 100\%, though still high. Therefore, balanced accuracies from a hard cut in sSFR below 100\% should be no cause for worry, and indeed, could indicate the method is working properly.
Balanced accuracy requires a binary classification, so it is somewhat limited in its ability to score goodness of fit for ambiguous cases where e.g. the quen\-ch\-ed probability $P_{\rm Q} = 49\%$ (the chance, taking error bars into account, that equation~\ref{eqn:quenched} is true) but the RS membership probability $P_{\rm RS} = 51\%$ (the chance, derived from Gaussian Mixtures, that it belongs to the redder component).
The sSFR distribution is skewed (roughly lognormal), with the bulk of galaxies falling near the sSFR cut of equation~\eqref{eqn:quenched}, so intermediate probabilities are common, with $\sim 40\%$ of galaxies lying in $P_Q|(25,75)\%$. Quenched probabilities are therefore somewhat sensitive to the sSFR cut one uses to define the quenched population; any hard cut in sSFR will necessarily change the resulting bACC.
However, the distribution of $P_{\rm RS}$ values is strongly bimodal, with generally $\lesssim 15\%$ of galaxies lying in $P_{\rm RS}|(25,75)\%$ for Buzzard and $\lesssim 1\%$ for SDSS (TNG sSFR values are without errors).
Since probabilities generated by Red Dragon tend towards zero and one, the problem of hard cuts in binary classification is somewhat mitigated.
A simple binary classification metric aptly characterizes the large majority of galaxies and gives a simple measure for RS selection power.
\subsubsection{Accuracy gains from added colors} \label{sec:optimal_bands}
Typically, single-color RS selection uses a color constructed from the photometric band containing the 4000~\AA\ break and the longer-wavelength band immediately after ($g-r$ at low redshifts); two-color RS selection further includes the primary color with the next longest wavelength band after that ($r-i$ at low redshifts). However, other colors can aid in better distinguishing the RS from the BC.
For SDSS low-$z$, we used all possible colors (including the band-jumping secondary colors, like $g-i$ or $u-z$, in addition to the primary colors; only considering non-singular combinations of colors) to create single, double, and triple color Gaussian mixture models, revealing optimal color combinations along with accuracy gains from adding colors.
Comparing these optimized color groupings to our choice of using all primary colors for Red Dragon's spine, we can gauge to what extent selection accuracy depends on choice of the input color vector $\vec c$.
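As an illustration of this search over colour combinations (a simplified, error-ignorant sketch rather than the exact procedure used here; the magnitude dictionary and quenched labels are hypothetical inputs):
\begin{verbatim}
import numpy as np
from itertools import combinations
from sklearn.mixture import GaussianMixture
from sklearn.metrics import balanced_accuracy_score

def bacc_for_colors(color_pairs, mags, is_quenched):
    # fit a 2-component GMM on the chosen colours and score how well
    # the redder component recovers the quenched population
    X = np.column_stack([mags[a] - mags[b] for a, b in color_pairs])
    gm = GaussianMixture(n_components=2, covariance_type='full',
                         random_state=0).fit(X)
    red = np.argmax(gm.means_.sum(axis=1))  # redder component: larger mean colours
    is_rs = gm.predict_proba(X)[:, red] > 0.5
    return balanced_accuracy_score(is_quenched, is_rs)

# primary and band-jumping colours from ugriz:
all_colors = list(combinations('ugriz', 2))
# the best N-colour grouping maximizes bacc_for_colors over
# combinations(all_colors, N), keeping only non-singular sets
\end{verbatim}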
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/NC_bACC}
\vspace{-.75cm}
\caption{
Accuracy of selecting the quenched population in low-$z$ SDSS by Gaussian mixture for various combinations of input colors.
Each of the first three $N$-color groupings were optimized in their selection, having the highest median bACC of all possible $N$-color combinations.
Error bar plot shows bootstrap $\pm 2 \sigma$ quantiles.
}
\label{fig:cf_Ncol}
\end{figure}
Figure~\ref{fig:cf_Ncol} shows that for SDSS/low-$z$, single-color selection (using the optimal choice of either $g-r$ or $u-r$) selected the quenched population with $\sim 87\%$ accuracy.
Adding a second color (again using the optimal band choices from our analysis of all possible colors) improved balanced accuracy substantially ($+5.8\%$).
Optimal three-color selection gave a relatively small
increase in bACC ($+0.8\%$).
The four-color combination of the primary color vector (no optimization of color choices) performed similarly
to the best-case three-color combination ($-1.0\%$).
The primary color vector thus serves well as a blind baseline for selecting the quenched population.
\subsubsection{Optimal Gaussian Component Count} \label{sec:elment_count}
Though historically galaxy classification has been binary, galaxies transitioning from RS to BC are sometimes classified as members of the green valley (GV), adding a third category.
From an agnostic view of the color space data, components beyond two can simply be seen as an attempt to better model inherent non-Gaussianities in the populations \citep[see e.g.][]{Carretero+15}.
From an astrophysics view, galaxies quenched by different mechanisms belong to populations with distinct characteristics \citep{Peng+10,Davies+21,Dacunha+22}.
High-mass galaxies (which are primarily mass-quenched) have different trends for mean and scatter of colors than those of low-mass galaxies (which are primarily merger- or environment-quenched) \citep{Baldry+04},
so modeling them with distinct Gaussians could better represent the underlying populations.
For any of the above reasons, one may desire to model components beyond two.
Though the Red Dragon algorithm permits any number of components $K \geq 2$, different datasets or different luminosity cuts may favor particular component counts.
Appendix~\ref{sec:K_optimal} details our analysis of SDSS/low-$z$ for optimal component count.
In short, though Figure~\ref{fig:bi_panel} shows clear bimodality visually and two components indeed give a fair fit to the photometric color data, three components fit the distribution of galaxies in photometric color space significantly better. Using more than three components gave no significant improvement in fit.
For simplicity of discussion and comparison, we chiefly employ the minimal two-component model in our results section, but an investigation of the effects of increasing component count is detailed in section~\ref{sec:RS_by_K}, using the Buzzard simulation.
\iffalse
\footnote{
To quote the MICE collaboration: ``For our purposes, it is indifferent whether this green sequence corresponds to a physically distinct type of galaxies or just to the inadequacy of the Gaussian distribution to represent the red and blue populations.'' \citep{Carretero+15}
}
\fi
\subsubsection{Running with magnitude} \label{sec:magnitude_run}
Gaussian mixture parameters (population weight, mean color, and scatter for RS and BC) are known to depend on magnitude at a fixed redshift.
Nearly 100\% of bright galaxies are red, while very few of the faintest galaxies are red \citep{Baldry+04}, so component weight runs strongly with magnitude.
The mean color of the RS is well-known to run with magnitude \citep{Kodama+96,Gladders+98}, with the slope modeled explicitly in the RS fitting of e.g. \citet{Hao+09} and \citet{Rykoff+14}.
The scatter of the RS and BC also runs non-linearly with magnitude \citep{Baldry+04,Balogh+04}, though this is less often modeled.
Therefore, {\bf a magnitude-ignorant fitting of the populations will have all parameters somewhat dependent on the limiting magnitude of the sample.}
The brighter the sample, the higher the $f_R$, the redder the mean RS color, and the smaller the RS scatter.
Though parameters do evolve with magnitude, how significantly does magnitude ignorance affect selection of the RS?
Using Buzzard, we quantify differences in RS selection between magnitude-cognizant and magnitude-ignorant models.
Appendix~\ref{apx:quan_mag_run} shows results of this analysis. In short, {\bf while magnitude running of GMM parameters is statistically significant, their running had a relatively minimal impact on selection.}
For thin redshift slices, selection of red sequence galaxies (where $P_{\rm RS}>0.5$) was 95\% identical between the standard redshift-running and the magnitude-running versions of Red Dragon.
Since this difference in selection is relatively small, we leave magnitude running out of the current version of Red Dragon in favor of prioritizing smooth redshift evolution.
For those who wish to explicitly account for magnitude running or other secondary parameters, several workarounds exist, as detailed in section~\ref{sec:sim_sel}.
\section{Results} \label{sec:results}
Here we show results of running Red Dragon on SDSS, TNG, and Buzzard datasets. Our SDSS+TNG analysis focuses on the accuracy of selecting the quenched population whereas our Buzzard analysis highlights fit parameter evolution with redshift.
\subsection{Sloan analysis}
We run Red Dragon on Sloan and TNG data using the four primary colors derived from SDSS $ugriz$ photometry, with equation~\eqref{eqn:primary_SDSS} as the primary color vector.
Here we present the accuracy with which Red Dragon identifies the quenched galaxy population at low (\S\ref{sec:SDSS_limits_of_hard_cuts}, including TNG) and intermediate (\S\ref{sec:SDSS_quenched_accuracy_redshift}) redshifts.
Our comparison to typical CM and CC selections follow methods from the literature.
`Typical' CM selection follows \citet{Hao+09}. After fitting the red sequence population with a Gaussian mixture in color space, we fit a line to the red sequence population (in the CM space of $g-r$ vs $m_i$), find its scatter, then select all galaxies within $2\sigma$ of the mean relation.
`Typical' CC selection follows \citet{Adhikari+20}. After finding population means via Gaussian mixtures, we draw a line between maxima (in the CC space of $g-r$ vs $r-i$), then plot a perpendicular line at the minimum likelihood point between the two components (i.e. where a galaxy is equally likely to belong to either component).
These two methods give benchmark comparisons for the standard efficiency of selection in CM and CC spaces, against which Red Dragon selection can be compared.
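A simplified sketch of the CM-style selection just described (the CC analogue is constructed similarly); the cited pipelines include further steps, so this is illustrative only:
\begin{verbatim}
import numpy as np

def cm_selection(gr, m_i, p_rs, n_sigma=2.0):
    # fit a line to the red-sequence population in (m_i, g-r), measure its
    # scatter, and keep galaxies within n_sigma of the mean relation;
    # p_rs comes from an initial GMM fit in colour space
    rs = p_rs > 0.5
    slope, intercept = np.polyfit(m_i[rs], gr[rs], 1)   # RS ridge line
    resid = gr - (slope * m_i + intercept)
    sigma = np.std(resid[rs])                           # RS scatter about the line
    return np.abs(resid) < n_sigma * sigma              # CM-selected red sequence
\end{verbatim}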
\subsubsection{Selection accuracy of the quenched population} \label{sec:SDSS_limits_of_hard_cuts}
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/combined}
\caption{
Balanced accuracy in selecting the quenched population of low-redshift ($z \doteq 0.1$) TNG and SDSS galaxies. Error bars generated from Poisson error estimates on each of the classification components.
``CM'' and ``CC'' methods draw hard cuts through color-magnitude and color-color spaces respectively, whereas the ``RD 2$K$'' and ``RD 3$K$'' methods use a Gaussian mixture model with $K=2$ and $K=3$ components respectively.
The three-component dragons have tighter selections on the RS, yielding a lower false positive rate but a higher false negative rate, which for TNG galaxies results in a slightly decreased balanced accuracy.
}
\label{fig:bACC_CM_CC_GM}
\end{figure}
To exemplify limitations of CM and CC hard-cut selections, we compare balanced accuracy in selecting the quenched population using hard cuts versus Gaussian mixtures for both the SDSS low-$z$ sample and the TNG sample, both at approximately $z = 0.1$.
Using methods from the literature, we make hard cut fits (labeled `typical') in CM and CC spaces.
Next we use sSFR values to optimize fits, drawing the hard cut lines which maximize balanced accuracies (labeled `sSFR'), giving a best-case scenario for hard cut selection methods.
Finally, we compare these fits to Gaussian mixture fitting of the populations, i.e. using a Red Dragon approach for selecting the red sequence (labeled `RD 2K/3K').
Figure~\ref{fig:bACC_CM_CC_GM} shows accuracies of these various selection types.
For SDSS, even optimized CM selection typically incurs $\gtrsim 10\%$ error (i.e. $\sim 10\%$ of the RS and BC contain star-forming or quenched galaxies respectively) while optimized CC selection typically incurs $\gtrsim 6\%$ error, showing as did Figure~\ref{fig:cf_Ncol} that two colors (CC space) work significantly better than one (CM space).
Hard cuts in CM and CC spaces select the quenched population more accurately in TNG than in SDSS, largely due to its more pronounced GV (see appendix~\ref{apx:SDSS_vs_TNG_hist}), with a typical error of only $\sim 5\%$ by any selection method.
Gaussian mixtures (without any optimization from sSFR truth) perform generally on par with optimized CM and CC fits but have higher selection accuracy than CM and CC fits similarly ignorant of sSFR.
Given a spectroscopic sample of galaxies, where sSFR values are known, Figure~\ref{fig:bACC_CM_CC_GM} shows that one could define hard cut selections of the RS which would have accuracies similar or superior to a GM selection of the RS. However, at redshifts where the RS \& BC are not well defined from spectroscopy, or at any redshift where sSFR values are unknown, a GM would give superior selection of the quenched population.
\subsubsection{Redshift continuity of quenched galaxy selection} \label{sec:SDSS_quenched_accuracy_redshift}
Here we investigate Red Dragon's accuracy in selecting the quenched population as a function of redshift. The SDSS mid-$z$ sample centers around the transition redshift of $z \sim 0.4$, where the 4000~\AA\ break moves from $g$~band into $r$~band.
\begin{figure}\centering
\includegraphics[width=\linewidth]{figs/SDSS_range.pdf}
\caption{
Balanced accuracy in selecting the quenched population of bright SDSS galaxies:
Red Dragon (RD; black) performs similarly or superior to hard cuts in single colors.
Bootstrap $\pm 1 \sigma$ error shown with transparencies; these increase with redshift chiefly due to decreasing number counts (rather than from increased intrinsic scatter).
Values localized by Gaussian kernel (width $\sigma_z = .02$).
}
\label{fig:SDSS_Z}
\end{figure}
Figure~\ref{fig:SDSS_Z} compares accuracy in selecting the quenched population between Red Dragon and two (redshift-evolving) choices of single-color cuts for defining the RS.
As the 4000~\AA\ break passes from $g$~band to $r$~band near $z=0.36$ (thus affecting the values of $g-r$ and $r-i$), the ability of $g-r$ to select the quenched population wanes while that of $r-i$ waxes, as expected.
If one's goal is to select the quenched population with the greatest fidelity using a single color, $z=0.38$ would then be the best redshift at which to transition from selecting the RS with $g-r$ to selecting it with $r-i$.
In comparison to these single-color selection methods, Red Dragon performs similarly to best-case single-band selection (within $3\sigma$), with vastly superior ($>6\sigma$) accuracy across $z=0.38$, the optimized transition redshift. We note here that the high-redshift side of the plot has significantly lower number counts, and lacks statistical power compared to the low-redshift side.
If using single-color selection of the RS, optimal transition redshifts between colors are somewhat subjective.
The initial calibration of the RS by RedMaPPer transitions from using $g-r$ to using $r-i$ at redshift $z=0.35$ (the 4000~\AA\ transition redshift; also the point below which the survey is volume limited and above which it is magnitude limited).
However, they found that the redshift for which single-color richnesses $\lambda_{g-r}$ and $\lambda_{r-i}$ equaled their multi-color richness $\lambda$ was at redshift $z \doteq 0.42$ (see their Figure~28), so a reliable $\lambda_{\rm col}$ definition would use $z \doteq 0.42$ as a transition redshift.
Neither $z=0.35$ nor $z=0.42$ match the single-color transition point highlighted by Figure~\ref{fig:SDSS_Z} of $z=0.38$ (the redshift at which trading off from $g-r$ to $r-i$ maintains the highest quenched population selection accuracy).
Since these redshifts each have sound reasoning for their use in defining a single-color RS selection, no universal transition redshift stands out.
This leaves single-color RS selection transitions as messy at best, favoring the objectivity of multi-color analyses.
Red Dragon preserves accuracy in selecting the RS across redshift transitions while maintaining a continuous red fraction (by construction).
This evades the discontinuities inherent in swapping bands while continuously selecting RS galaxies with high fidelity.
\subsection{Buzzard Flock analysis} \label{sec:Buzzard}
Extending our analysis to a wider redshift range, we turn to the synthetic galaxy catalogs of the Buzzard Flock with the three primary colors of equation~\eqref{eqn:primarycolor}.
After highlighting how fits interpolate across redshift (\S\ref{sec:fit_interp}), we discuss how the RS definition varies as component count increases from two to four (\S\ref{sec:RS_by_K}).
\subsubsection*{}
The Buzzard universe is a statistical replica of a deep-wide galaxy survey built from galaxy color distributions measured as a function of local cosmic overdensity \citep{Hogg_SDSS_2004}.
The ADDGALS method is trained empirically at low redshifts and extrapolated to high redshifts using a spectral energy distribution template approach \citep[for details, see][]{Wechsler+21}. While the method reproduces well the counts and two-point clustering statistics of galaxies \citep{DeRose+21}, behaviors of the Buzzard universe at high redshifts are less rooted in observation than those at low redshift.
With a sample size of 94M galaxies (see table~\ref{tab:datasets}), we are able to extract precise estimates of all model parameters. However, the statistical errors shown below are lower limits, in that systematic variations caused by a different galaxy catalog construction method (see e.g. MICE \citep{Carretero+15}, cosmoDC2 \citep[populated by the GalSampler algorithm;][]{Hearin+20}, etc.) remain to be investigated.
For RS mean and scatter, we compare to \citet{Hao+09} (SDSS catalogue) and \citet{Rykoff+14} (DES catalogue).
\citet{Hao+09} fit SDSS data using an error-corrected GMM in $g-r$. With that selection of blue and red galaxies, they measured mean colors and scatters as functions of $i$~band magnitude and redshift.
\citet{Rykoff+14} fit data using a multivariate error-corrected GM in the primary colors of $griz$ (see equation~\ref{eqn:primarycolor}). Their algorithm iteratively selects the RS, measuring the slope of its color as a function of $z$~band magnitude, giving a fit that is continuous across redshift, much like Red Dragon.
This method was then applied to DES~Y3 data to provide an observed RS fit (E.~Rykoff, private comm.).
Both methods only fit the RS, so no information on weight nor any fits for the BC are available for comparison to Red Dragon.
\subsubsection{Redshift Evolution of Two-Color GMM Components} \label{sec:fit_interp}
This section details a fitting of the evolving GMM parameters across redshift for Buzzard photometry.
The galaxy sample is magnitude-limited using the redshift-evolving cut of $0.2 \, L_{*,i}(z)$ from \citet{Rykoff+14} within the redshift range $0.05 < z < 0.84$. The galaxies are divided into narrow cosmological redshift bins of width $0.025$, resulting in counts per redshift bin of 60k to 7.5M galaxies.
Red Dragon is run on 50 bootstrapped samples of size $10^4$ (undersampling to ease the computational burden); the resulting median parameters $\theta(z_i)$ and $\pm1\sigma$ quantile ranges for each bin are shown in the figures below.
The discrete redshift parameters are then interpolated using KLLR with a Gaussian kernel of width $\sigma_z = 0.02$, shown as lines in the figures below.
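A schematic version of this bootstrap-and-smooth procedure (our own simplified stand-in: the Gaussian-kernel average below replaces the locally linear KLLR fit, and all names are hypothetical):
\begin{verbatim}
import numpy as np

def bootstrap_bin_fit(colors_in_bin, fit_func, n_boot=50, n_sub=10_000, seed=0):
    # median and +/- 1 sigma quantiles of one GMM parameter over bootstrap
    # resamples of n_sub galaxies from a single redshift bin;
    # fit_func maps a colour array to a scalar parameter (e.g. RS weight)
    rng = np.random.default_rng(seed)
    vals = [fit_func(colors_in_bin[rng.integers(len(colors_in_bin), size=n_sub)])
            for _ in range(n_boot)]
    return np.percentile(vals, [16, 50, 84])

def kernel_smooth(z, z_bins, theta_bins, sigma_z=0.02):
    # Gaussian-kernel localized average of per-bin parameters theta(z_i)
    w = np.exp(-0.5 * ((z - z_bins) / sigma_z) ** 2)
    return np.sum(w * theta_bins) / np.sum(w)
\end{verbatim}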
\iffalse
\begin{table}\centering
\begin{tabular}{r|l}
bin width & $dz \sim .025$ \\
bin extent & $z|[.05,.84]$ \\
$N_{\rm bootstrap}$ realizations & $50$ \\
luminosity cut & R14 (see \S2.1) \\
$\min(N_{\rm gal})$ & $59 661$ \\
$\max(N_{\rm gal})$ & $7 554 935$
\end{tabular}
\caption{
\wm{could have as table something like this; not sure how important each element is}
information regarding the bootstrapping for Buzzard fits
}
\label{tab:bootstrap}
\end{table}
\fi
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/Bz_boot_w}
\caption{
Component weight for RS (red) and BC (blue) as a function of redshift for the Buzzard flock.
Points show parameter fits from individual redshift bins with bootstrap errors shown as error bars.
Fit line interpolation smoothed with KLLR (Gaussian kernel width $\sigma_z = 0.02$) with uncertainty in fit shown as transparencies about the line.
Vertical grey lines indicate transition redshifts of the 4k\AA\ break from
Table~\ref{tab:4kA_break}.
}
\label{fig:Bz_w}
\end{figure}
\paragraph*{Component Weights.}
A variety of deep observations of the real universe indicate that star formation rates per unit baryon mass were much higher in the past \citep{Madau_1996, Connolly_1997, Madau_Dickinson_2014}.
Astrophysically speaking, while the Butcher-Oemler effect of reddening over time \citep{Butcher_Oemler_1978} applies primarily to galaxy clusters, the entire population of galaxies ages and tends to redden as a whole.
Quenching is nearly a one-way process for galaxies \citep[many models ignore the reverse direction entirely, e.g.][]{delaBella+21}, implying that the only way to decrease red fraction over time is to create new blue galaxies.
Since the peak of cosmic noon was at $z \sim 1.9$ \citep{Madau_Dickinson_2014}, galaxy samples below this redshift should redden over time.
We therefore expect Red Dragon, when applied to Buzzard, to extract an RS weight that declines with increasing redshift (i.e. the population becomes bluer with increasing redshift).
In good agreement with this expectation, Figure~\ref{fig:Bz_w} shows that the RS weight consistently decreases with redshift, ranging from roughly 70\% at redshift $z=0$ down to 25\% at the highest redshift of $0.84$.
Since the red fraction is highly luminosity dependent (as discussed in appendix~\ref{apx:quan_mag_run}),
one should remember that the weight reported here represents a weighted average over all galaxies above $0.2 L_\ast(z)$, which is dominated by magnitudes near the cutoff. Choosing a brighter magnitude cutoff would uniformly raise the RS weights, and vice-versa.
The bootstrap uncertainties are typically quite small, but there is an increase near $z = 0.6$ (seen somewhat in Figure~\ref{fig:Bz_mu} and especially in the correlations of Figure~\ref{fig:scat_corr}). Here, rather than giving slight variations around a single fit as at earlier redshifts, \texttt{pyGMMis} alternates between two distinct fits: one with wider scatter (and low correlation) and one with narrower scatter (and high correlation), each with a different RS weight. These two modes appear in very few of the bootstrap realizations at lower redshifts, but near $z = 0.6$ they make up roughly half of the fits. Since bootstrap resampling yields two discrete modes, the overall uncertainty is relatively large compared to single-mode redshift bin fits.
\paragraph*{Mean Colors.}
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/Bz_boot_mu}
\caption{
Points show mean colors for each GMM component (RS \& BC) for Buzzard galaxies, using the same coloring as Figure~\ref{fig:Bz_w}. RS mean measurements from \citet{Hao+09} (SDSS, $g-r$ only) and \citet{Rykoff+14} (DES~Y3, all colors) are shown for comparison.
Due to their dependence on magnitude, we present ranges from $0.2 \, L_*$ (the magnitude limit of Buzzard; lower edge of transparencies) up to $L_*$ (upper edge of transparencies).
}
\label{fig:Bz_mu}
\end{figure}
Figure~\ref{fig:Bz_mu} shows the redshift evolution of the mean colors of the two components. The three panels each show measured BC and RS means in different colors with comparisons to observations in shades of grey (\citet{Hao+09} only fit $g-r$ at low redshift). Since these observations were magnitude dependent (whereas the Red Dragon fitting of Buzzard is magnitude independent), mean colors between $L_*$ (upper bound of transparency) and at $0.2 \, L_*$ (limiting luminosity of Buzzard galaxies; lower bound of transparency) are shown for comparison.
Because over two-thirds of Buzzard galaxies fall between these two limits, we expect our mean color fit to lie between these bounds as well. However, the Buzzard photometry differs from the photometry underlying the comparison observations, so the fits will differ somewhat.
Note that each color has different vertical scaling: $\langle g-r \rangle$ spans the largest range while $\langle i-z \rangle$ spans the smallest range, exaggerating its features.
At transition redshifts (see Table~\ref{tab:4kA_break}), the slope of mean color with respect to redshift changes rapidly,
necessitating narrow redshift analysis bins and careful fitting.
Mean colors for both RS \& BC follow a general shape of rising as the 4000~\AA\ break enters the color's minuend, then falling as the break enters the color's subtrahend, as expected.
Compared to observed mean colors, Buzzard shows a bluer red sequence for $z \lesssim 0.45$ in $\langle g-r \rangle$, albeit by a small margin. Comparing to the \citet{Rykoff+14} method, deviations from observations are consistently $<0.034~{\rm mag}$ (only $0.005~{\rm mag}$ deviant on average).
For $z \gtrsim 0.45$, colors vary more with magnitude (steeper RS slope in CM space), such that Buzzard mean colors land between the $0.2 \, L_*$ and $L_*$ observations.
If the mean spectra of RS and BC galaxies had no time evolution, then the same general shape of each curve in figure~\ref{fig:Bz_mu} should appear for each color, shifted by roughly $\Delta z = .4$ (and vertically scaled somewhat for differences between bands).
We see this to some extent, e.g. with the rising of the RS $\langle g-r \rangle$ in $z|[0,0.4]$ mirroring the rising of the RS $\langle r-i \rangle$ in $z|[.4,.76]$.
Similarly, the BC bump in $\langle i-z \rangle$ at $z \sim 0.5$ mirrors the $z \sim 0.1$ BC bump in $\langle r-i \rangle$ (the vertical scaling difference between the axes belies their similarity).
As our analyses extend to higher redshifts in future work, we will either find similar universal features or uncover time variation in the RS and BC populations.
\paragraph*{Color Covariance.}
\iffalse
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/Bz_R14_2K_Sigma.pdf}
\caption{
Redshift evolution of lower triangle of covariance matrix between colors (labels show which colors correspond to columns and rows; vertical direction indicates magnitude of covariance element).
Coloring of components and interpolation as detailed in Figure~\ref{fig:Bz_w}.
}
\label{fig:Bz_Sigma}
\end{figure}
\fi
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/Bz_boot_scatt}
\includegraphics [width=\linewidth] {figs/Bz_boot_corr}
\caption{
Intrinsic color scatter (upper panel) and correlations between colors (lower panel).
Green transparencies give one and two-sigma quantile distributions of photometric color errors in Buzzard.
Coloring of components and interpolation as detailed in Figure~\ref{fig:Bz_w}; comparison to observations as in Figure~\ref{fig:Bz_mu}.
%
}
\label{fig:scat_corr}
\end{figure}
The top row of Figure~\ref{fig:scat_corr} shows the redshift evolution of the intrinsic scatter in each color for both GMM components. Also shown are the fits from observations in grey (as before, but without magnitude dependence), as well as $\pm1\sigma$ and $\pm2\sigma$ quantiles of the photometric color errors, shown as green transparencies.
The bottom row of Figure~\ref{fig:scat_corr} shows intrinsic correlations between colors inferred from the covariance matrix, giving $\rho(g-r, \, r-i)$, $\rho(g-r, \, i-z)$, and $\rho(r-i, \, i-z)$ from left to right.
In accordance with expectations, at redshifts where the color's minuend contains the 4000~\AA\ break, the RS has significantly lower scatter than the BC (by a factor of $\sim 2$), indicating a tighter population.
Due to an increase of photometric errors at higher redshifts (as shown by the green transparencies), it's difficult to accurately constrain population scatters. Beyond redshift $z \sim 0.4$, the median photometric error exceeds the intrinsic RS scatter, making it difficult to measure.
Compared to observed RS scatters, Buzzard scatters are generally wider by a factor of 1.5 (consistently within a factor of three above or below).
Running of the RS mean color with magnitude means that a magnitude-ignorant fitting of the population scatter would find larger values than a magnitude-cognizant model, so Buzzard's generally wider scatters are expected.
Since \citet{Rykoff+14} only fit the RS, a Red Dragon fitting of the RS (which also accounts for and fits the BC) is not guaranteed to identify the same exact RS. This will be further investigated in our papers to come, analyzing DES data.
\paragraph*{}
While measures of the RS scatter in various colors are available in the literature \citep{Hao+09,Rykoff+14}, to our knowledge there is no published work on measuring the intrinsic scatter of the BC nor covariance among primary (or otherwise) colors for RS and BC galaxies.
If this is correct, the BC scatter and the full color covariance as functions of redshift constitute empirically unexplored territory.
Our Buzzard measurements then establish a first estimate of these quantities, albeit one more likely to reflect the true galaxy population at low redshift than at high redshift.
Though intrinsic color correlations within each of the RS and BC are expected to be $\gtrsim 90\%$ \citep[][]{Rykoff+14},
the GMM fitting of Buzzard usually has lower (and occasionally even has negative) correlations between colors.
Since the simulated photometric errors in Buzzard were of similar order to the RS scatter for $z \gtrsim 0.4$ (see the green transparencies in the upper panel of Figure~\ref{fig:scat_corr}), these correlations reflect the photometric errors more than the intrinsic correlations within the RS. In comparison, the BC is wider than the photometric error at all but the highest redshifts, and accordingly we see high correlations, around 80\% to 90\%, across redshift.
These fits show that Red Dragon can map out the RS and BC to high redshift (spanning across multiple transition redshifts),
continuously parameterizing important aspects of each population.
As future studies create more complete samples of galaxies to higher redshifts, Red Dragon will be able to detail population characteristics smoothly across even wider redshift ranges.
\subsubsection{RS robustness to component count} \label{sec:RS_by_K}
Here we investigate how Red Dragon identifies the RS in models with more than two components.
Upon adding components to a Gaussian mixture model, one generally expects (1) a reduction in the weight of each component, (2) a reduction in scatter, and (3) a shift of the means as the new component displaces the old.
Figure~\ref{fig:Bz_Kcf} compares red fraction, RS mean $g-r$ color, and RS $g-r$ scatter for varying component counts $K = \{2, 3, 4\}$ in the Buzzard flock.
We find that while the RS has a relatively consistent mean and scatter on adding components, the RS is subdivided between components at low redshifts, resulting in a drastically different weight for the reddest component.
\begin{figure*}\centering
\includegraphics [width=\linewidth] {figs/rd_cf_NK}
\caption{
RS fit variation with component count $K$, shown for photometric color ($g-r$).
{\bf Left}: Red fraction decreases significantly with the addition of more components below $z=0.4$, but for higher redshifts, weight varies less than $5\%$ (typically by $\sim 1\%$).
{\bf Middle}: Mean RS color varies by less than $0.05~{\rm mag}$ on adding a component (typically by $\lesssim 0.01~{\rm mag}$).
{\bf Right}: Scatter in the RS decreases with the sub-division of additional components, on average by $<4\%$, consistent to less than a factor of $1.5$ across all redshifts.
}
\label{fig:Bz_Kcf}
\end{figure*}
We find good consistency in red fraction (leftmost plot of Figure~\ref{fig:Bz_Kcf}) for $z > 0.4$, with only minor reduction in weight with added components (typically order 1\%, consistently $<5\%$).
For $z < 0.4$, we see a severe reduction in red fraction of the reddest components for $K=3$ and $K=4$, due to the RS being subdivided into multiple components.
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/Zp2_Bz_NK}
\caption{
Red Dragon population characterization for Buzzard galaxies in the thin redshift slice $z=0.2 \pm 0.005$ for different component counts. {\bf Left:} $K=2$, {\bf Middle:} $K=3$, {\bf Right:} $K=4$. With four components, the RS and BC are further subdivided into bluer and redder components.
}
\label{fig:Zp2_Bz_NK}
\end{figure}
At a single redshift slice ($z \approx 0.2$), Figure~\ref{fig:Zp2_Bz_NK} shows this sub-division of the RS for each component count. On the same axes of $g-r$ vs $r-i$, the galaxies in this thin slice are colored by component classification for $K=2$ to $K=4$ component mixtures. Adding components primarily splits populations into bluer and redder sub-populations, but the fraction of star-forming or quiescent galaxies belonging to each population varies strongly with $K$.
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/Bzz_3K_mu}
\caption{
Mean color values for all three components of a $K=3$ fitting of Buzzard photometry. The red lines consistently track a quiescent population and the blue lines consistently track the star-forming population. The green component moves from clearly matching the star-forming population at high redshifts to clearly matching the quiescent population at low redshifts.
}
\label{fig:Bzz_3K_mu}
\end{figure}
Figure~\ref{fig:Bzz_3K_mu} illustrates this sub-division in mean colors $\mu_\alpha(Z)$ for the $K=3$ model across redshift, similar to Figure~\ref{fig:Bz_mu}: the middle component (green line) matches the BC at high redshifts, but matches the RS at low redshifts.
Though this could be characterizing galaxy evolution from BC to RS, it may be a statistical artifact, where the RS is more non-Gaussian at low redshifts than at high redshifts, as compared to the BC. Without data on sSFR and galaxy evolution in Buzzard, it's difficult to say.
If using $K>2$, one must be careful in labeling the RS, since the quiescent population may be split between components.
Besides weight, we find excellent consistency in the other GM parameters. Mean color changed on average by $\lesssim 0.01~{\rm mag}$ on adding a new component (consistently less than $0.05~{\rm mag}$; see Figure~\ref{fig:Bz_Kcf}, center plot).
Scatter decreased on average by $<5\%$ with each added component (consistently $<33\%$; see Figure~\ref{fig:Bz_Kcf}, rightmost plot).
Correlations varied by less than 12\% on average.
Though the populations are consistently characterized, single components may not consistently correspond to the same population; middle components may at one redshift characterize the quiescent population but at another characterize the star-forming population.
As discussed earlier (see \S\ref{sec:elment_count}), the optimal count of Gaussian components to use in modeling depends on the dataset; increased non-Gaussianity requires more nuanced modeling. Despite non-Gaussianities in our datasets, the two-component model still aptly characterizes the galaxy population. Adding extra components must be done with care due to complexities in modeling and interpretation.
\iffalse
\wm{[this following \P\ was for older versions of RD, but could still be relatively true; is it worth recalculating the particular values? I'm not sure...]}
Using the $K=2$ RS as a reference `truth' for balanced accuracies, the three and four component models have ${\rm bACC} \sim 98\%~{\rm and}~96\%$ respectively.
The scatter of the difference in their $P_{\rm red}$ values were $\sim 4\%$ and $\sim 9\%$ off zero respectively.
This shows fair agreement between component counts, especially between two and three components.
\fi
\iffalse
\subsubsection{Selection of redMaGiC galaxies}
The algorithm redMaGiC selects luminous red galaxies, aiming to minimize photo-$z$ uncertainties \citep{Rozo+16}. This yielded a sample of RS galaxies with high redshift accuracy (median error $z_{\rm spec} - z_{\rm photo} = 0.005$ and scatter $\sigma_z/(1+z) = 0.017$).
We investigated Red Dragon selection of redMaGiC-selected galaxies to see whether a RS defined by Red Dragon had high fidelity to the RS selected by redMaGiC.
Using the DES wide, high density redMaGiC sample\footnote{redMaGiC Gold 2.2.1 sof color v0.5.1 Y3A2 + Deep MOF High Density, 0.5L*, 1.0e-3}, we found that a Buzzard-trained Red Dragon generally selects $\gtrsim 95\%$ of those as RS members at a given redshift slice.
This high sensitivity gives credence to Red Dragon's ability to identify RS galaxies (simultaneously showing that Buzzard's RS closely matches observations).
The fact that Red Dragon doesn't select 100\% of redMaGiC galaxies as RS members reflects a greater specificity given by Red Dragon's modeling of components beyond the RS.
\fi
\iffalse
\subsubsection{Fitness as mass proxy} \label{sec:proxy}
\begin{figure}\centering
\includegraphics [width=\linewidth] {figs/cf__sigrel_mu2}
\caption{
Relative scatter in log richness as a function of mass for different richness definitions. Orange uses richnesses calculated by Red Dragon $\lambda_{\rm RD}$ while blue uses richnesses calculated by ``GMM+'', a probabilistic richness definition in CM space.
Lower values indicate superior fitness as a mass proxy, therefore at higher masses ($M \gtrsim 10^{14.2}~{\rm M}_{\odot}$), a Red Dragon defined richness $\lambda_{\rm RD}$ may serve as a superior mass proxy, with some 10\% increase in constraining power as compared to the more traditional Gaussian Mixture Model.
}
\label{fig:sigrel}
\end{figure}
First results from the Buzzard flock simulation set show that Red Dragon modeling of the red sequence results in a lower scatter in richness at fixed mass (for massive halos, $M \gtrsim 10^{14.2}~{\rm M}_{\odot}$; see Figure~\ref{fig:sigrel}) than more traditional color--magnitude cuts. The $\sim 10\%$ reduction in scatter suggests that a Red Dragon richness $\lambda_{\rm RD}$ could serve as a better mass proxy for cluster cosmology than a more traditional CM-space richness.
Poisson noise from low galaxy counts at low halo masses and Poisson noise from low cluster counts at high halo masses results in low significance at low ($M \lesssim 10^{14}~{\rm M}_{\odot}$) and high ($M \gtrsim 10^{15}~{\rm M}_{\odot}$) halo masses respectively, but further studies can clarify and solidify extremal trends.
\fi
\section{Conclusions} \label{sec:conclude}
We present Red Dragon, a new method for galaxy population characterization, which evolves Gaussian mixture models in the space of broad-band optical colors across redshift. With the red sequence of quiescent galaxies as a target population in both observed and simulated galaxy samples, we
demonstrate the method's ability to identify the quenched population with similar accuracy to previous approaches but with smoother continuity across redshift (in addition to characterizing the BC).
Jumping from using one color alone to another as an RS selector (e.g.\ the transition from $g-r$ to $r-i$ near $z=0.4$) gives an inherent discontinuity in red fraction $f_R(z)$ or in accuracy of selecting the quenched population. Since Red Dragon interpolates Gaussian mixture parameters across redshifts in multi-color space, the resulting characterization of galaxy photometry yields a continuous red sequence definition (thereby resulting in a continuous red fraction).
If metrics such as richness and red fraction are to be broadly interpretable across wide redshift spans, we must move beyond discontinuous single-color selection of the RS.
By construction, Red Dragon also offers a new way to explore RS, GV, and BC photometric behavior through its explicit fits to population weight, mean color, scatter, and correlation between colors. This fitting allows investigations into the photometric sub-populations of galaxies, such as fraction of transitioning galaxies as a function of redshift, or the evolution of mean color and scatter of blue cloud galaxies.
Fitting the population with Gaussian mixtures selects quenched galaxies with accuracy similar or superior to optimized color–magnitude (CM) or color–color (CC) selections (see figures~\ref{fig:cf_Ncol} and~\ref{fig:bACC_CM_CC_GM}).
Simple CM and CC selections lack the information carried by the other colors (corresponding to properties such as dust, age, and metallicity; see section~\ref{sec:beyond}), information which would help disentangle degeneracies between the red sequence and blue cloud in photometric space.
Though an optimized cut in CC space can select the quenched population with similar accuracy to that of a GMM, such a selection only works well for a limited redshift span. To preserve accuracy across larger redshift spans, interpolating Gaussian mixtures in multi-color space serves as a straightforward and natural way to extend the RS to higher redshifts.
We note here that in addition to extending deeper into the infrared (towards the $Y$, $J$, $H$ filters), shorter-wavelength (higher-energy) bands could also be added. Even X-ray data show differences between the RS and BC \citep{Comparat+22}, so pan-chromatic analyses of populations would certainly improve characterization of the RS and BC, better distinguishing the two populations.
A continuous RS definition across wide redshift spans will be critical as future galaxy surveys push to greater depths and higher redshifts.
The RS has already been detected beyond $z=2$, with evidence for a quenched population out to $z<2.5$
\citep{Kriek+08,Williams+09,Gobat+11,GM+21}.
Future telescopes such as \emph{Euclid}\footnote{
Planned launch date: Q1 2023; its \href{https://www.euclid-ec.org/?page_id=2490} {Y, J, H filters} span 900 to 2000~nm.
} and the \emph{Nancy Grace Roman Space Telescope}\footnote{
Planned launch date: by May 2027; its \href{https://roman.gsfc.nasa.gov/science/WFI_technical.html}{six filters} span 480 to 2300~nm.
} will provide NIR-band observations of galaxies out to high redshifts, measuring the 4000~\AA\ break for high-$z$ galaxies.
Modeling of the RS continuously beyond $z=1$ (and eventually $z=2$) will be critical moving forward, allowing for meaningful interpretation of measures such as red fraction or richness.
\section*{Acknowledgements}
The authors thank the Leinweber Center for Theoretical Physics (LCTP) at the University of Michigan for the generous graduate fellowship which enabled this research project's completion.
WKB thanks Johnny Esteves, Eric Bell, Eli Rykoff, Oleg Gnedin, and Peter Melchior for their insights and assistance in this project. WKB also thanks his wife Eden and his son Fletcher, by whom his days are vastly brightened and to whom he owes his life.
This work was made possible by the generous open-source software of \href{https://numpy.org/doc/stable/} {\sc NumPy} \citep{numpy}, \href{https://matplotlib.org/} {\sc Matplotlib} \citep{pyplot}, \href{https://docs.h5py.org/en/stable/} {h5py} \citep{h5py}, \href{https://scikit-learn.org/stable/} {\sc Sci-kit Learn} \citep{sklearn}, and \href{https://github.com/afarahi/kllr/tree/master/kllr} {KLLR} \citep{Farahi+18,Anbajagane+20}.
\section*{Data Availability}
The current iteration of Red Dragon is available on Bitbucket at \href{https://bitbucket.org/wkblack/red-dragon-gamma/src/master/}{wkblack/red-dragon-gamma}.
Methods used in this analysis are also available at \href{https://bitbucket.org/wkblack/red-dragon/src/master/}{wkblack/red-dragon} on Bitbucket.
Data from \href{http://skyserver.sdss.org/dr16/en/tools/search/sql.aspx}{SDSS} and \href{https://www.tng-project.org/data/downloads/TNG300-1/}{TNG} are publicly available on their respective servers.
\bibliographystyle{mnras}
\typeout{}
\section{Introduction}
The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX.
The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.
This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.
This is not a general guide on how to use \LaTeX, of which many excellent examples already exist.
We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.
Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.
For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}.
Only technical issues with the \LaTeX\ class are considered here.
\section{Obtaining and installing the MNRAS package}
Some \LaTeX\ distributions come with the MNRAS package by default.
If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN).
The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper.
To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file:
\begin{verbatim}
\documentclass{mnras}
\end{verbatim}
Then compile \LaTeX\ (and if necessary \bibtex) in the usual way.
\section{Preparing and submitting a paper}
We recommend that you start with a copy of the \texttt{mnras\_template.tex} file.
Rename the file, update the information on the title page, and then work on the text of your paper.
Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$.
Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).
If a paper is accepted, it is professionally typeset and copyedited by the publishers.
It is therefore likely that minor changes to presentation will occur.
For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.
Papers must be submitted electronically via the online submission system; paper submissions are not permitted.
For full guidance on how to submit a paper, see the instructions to authors.
\section{Class options}
\label{sec:options}
There are several options which can be added to the document class line like this:
\begin{verbatim}
\documentclass[option1,option2]{mnras}
\end{verbatim}
The available options are:
\begin{itemize}
\item \verb'letters' -- used for papers in the journal's Letters section.
\item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations.
\item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.
\item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.
\item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns.
\item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper.
\item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations.
\item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images.
\item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the \verb'amsmath' or \verb'amsymb' packages (see section~\ref{sec:packages}).
\item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables.
\end{itemize}
Some of these options are deprecated and retained for backwards compatibility only.
Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.
If you want to include any other packages, see section~\ref{sec:packages}.
\section{Title page}
If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present.
Simply edit the title, author list, institutions, abstract and keywords as described below.
\subsection{Title}
There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').
Enter them with \verb'\title[]{}' like this:
\begin{verbatim}
\title[Running head]{Full title of the paper}
\end{verbatim}
The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Authors and institutions}
Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command.
If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.
For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$
A. N. Other,$^{2}$
and Third Author$^{2,3}$
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2\\
$^{3}$Affiliation 3}
\end{verbatim}
Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.
Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote.
If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times.
Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command.
\subsection{Abstract and keywords}
The abstract is entered in an \verb'abstract' environment:
\begin{verbatim}
\begin{abstract}
The abstract of the paper.
\end{abstract}
\end{verbatim}
\noindent Note that there is a word limit on the length of abstracts.
For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$.
Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment:
\begin{verbatim}
\begin{keywords}
keyword 1 -- keyword 2 -- keyword 3
\end{keywords}
\end{verbatim}
\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.
Do \emph{not} make up new keywords!
For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$.
\section{Sections and lists}
Sections and lists are generally the same as in the standard \LaTeX\ classes.
\subsection{Sections}
\label{sec:sections}
Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels:
\begin{verbatim}
\section{Main section}
\subsection{Subsection}
\subsubsection{Subsubsection}
\paragraph{Lowest level section}
\end{verbatim}
\noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used.
Some sections are not numbered as part of journal style (e.g. the Acknowledgements).
To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Lists}
Two forms of lists can be used in MNRAS -- numbered and unnumbered.
For a numbered list, use the \verb'enumerate' environment:
\begin{verbatim}
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
\end{verbatim}
\noindent which produces
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals.
For an unnumbered list, use the \verb'description' environment without the optional argument:
\begin{verbatim}
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
\end{verbatim}
\noindent which produces
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.
\section{Mathematics and symbols}
The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here.
See also section~\ref{sec:packages} for packages that support more advanced mathematics.
Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$.
Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.
\subsection{Equations}
Equations should be entered using the \verb'equation' environment, which automatically numbers them:
\begin{verbatim}
\begin{equation}
a^2=b^2+c^2
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
a^2=b^2+c^2
\end{equation}
By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble.
It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.
\subsection{Special symbols}
\begin{table}
\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}
\label{tab:anysymbols}
\begin{tabular*}{\columnwidth}{@{}l@{\hspace*{50pt}}l@{\hspace*{50pt}}l@{}}
\hline
Command & Output & Meaning\\
\hline
\verb'\sun' & \sun & Sun, solar\\[2pt]
\verb'\earth' & \earth & Earth, terrestrial\\[2pt]
\verb'\micron' & \micron & microns\\[2pt]
\verb'\degr' & \degr & degrees\\[2pt]
\verb'\arcmin' & \arcmin & arcminutes\\[2pt]
\verb'\arcsec' & \arcsec & arcseconds\\[2pt]
\verb'\fdg' & \fdg & fraction of a degree\\[2pt]
\verb'\farcm' & \farcm & fraction of an arcminute\\[2pt]
\verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt]
\verb'\fd' & \fd & fraction of a day\\[2pt]
\verb'\fh' & \fh & fraction of an hour\\[2pt]
\verb'\fm' & \fm & fraction of a minute\\[2pt]
\verb'\fs' & \fs & fraction of a second\\[2pt]
\verb'\fp' & \fp & fraction of a period\\[2pt]
\verb'\diameter' & \diameter & diameter\\[2pt]
\verb'\sq' & \sq & square, Q.E.D.\\[2pt]
\hline
\end{tabular*}
\end{table}
\begin{table}
\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}
\label{tab:mathssymbols}
\begin{tabular*}{\columnwidth}{l@{\hspace*{40pt}}l@{\hspace*{40pt}}l}
\hline
Command & Output & Meaning\\
\hline
\verb'\upi' & $\upi$ & upright pi\\[2pt]
\verb'\umu' & $\umu$ & upright mu\\[2pt]
\verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt]
\verb'\lid' & $\lid$ & less than or equal to\\[2pt]
\verb'\gid' & $\gid$ & greater than or equal to\\[2pt]
\verb'\la' & $\la$ & less than of order\\[2pt]
\verb'\ga' & $\ga$ & greater than of order\\[2pt]
\verb'\loa' & $\loa$ & less than approximately\\[2pt]
\verb'\goa' & $\goa$ & greater than approximately\\[2pt]
\verb'\cor' & $\cor$ & corresponds to\\[2pt]
\verb'\sol' & $\sol$ & similar to or less than\\[2pt]
\verb'\sog' & $\sog$ & similar to or greater than\\[2pt]
\verb'\lse' & $\lse$ & less than or homotopic to \\[2pt]
\verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt]
\verb'\getsto' & $\getsto$ & from over to\\[2pt]
\verb'\grole' & $\grole$ & greater over less\\[2pt]
\verb'\leogr' & $\leogr$ & less over greater\\
\hline
\end{tabular*}
\end{table}
Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.
Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}.
Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.
To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'.
For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'.
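For illustration only (this snippet is our own example, not taken from the class documentation), these commands can be combined as follows:
\begin{verbatim}
$\bmath{x}$ is a bold variable, $\mathbfit{v}$ is a vector,
$\mathbfss{M}$ is a matrix, and $\nabla\phi$ is a gradient.
\end{verbatim}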
\subsection{Ions}
A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.
For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}.
\section{Figures and tables}
\label{sec:fig_table}
Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX.
\subsection{Basic examples}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code:
\begin{verbatim}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
\end{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
The example Table~\ref{tab:example} was generated using the code:
\begin{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
\subsection{Captions and placement}
Captions go \emph{above} tables but \emph{below} figures, as in the examples above.
The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled.
Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.
Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.
By default a figure or table will occupy one column of the page.
To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment.
If a figure or table is too long to fit on a single page it can be split into several parts.
Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'.
This will automatically correct the numbering and add `\emph{continued}' at the start of the caption.
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:continued} was generated using the code:
\begin{verbatim}
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment.
The landscape Table~\ref{tab:landscape} was produced using the code:
\begin{verbatim}
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & ...\\
Unit & Unit & ...\\
\hline
Data & Data & ...\\
Data & Data & ...\\
...\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\end{verbatim}
Unfortunately this method will force a page break before the table appears.
More complicated solutions are possible, but authors shouldn't worry about this.
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\
Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\
\hline
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\section{References and citations}
\subsection{Cross-referencing}
The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper.
We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.
This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).
It is best to give each section, figure and table a logical label.
For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'.
Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}.
Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'.
The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing.
It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.
For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}.
\subsection{Citations}
\label{sec:cite}
MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}.
This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers.
Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations.
There are two basic \verb'natbib' commands:
\begin{description}
\item \verb'\citet{key}' produces an in-text citation: \citet{author2013}
\item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013}
\end{description}
Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler.
\defcitealias{smith2014}{Paper~I}
\begin{table*}
\caption{Common citation commands, provided by the \texttt{natbib} package.}
\label{tab:natbib}
\begin{tabular}{lll}
\hline
Command & Output & Note\\
\hline
\verb'\citet{key}' & \citet{smith2014} & \\
\verb'\citep{key}' & \citep{smith2014} & \\
\verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\
\verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\
\verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\
\verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\
\verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\
\verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\
\verb'\citetalias{key}' & \citetalias{smith2014} & \\
\verb'\citepalias{key}' & \citepalias{smith2014} & \\
\hline
\end{tabular}
\end{table*}
There are a number of other \verb'natbib' commands which can be used for more complicated citations.
The most commonly used ones are listed in Table~\ref{tab:natbib}.
For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}.
If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors.
\subsection{The list of references}
\label{sec:ref_list}
It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead.
\bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.
An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package.
The rest of this section will assume you are using \bibtex.
References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting.
This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier.
We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems.
\bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.
Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}.
Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.
Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key.
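For illustration, a minimal entry (with an invented key and invented field values) might look like:
\begin{verbatim}
@ARTICLE{example2016,
  author  = {{Green}, A. A. and {Brown}, B. B.},
  title   = {An example article},
  journal = {MNRAS},
  year    = 2016,
  volume  = 450,
  pages   = {1--10}
}
\end{verbatim}
\noindent which would then be cited as \verb'\citet{example2016}'.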
Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command.
Consult the documentation for your compiler or \LaTeX\ distribution.
Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file.
Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.
If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references.
\section{Appendices and online material}
To start an appendix, simply place the \verb'
|
\section{Introduction}
The Next-to-Minimal Supersymmetric Standard~Model (NMSSM) \cite{nmssm}
solves in a natural and elegant way the so-called $\mu$-problem
\cite{kim} of the MSSM: Within any super\-symmetric (SUSY) extension of
the Standard~Model (SM), a supersymmetric Higgs(ino) mass term $|\mu|
\gsim$ $100$~GeV is necessary in order to satisfy the LEP constraints on
chargino masses, but $|\mu| \lsim M_{SUSY}$ is required in order that
the effective potential develops a non-trivial minimum with
$\left<H_u\right>$, $\left<H_d\right> \neq 0$. (Here $M_{SUSY}$ denotes
the order of magnitude of the soft SUSY breaking scalar masses such as
$m_{H_u}$ and $m_{H_d}$.) The question is why a supersymmetric mass
parameter such as $\mu$ happens to be of the same order as $M_{SUSY}$.
In the NMSSM, an (effective) $\mu$-term is generated by the vacuum
expectation value (VEV) of an
additional gauge singlet superfield $S$ and a corresponding Yukawa
coupling, similarly to the way quark and lepton masses are generated
in the SM by the VEV of a Higgs field. To this end, the
$\mu$-term in the superpotential $W$ of the MSSM,
$ W_{MSSM} = {\mu} H_u H_d\ +\dots\; $,
has to be replaced by
\beq
W_{NMSSM} = {\lambda} S H_u H_d
+\frac{1}{3}{\kappa}S^3\ +\dots
\eeq
and the soft SUSY breaking term $ {\mu B} H_u H_d $ by
\beq
{\lambda A_\lambda} S H_u H_d +\frac{1}{3} {\kappa A_\kappa} S^3\; .
\eeq
Assuming that all soft SUSY breaking
terms are of ${\cal{O}}(M_{SUSY})$,
one obtains $\left< S\right> \sim M_{SUSY}/\kappa$ and hence an
effective $\mu$-parameter $\mu_{eff} \equiv \lambda \left< S\right> \sim
{\lambda}/{\kappa}\ M_{SUSY}$, which is of the desired order if
${\lambda}/{\kappa} \sim {\cal{O}}(1)$. Instead of the two parameters
$\mu$ and $B$ of the MSSM, the NMSSM contains four parameters $\lambda,\
\kappa,\ A_\lambda$ and $A_\kappa$, and the spectrum includes one
additional CP-even Higgs scalar, one CP-odd Higgs scalar and one
additional neutralino from the superfield $S$. Generally, these states
mix with the Higgs scalars and neutralinos of the MSSM. Then, each of
the neutralino/CP-even/CP-odd sectors can give rise to a phenomenology
different from that of the MSSM:
a) The Lightest Supersymmetric Particle (LSP) can be dominantly
singlino-like (consistent with WMAP constraints on $\Omega h^2$
\cite{wmap}, if its mass is only a few GeV below that of the
Next-to-LSP (NLSP), see \cite{belsemenov} and below), implying an
additional contribution to sparticle decay chains; note that the NLSP
could have a long lifetime leading to observable displaced vertices
\cite{singdecay};
b) The SM-like CP-even Higgs scalar $h_1$ can be $\sim
15$~GeV heavier than in the MSSM (at {low} $\tan\beta$!);
c) A CP-odd Higgs scalar $a_1$ can be (very) light (see also the talk by
J.~Gunion, these proceedings). A light CP-odd Higgs scalar can have an
important impact on B physics (see the talk by M. Sanchis-Lozano, these
proceedings), and can imply that the lightest CP-even scalar $h_1$
decays
|
Z}_{7980}$& 132667500\\
13&$\mathbb{Z}_{25}\oplus\mathbb{Z}_{325}\oplus\mathbb{Z}_{325}\oplus\mathbb{Z}_{325}$& 858203125\\
14&$\mathbb{Z}_{17513}\oplus\mathbb{Z}_{245182}$& 4293872366\\
15&$\mathbb{Z}_{37069}\oplus\mathbb{Z}_{556035}$& 20611661415\\
16&$\mathbb{Z}_{84847}\oplus\mathbb{Z}_{1357552}$& 115184214544\\
17&$\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}
\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{23186}\oplus\mathbb{Z}_{394162}$& 584898568448\\
18&$\mathbb{Z}_{400843}\oplus\mathbb{Z}_{7215174}$& 2892151991682\\
19&$\mathbb{Z}_{898243}\oplus\mathbb{Z}_{17066617}$& 15329969253931\\
20&$\mathbb{Z}_{19}\oplus\mathbb{Z}_{19}\oplus\mathbb{Z}_{19}\oplus\mathbb{Z}_{19}\oplus\mathbb{Z}_{5453}
\oplus\mathbb{Z}_{109060}$& 77502443441780\\
21&$\mathbb{Z}_{4301807}\oplus\mathbb{Z}_{90337947}$& 388616412770229\\
22&$\mathbb{Z}_{9536669}\oplus\mathbb{Z}_{209806718}$& 2000857223542342\\
23&$\mathbb{Z}_{20949827}\oplus\mathbb{Z}_{481846021}$& 10094590780588367\\
24&$\mathbb{Z}_{5}\oplus\mathbb{Z}_{5}\oplus\mathbb{Z}_{9192295}\oplus\mathbb{Z}_{220615080}$& 50598972420215000\\
25&$\mathbb{Z}_{101468531}\oplus\mathbb{Z}_{2536713275}$& 257396569582449025\\
26&$\mathbb{Z}_{25}\oplus\mathbb{Z}_{325}\oplus\mathbb{Z}_{8923525}\oplus\mathbb{Z}_{17847050}$& 1293976099416406250\\
27&$\mathbb{Z}_{490309597}\oplus\mathbb{Z}_{13238359119}$& 6490894524578165043\\
28&$\mathbb{Z}_{49}\oplus\mathbb{Z}_{154342069}\oplus\mathbb{Z}_{4321577932}$& 32683062689111444092\\
29&$\mathbb{Z}_{2376466133}\oplus\mathbb{Z}_{68917517857}$& 163780147157583236981\\
30&$\mathbb{Z}_{19}\oplus\mathbb{Z}_{19}\oplus\mathbb{Z}_{275089049}\oplus\mathbb{Z}_{8252671470}$&
819549256247415262830\\
31&$\mathbb{Z}_{11507960491}\oplus\mathbb{Z}_{356746775221}$& 4105427794534925793511\\
32&$\mathbb{Z}_{25318259953}\oplus\mathbb{Z}_{810184318496}$& 20512457185525873990688\\
33&$\mathbb{Z}_{55700389051}\oplus\mathbb{Z}_{1838112838683}$& 102383600234281102459833\\
34&$\mathbb{Z}_{2}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}
\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{1915580948}\oplus\mathbb{Z}_{32564876116}$&
511022336096582352633856\\
35&$\mathbb{Z}_{269747901677}\oplus\mathbb{Z}_{9441176558695}$& 2546737566070056079431515\\
\end{tabular}
\end{table}
\clearpage
\begin{table}[h]
\caption{Graph $I(n,3,4)$}
\begin{tabular}{r|l|r} $n$ & $\textrm{Jac}(I(n,3,4))$ & $\tau_{3,4}(n)=|\textrm{Jac}(I(n,3,4))|$ \\ \hline
5& $\mathbb{Z}_{2}\oplus\mathbb{Z}_{10}\oplus\mathbb{Z}_{10}\oplus\mathbb{Z}_{10}$ & 2000 \\
6& $\mathbb{Z}_{19}\oplus\mathbb{Z}_{114}$ & 2166 \\
7& $\mathbb{Z}_{71}\oplus\mathbb{Z}_{497}$ & 35287 \\
8& $\mathbb{Z}_{73}\oplus\mathbb{Z}_{584}$ & 42632 \\
9& $\mathbb{Z}_{289}\oplus\mathbb{Z}_{2601}$ & 751689 \\
10& $\mathbb{Z}_{2}\oplus\mathbb{Z}_{12}\oplus\mathbb{Z}_{60}\oplus\mathbb{Z}_{60}\oplus\mathbb{Z}_{60}$ & 5184000 \\
11& $\mathbb{Z}_{1541}\oplus\mathbb{Z}_{16951}$ & 26121491 \\
12& $\mathbb{Z}_{11}\oplus\mathbb{Z}_{11}\oplus\mathbb{Z}_{209}\oplus\mathbb{Z}_{2508}$ & 63424812 \\
13& $\mathbb{Z}_{5}\oplus\mathbb{Z}_{5}\oplus\mathbb{Z}_{1555}\oplus\mathbb{Z}_{20215}$ & 785858125 \\
14& $\mathbb{Z}_{16969}\oplus\mathbb{Z}_{237566}$ & 4031257454 \\
15& $\mathbb{Z}_{2}\oplus\mathbb{Z}_{10}\oplus\mathbb{Z}_{17410}\oplus\mathbb{Z}_{52230}$ & 18186486000 \\
16& $\mathbb{Z}_{71321}\oplus\mathbb{Z}_{1141136}$ & 81386960656 \\
17& $\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}
\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{23186}\oplus\mathbb{Z}_{394162}$ & 584898568448 \\
18& $\mathbb{Z}_{400843}\oplus\mathbb{Z}_{7215174}$ & 2892151991682 \\
19& $\mathbb{Z}_{37}\oplus\mathbb{Z}_{37}\oplus\mathbb{Z}_{23939}\oplus\mathbb{Z}_{454841}$ & 14906272578931 \\
20& $\mathbb{Z}_{8}\oplus\mathbb{Z}_{12}\oplus\mathbb{Z}_{120}\oplus\mathbb{Z}_{79080}\oplus\mathbb{Z}_{79080}$ & 72042006528000 \\
21& $\mathbb{Z}_{4487981}\oplus\mathbb{Z}_{94247601}$ & 422981442583581 \\
22& $\mathbb{Z}_{10002631}\oplus\mathbb{Z}_{220057882}$ & 2201157792287542 \\
23& $\mathbb{Z}_{22138559}\oplus\mathbb{Z}_{509186857}$ & 11272663275719063 \\
24& $\mathbb{Z}_{187}\oplus\mathbb{Z}_{187}\oplus\mathbb{Z}_{259369}\oplus\mathbb{Z}_{6224856}$ & 56458663080288216 \\
25& $\mathbb{Z}_{2114}\oplus\mathbb{Z}_{52850}\oplus\mathbb{Z}_{52850}\oplus\mathbb{Z}_{52850}$ & 312061332000250000 \\
\end{tabular} \vspace{1cm}
\end{table}
The first example of a Jacobian $\textrm{Jac}(I(n,3,4))$ with the maximum rank 13 occurs at
$$n=170,$$
$$Jac(I(170,3,4))\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_{4}^8\oplus\mathbb{Z}_{6108}\oplus\mathbb{Z}_{30540}
\oplus\mathbb{Z}_{2^{2}\cdot3\cdot5\cdot103\cdot509\cdot1699\cdot11593\cdot p\cdot q}
\oplus\mathbb{Z}_{2^{2}\cdot3\cdot5\cdot17\cdot103\cdot509\cdot1699\cdot11593\cdot p\cdot q},$$
and
$$\tau_{3,4}(170)=2^{25}\cdot3^{4}\cdot5^{3}\cdot17\cdot103^{2}\cdot509^{4}\cdot1699^{2}\cdot
11593^{2}\cdot p^{2}\cdot q^{2},$$
where $p=16901365279286026289$ and $q=34652587005966540929.$
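For completeness we note that the entries of the tables can in principle be recomputed directly from the graph: the Jacobian of a connected graph is the torsion part of the cokernel of its reduced Laplacian, and its order $\tau$ equals the number of spanning trees by the Matrix-Tree theorem. The following short Python sketch is purely illustrative; it assumes SymPy's \texttt{smith\_normal\_form} helper and leaves the construction of the adjacency matrix of $I(n,3,4)$ to the reader.
\begin{verbatim}
# Illustrative sketch: Jacobian (critical group) of a graph from its
# reduced Laplacian.  Assumes SymPy provides smith_normal_form.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def jacobian(adjacency):
    # adjacency: symmetric integer matrix of the (multi)graph
    A = Matrix(adjacency)
    n = A.rows
    L = Matrix.diag(*[sum(A.row(i)) for i in range(n)]) - A  # Laplacian
    Lred = L[1:, 1:]                 # delete one row and one column
    tau = Lred.det()                 # |Jac| = number of spanning trees
    D = smith_normal_form(Lred, domain=ZZ)
    invariants = [D[i, i] for i in range(D.rows) if D[i, i] > 1]
    return invariants, tau           # Jac is the direct sum of Z_d, d in invariants
\end{verbatim}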
\clearpage
\newpage
\section*{ACKNOWLEDGMENTS}
The author is grateful to Professor D. Lorenzini for helpful comments on the preliminary results of the paper
and to Professor Young Soo Kwon, whose remarks and suggestions greatly assisted in the completion of the text.
The author was supported by the Russian Foundation for Basic Research (16-31-00138)
and the Slovenian-Russian grant (2016-2017).
|
\section{INTRODUCTION}
The Bag-of-Words (BoW) method was originally proposed for document retrieval. In recent years it has been successfully applied to image retrieval tasks in the computer vision community \cite{nister,Sivic2}. The method is attractive because of its efficient image representation and retrieval: BoW represents an image as a sparse vector of visual words, so images can be searched efficiently using an inverted index file system. Moreover, because the complexity of BoW does not grow with the size of the dataset as quickly as that of other search techniques (e.g., direct feature matching), it can be employed for large-scale image search applications.
One major application area that benefits from BoW is appearance-based mobile robot localization and SLAM\footnote{Simultaneous Localization And Mapping}. SLAM employs BoW to solve the \emph{loop closure detection} (LCD) problem, a classic and difficult problem in SLAM. LCD is addressed as a place recognition problem: the robot should be able to recognize places it has visited before in order to localize itself or refine the map of the environment. This task is performed by matching the current view of the robot against the existing map, which contains the images of the previously visited locations.
In large-scale environments, SLAM maps contain a large number of images that must be matched in order to solve the loop closure detection problem. Image search in such large maps is challenging and still an open problem. Although BoW provides an efficient search technique, its vector quantization (VQ) step can be computationally expensive. Vector quantization maps the image feature descriptors to the words of a visual vocabulary. Typically hundreds to thousands of features are extracted from an image and need to be matched against tens or hundreds of thousands of visual words. Approximate nearest neighbor search algorithms, e.g. the hierarchical k-means tree \cite{nister} and randomized KD-trees \cite{RKDtree_Silpa-AnanH08}, have been used to speed up the quantization process, however at the cost of search accuracy.
In this paper, we adapt the graph-based nearest neighbor search algorithm (GNNS) proposed in \cite{Hajebi} to increase the efficiency of the vector quantization. GNNS utilizes a $k$-NN graph as the search index, which is constructed in an offline pre-processing phase. However, the graph construction can be expensive when the dataset (i.e., the vocabulary in our application) is large. Fortunately, by integrating GNNS into the BoW model, the $k$-NN graph can be extracted from the k-means procedure
employed for visual vocabulary construction in BoW-based algorithms, adding only a small computational cost on top of the k-means clustering itself. Most importantly, we show that GNNS can exploit the sequential dependency in SLAM data to speed up the vector quantization process considerably, unlike other search indices, for which exploiting such sequential dependency is difficult. This motivates the use of GNNS rather than other search algorithms for solving the loop closure detection problem.
To support our claim, we experimentally show that adapting GNNS to the vector quantization problem
outperforms the state-of-the-art search methods, decreasing the number of distance computations performed and hence increasing the search speedup.
\section{BACKGROUND}
\label{sec:bakgnd}
\subsection{Bag-of-Words for Image Retrieval}
\label{sec:bow}
Bag-of-words is a popular model that has been used in image classification, object recognition, and appearance-based navigation. Because of its simplicity and search efficiency it has also been used successfully in large-scale image and document retrieval \cite{Sivic:2003,nister,Sivic2}.
The bag-of-words model represents an image by a sparse vector of visual words. Image features, e.g., SIFTs \cite{Lowe:2004}, are sampled and clustered (e.g., using \emph{k-means}) in order to quantize the space into a discrete set of visual words. The cluster centroids are then taken as the visual words, which form the visual vocabulary. During image retrieval or robot navigation, when a new image arrives, its local features are extracted and vector-quantized into visual words. Each word may be weighted by a score, either the word frequency in the image (i.e., \emph{tf}) or the ``term frequency-inverse document frequency'' (\emph{tf-idf}) \cite{Sivic:2003}. A histogram of the weighted visual words, which is typically sparse, is then built and used to represent the image.
An inverted index file, used in the BoW framework, is an efficient image search tool in which the visual words are mapped to the database images. Each visual word serves as a table index and points to the indices of the database images in which the word occurs. Since each word occurs in only a subset of the images, retrieval through the inverted-index file is largely independent of the map size and therefore fast.
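For illustration, a minimal sketch of such an index is given below (a simplification: word weights are whatever per-word scores the representation uses, e.g. \emph{tf-idf}, and images are scored by a simple dot product).
\begin{verbatim}
# Minimal inverted-index sketch for BoW retrieval (illustrative only).
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(list)   # word id -> [(image id, weight)]

    def add_image(self, image_id, bow):
        # bow: dict mapping word id -> weight for one database image
        for word, weight in bow.items():
            self.postings[word].append((image_id, weight))

    def query(self, bow):
        # Only images sharing at least one word with the query are touched.
        scores = defaultdict(float)
        for word, weight in bow.items():
            for image_id, db_weight in self.postings.get(word, []):
                scores[image_id] += weight * db_weight
        return sorted(scores.items(), key=lambda s: -s[1])
\end{verbatim}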
\subsection{Bag-of-Words for Appearance-based SLAM}
\label{sec:relwork}
Bag-of-words model has been extensively used as the basis of the image search in appearance-based localization or SLAM algorithms \cite{fabmap,fabmap2,AcharJK11,Angeli08,bi-BoW}.
Cummins and Newman \cite{fabmap,fabmap2} propose a probabilistic framework over the bag-of-words representation of locations, for the appearance-based place recognition. Along with the visual vocabulary they also learn the Chow Liu tree to capture the co-occurrences of visual words. Similarly, Angeli et al. \cite{Angeli08} develop a probabilistic approach for place recognition in SLAM. They build two visual vocabularies incrementally and use two BoW representations as an input of a Bayesian filtering framework to estimate the likelihood of loop closures.
Assuming each image has hundreds of features, mapping the features to the visual words using a linear search method is computationally prohibitive and not practical for real-time localization. Researchers have tackled this problem with different approaches that speed up the search, but at the expense of accuracy. A number of papers have employed compact feature descriptors that speed up the search. G\'{a}lvez-L\'{o}pez and Tard\'{o}s in \cite{bi-BoW} propose to use FAST \cite{FAST} and BRIEF \cite{BRIEF} binary features and introduce a BoW model that discretizes a binary space. Similarly, \cite{fabmap,fabmap2} use SURF \cite{Surf} to obtain a more compact feature descriptor. Another approach is to use approximate nearest neighbor search algorithms, like hierarchical k-means \cite{nister}, KD-trees \cite{fabmap} or locality sensitive hashing (LSH) \cite{LSH_shahbazi}, to speed up the quantization process.
Li and Kosecka \cite{Kosecka} and Zhang \cite{BoRF} propose reducing the number of features in each image, thereby reducing or removing the vector-quantization step.
\section{Search Indices for Vector Quantization}
Vector quantization, cast as a nearest-neighbor search problem, maps the high-dimensional feature descriptors to visual words. When the vocabulary is large, in order to provide sufficient discriminating power for image matching, the vector-quantization process can be computationally expensive. An efficient approximate nearest-neighbor search method is therefore required to speed up the search while minimizing the impact of the approximation on accuracy.
In the following two subsections, we briefly explain two of the most popular approximate nearest neighbor search methods that are widely-adapted to BoW search: hierarchical k-means and KD-trees. Our method will be compared against these two methods in the evaluation section.
\subsection{KD-trees}
The classical KD-tree algorithm \cite{Bentley:1975,kdtree2,kdtree1} partitions the space by hyper-planes that are perpendicular to the coordinate axes. At the root of the tree a hyperplane orthogonal to one of the dimensions splits the data into two halves according to some splitting value, which is usually the median or the mean of the data to be partitioned. Each half is recursively partitioned into two halves with a hyperplane through a different dimension. Partitioning stops after $\log n$ levels, where $n$ is the total number of data points, so that at the bottom of the tree each leaf node corresponds to one of the data points. The splitting values at each level are stored in the nodes. The query point is then compared to the splitting value at each node while traversing the tree from root to leaf to find the nearest neighbor. Since the leaf point is not necessarily the nearest neighbor, to find approximate nearest neighbors a backtracking step from the leaf node is performed and the points that are closer to the query point in the tree are examined. Instead of simple backtracking, \cite{BBFKDtree} proposes to use the Best-Bin-First (BBF) heuristic to perform the search faster. In BBF one maintains a sorted queue of the nodes that have been visited and expands the bins in the order of their distance to the query point (\emph{priority search}).
\subsection{Hierarchical K-means tree}
The hierarchical k-means tree (HKM), proposed by \cite{HKM_Fukunage:1975}, is another type of partitioning tree based on k-means clustering. The tree is built by running k-means clustering recursively. The data points are first partitioned into $k$ distinct clusters to form the nodes of the first layer of the tree. Each cluster is recursively partitioned into $k$ (the branching factor) clusters and this process continues until there are no more than $k$ data points in each node. A depth-first search is a common tree traversal approach for searching the tree. \cite{FLANN} proposes to use priority queues to search the tree more efficiently. Similar to the Best-Bin-First approach \cite{BBFKDtree}, when traversing the tree, the unvisited branches of the nodes along the path are added to the priority queue. When backtracking, the branches are extracted and expanded in the order of their distance (i.e. the distance of their cluster centroid) to the query.
\section{PROPOSED METHOD}
Our fast vector quantization method employs a graph-based $k$-nearest neighbor search algorithm, called GNNS, that outperforms the popular ANN-search methods widely used in BoW models and SLAM systems. In the following sub-sections we describe GNNS's properties and how naturally GNNS can be adapted for vector quantization in BoW and SLAM systems.
\subsection{The Graph Nearest Neighbor Search (GNNS)}
The graph nearest neighbor search (GNNS) algorithm used in this work was originally proposed in \cite{Sebastian:Kimia} and independently re-discovered in \cite{Hajebi}. GNNS builds a $k$-NN graph and, when queried with a new point, performs hill-climbing starting from a randomly sampled node of the graph. In our application the graph is constructed in an offline phase, which is explained in the following sub-sections\footnote{The following sub-sections are re-stated from our paper \cite{Hajebi} for clarity of presentation.}.
\subsection{$k$-NN Graph Construction}
\label{sec:KNNG}
A $k$-NN graph is a directed graph $\MG=(\MD,\ME)$, where $\MD$ is the set of nodes (i.e. datapoints) and $\ME$ is the set of links. Node $X_i$ is connected to node $X_j$ if $X_j$ is one of the $k$-NNs of $X_i$. The computational complexity of the naive construction of this graph is $O(dn^2)$, where $n$ is the size of the dataset and $d$ is the dimensionality of the data.
The choice of $k$ is crucial to have a good performance. A small $k$ makes the graph too sparse or disconnected so that the hill-climbing method frequently gets stuck in local minima. Choosing a big $k$ gives more flexibility during the runtime, but consumes more memory and makes the offline graph construction more expensive.
\subsection{Approximate $K$-Nearest Neighbor Search Algorithm}
\label{sec:GNNS}
\begin{table}
\begin{center}
\framebox{\parbox{8cm}{
\begin{algorithmic}
\STATE \textbf{Input}: a $k$-NN graph $\MG=(\MD,\ME)$, a query point $Q$, the number of nearest neighbors to be returned $K$, the number of random restarts $R$, the number of greedy steps $T$, and the number of expansions $E$.
\STATE $\rho$ is a distance function. $N(Y, E, \MG)$ returns the first $E$ neighbors of node $Y$ in $\MG$.
\STATE $\MS=\{\}$.
\STATE $\MU=\{\}$.
\STATE $Z=X_1$.
\FOR{$r=1,\dots, R$}
\STATE $Y_0$: a point drawn randomly from a uniform distribution over $\MD$.
\FOR{$t=1,\dots,T$}
\STATE $Y_{t}=\argmin_{Y\in N(Y_{t-1},E,\MG)} \rho(Y,Q)$.
\STATE $\MS=\MS\bigcup N(Y_{t-1},E,\MG)$.
\STATE $\MU=\MU\bigcup\{\rho(Y,Q):\ Y\in N(Y_{t-1},E,\MG)\}$.
\ENDFOR
\ENDFOR
\STATE Sort $\MU$, pick the first $K$ elements, and return the corresponding elements in $\MS$.
\end{algorithmic}
}}
\end{center}
\caption{The Graph Nearest Neighbor Search (GNNS) algorithm for $K$-NN Search Problems.}
\label{alg:GNNS}
\end{table}
The GNNS algorithm, which is basically a best-first search method to solve the $K$ nearest neighbor search problem, is shown in Table~\ref{alg:GNNS}. Throughout this section, we use capital $K$ to indicate the number of nearest neighbors to be returned, and small $k$ to indicate the number of neighbors of each node in the $k$-nearest neighbor graph.
Starting from a randomly chosen node from the $k$-NN graph, the algorithm replaces the current node $Y_{t-1}$ by the neighbor that is closest to the query:
$$Y_t=\argmin_{Y\in N(Y_{t-1},E,\MG)} \rho(Y,Q),$$
where $N(Y,E,\MG)$ returns the first $E\leq k$ neighbors of $Y$ in $\MG$, and $\rho$ is a distance measure (we use Euclidean distance in our experiments).
The algorithm terminates after a fixed number of greedy moves $T$.
We can alternatively terminate the search when the algorithm reaches a node that is closer to the query than its best neighbor. At termination, the current best $K$ nodes are returned as the $K$ nearest neighbors to the query. Figure~\ref{fig:NNG} illustrates the algorithm on a simple nearest neighbor graph with query $Q$, $K=1$ and $E=3$.
Parameters $R$, $T$, and $E$ specify the computational budget of the algorithm. By increasing each of them, the algorithm spends more time in the search and returns a more accurate result. The difference between $E$, $k$, and $K$ should be noted.
$E$ and $K$ are two input parameters to the search algorithm (online), while $k$ is a parameter of the $k$-NN graph construction algorithm (offline).
Given a query point $Q$, the search algorithm has to find the $K$ nearest neighbors of $Q$. The algorithm, in each greedy step, examines only $E$ out of $k$ neighbors (of the current node) to choose the next node. Hence, it effectively works on an $E$-NN graph.
In our vector quantization application in this paper, we set $K=1$ as we do not need to assign more than one visual word to each image feature.
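For concreteness, a minimal Python sketch of this local-minimum-terminating variant is given below (illustrative only: the Euclidean metric, the data layout, and the parameter handling are simplified with respect to the actual implementation).
\begin{verbatim}
import random
import numpy as np

def gnns(graph, words, query, K=1, R=1, E=None):
    # graph[i]: neighbours of word i, sorted by increasing distance
    # words:    (n, d) array of visual-word descriptors
    # query:    (d,) feature descriptor to be quantized
    dist = lambda j: np.linalg.norm(words[j] - query)
    visited = {}
    for _ in range(R):
        cur = random.randrange(len(words))
        visited[cur] = dist(cur)
        while True:
            neigh = graph[cur] if E is None else graph[cur][:E]
            for j in neigh:
                if j not in visited:
                    visited[j] = dist(j)
            best = min(neigh, key=visited.get)
            if visited[best] >= visited[cur]:
                break            # no neighbour improves: local minimum
            cur = best
    ranked = sorted(visited, key=visited.get)
    return ranked[:K]            # with K = 1 this is the assigned visual word
\end{verbatim}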
\begin{figure}[tp]
\centering
\includegraphics[width=2in]{./images/gnn.eps}
\caption{The GNNS Algorithm on a simple nearest neighbor graph.}
\label{fig:NNG}
\end{figure}
\subsection{Graph Vocabulary Construction}
The vocabulary of a BoW model is usually constructed using k-means clustering. Feature descriptors from a training dataset are first clustered into visual words, and then a search index is built over the visual words in an offline phase.
In the previous sub-section, we proposed using a $k$-NN graph as the search index for the vocabulary, so that each visual word corresponds to a node in the graph. However, graph construction is a computationally expensive process, especially when the dataset is large. Even though this search index is constructed only once, offline, it is still desirable to minimize its computational cost.
As an alternative, we show that the graph construction can be efficiently integrated into the k-means clustering process.
In k-means processing, in every iteration, the distance of the data points to each cluster centroid is computed to update their membership in the new clusters, and this continues until convergence. In the end, the cluster centroids are chosen as visual words. Given $n$ data points and $C$ cluster centroids\footnote{The number of clusters generated by k-means is generally denoted by ``k''; however, to avoid confusion with the other $k$ and $K$ notations we use for the $k$-NN graph and $K$-NN search, we use $C$ to denote the number of cluster centroids.}, the complexity of each iteration is then $O(nC)$. The $k$-NN graph construction can be embedded into k-means processing as follows:
in the last iteration, in addition to the data points, the distance of each centroid to every other centroid is computed and the nearest neighbors are used to build the $k$-NN graph. The additional computational cost is negligible, as $C\ll n$. The complexity of the last iteration will then change to $O((n+C)C)$, which is slightly higher than $O(nC)$. This shows that the complexity
of graph construction is comparable to the complexity of one iteration of k-means clustering; in applications where k-means clustering is performed anyway (as in BoW vocabulary construction), the construction time of the graph-based index is absorbed by the k-means algorithm.
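A brief illustrative sketch of this construction (our own simplification, computing the centroid-to-centroid distances by brute force after the final k-means iteration):
\begin{verbatim}
import numpy as np

def knn_graph_from_centroids(centroids, k):
    # centroids: (C, d) array of visual words from the last k-means iteration
    sq = np.sum(centroids**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * centroids @ centroids.T
    np.fill_diagonal(d2, np.inf)          # a word is not its own neighbour
    order = np.argsort(d2, axis=1)
    return {i: order[i, :k].tolist() for i in range(len(centroids))}
\end{verbatim}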
However, k-means clustering with time complexity $O(nC)$ is an expensive process and only practical for applications that require small vocabularies ($C < 10^5$). Hierarchical k-means (HKM) \cite{nister}, with complexity $O(n\log C)$, has been proposed to reduce the computational cost of k-means. Philbin et al. [25] propose a modification of k-means where the exact nearest neighbor search is replaced by an approximate NN-search algorithm, e.g. KD-trees. They show that this modification achieves the complexity of HKM. They also demonstrate that approximate k-means (AKM) outperforms HKM when applied to the vector quantization problem. This justifies the use of k-means or approximate k-means clustering in generating the visual vocabulary (before graph construction) in our proposed method.
\subsection{Exploiting Sequential Dependencies in Data}
\label{sec:seq}
The sequential property of data can be utilized to the advantage of image retrieval.
Unlike many image retrieval and classification applications that search in a pool of unordered images, in appearance-based SLAM we can take advantage of the temporal coherency of images to make the image search for loop closure detection efficient. We propose to use GNNS to perform the vector quantization. In standard GNNS, the search is initiated from a random node. By taking the sequential property of images into account, we can make this random initialization smarter: sequential images usually have considerable overlap with their neighboring images and hence share a certain number of features and visual words.
This property can reduce the amount of computation required for the vector-quantization step: we can quantize a feature once, in the image where it is first observed, and reuse its visual word in subsequent images for as long as the feature is observed. This requires us to match each new frame to the previous frame(s) to find the repeatable features. This step does not incur a significant cost, as feature matching between two images can be performed efficiently; in fact it incurs no additional cost if the matching between consecutive frames is already done in the process of key-frame detection, where a new frame is matched against the previous key-frame in order to decide whether it is sufficiently different in appearance to be considered a new location in the map. This process is done through direct feature matching between images \cite{keyframe}.
Our approach works as follows: once a new image is captured, the features are extracted. Each feature is vector-quantized through GNNS. Let $f$ be a feature in the current image that has a match $f^\prime$ in the previous key-frame. Let the visual word assigned to $f^\prime$ be $w^\prime$. Intuitively, there is a good chance that the visual word $w^\prime$ is also the word or one of the neighbors of the word corresponding to feature $f$. Therefore we start the GNNS search from $w^\prime$, rather than a random node. This can significantly reduce the number of iterations and distance computations in the GNNS search. This is an advantage of the graph-based index over other search indices when images to be processed are temporally dependent as in visual SLAM, as it is not trivial to employ such prior knowledge in other search indices.
In the GNNS algorithm described in Section \ref{sec:GNNS}, $R$, which indicates the number of random restarts, can take any value based on our computational budget. However, our proposed sequential method helps us choose a single good initial node to start the search from. Having $R>1$ might reduce the efficiency of our sequential algorithm, so we set $R$ to 1 in our experiments. As we show in the experiments section, this is still an effective choice in practice. In addition, we choose the version of GNNS in which the search terminates at a local minimum, instead of making $T$ greedy moves. Note that improving the speedup is not possible if we always make a fixed number of greedy moves.
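The following self-contained sketch illustrates the resulting quantization loop (illustrative only; function and variable names are our own, and the greedy descent re-states the GNNS step for readability).
\begin{verbatim}
import random
import numpy as np

def greedy_descent(graph, words, query, start):
    # One hill-climbing run on the word graph, seeded at `start`.
    dist = lambda j: np.linalg.norm(words[j] - query)
    cur, d_cur = start, dist(start)
    while True:
        best = min(graph[cur], key=dist)
        if dist(best) >= d_cur:
            return cur               # local minimum: assigned visual word
        cur, d_cur = best, dist(best)

def quantize_frame(features, match_to_prev, prev_words, graph, words):
    # match_to_prev[i]: index of the matching feature in the previous
    # key-frame, or None.  Matched features are seeded at the word of
    # their match; the others start from a random node (plain GNNS, R = 1).
    assigned = []
    for i, f in enumerate(features):
        j = match_to_prev.get(i)
        seed = prev_words[j] if j is not None else random.randrange(len(words))
        assigned.append(greedy_descent(graph, words, f, seed))
    return assigned
\end{verbatim}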
\section{EXPERIMENTS}
In this section, we compare the performance of our method with hierarchical k-means (HKM) and KD-trees, when applied to the problem of vector quantization in the context of visual SLAM. We will describe the datasets we used for performance evaluation of all methods, followed by discussion of our experimental results.
\subsection{Datasets}
We performed our experiments on two datasets: an outdoor and an indoor dataset. The outdoor dataset is the City Center dataset\footnote{\url{http://www.robots.ox.ac.uk/~mobile/IJRR_2008_Dataset/}} (left-side sequence) \cite{fabmap} and contains 1237 images. The indoor dataset is a \emph{lab} dataset that has been taken inside a research laboratory using an ActivMedia Pioneer P3-AT mobile robot equipped with a Dragonfly IEEE-1394 camera, and contains X images.
Two vocabularies of different sizes, with 5000 and 204,000 words, constructed using k-means clustering, have been used for our study. We clustered $128$-dimensional SIFT \cite{Lowe:2004} feature descriptors extracted from datasets different from those mentioned above. The 204,000-word vocabulary is used to evaluate the performance of our method on large-scale data.
\subsection{Vector Quantization Performance Evaluation}
We compare the performance of four methods on vector quantization: randomized KD-trees, the hierarchical k-means tree (HKM), GNNS, and our proposed method, Sequential GNNS (SGNNS), which is GNNS augmented with the sequential property of the data. For KD-trees and HKM we used the FLANN library\footnote{\url{http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN}} implementations.
Tables \ref{table:exp1}-\ref{table:exp4} compare the results of four search algorithms on vector quantization. The first two columns show the results when all image features are quantized, and the last two columns show the results when only matched features between images are quantized.
Matched features between each image and the previous keyframe are detected using feature matching with distance-ratio test and epipolar geometric verification. In SGNNS's implementation, for the features that have a match in the previous frame, GNNS search starts from the visual word assigned to their match in the previous frame. For the rest of image features, GNNS starts from a random node.
Our performance evaluation metric is the speedup over linear search at a fixed search accuracy, measured as the fraction of true nearest neighbors found. We select the parameters of each algorithm such that a fixed accuracy is obtained and then calculate the speedup of the algorithms over linear search at the same accuracy. The speedup over linear search is computed as the ratio of the number of distance computations performed by linear search to the number of distance computations performed by the algorithm.
The only parameter in GNNS and SGNNS that we set here is $k$, the size of the $k$-NN graph index. We explained in Sections \ref{sec:seq} and \ref{sec:GNNS} how the other parameters are chosen. $E$, the number of expansions, is also set equal to $k$ in our experiments.
In the case of KD-trees, the FLANN parameters that we set include \emph{trees}, the number of randomized KD-trees, and \emph{checks}, the number of leaf nodes to check in one search. In the case of HKM, the parameter set includes \emph{iterations}, the maximum number of iterations to perform in one k-means clustering, \emph{branching}, which is the branching factor of the tree, and \emph{checks}, the number of leaf nodes to check.
Tables \ref{table:exp1}-\ref{table:exp3} show the experiments on the City Center dataset; Tables \ref{table:exp1} and \ref{table:exp2} use the 5000-word vocabulary.
The average number of SIFT features extracted from each image is 316 and the average number of matched features is 42, which is roughly 13\% of all features. The speedups for accuracies of $\sim$87\% and $\sim$99\% have been shown in the first two tables.
To obtain the results in Table \ref{table:exp1}, we used a 50-NN graph for GNNS and SGNNS. For KD-trees, we set \emph{trees} and \emph{checks} to 1 and 200, respectively. For HKM, we set \emph{iterations}, \emph{branching} and \emph{checks} to 3, 8 and 40, respectively.
In Table \ref{table:exp2}, we used a 200-NN graph for GNNS and SGNNS. For KD-trees, we set \emph{trees} and \emph{checks} to 4 and 600, respectively. For HKM, we set \emph{iterations}, \emph{branching} and \emph{checks} to 7, 8 and 160, respectively.
In Table \ref{table:exp3}, we show the vector quantization results when the 204,000-word vocabulary is used. We used a 300-NN graph for GNNS and SGNNS. For KD-trees, we set \emph{trees} and \emph{checks} to 4 and 2200, respectively. For HKM, we set \emph{iterations}, \emph{branching} and \emph{checks} to 7, 8 and 500, respectively.
The last part of experiments has been done on an indoor dataset (Table \ref{table:exp4}) where the overlap between images is larger (around 82\% of features are matched). We used the 5000-word vocabulary for this experiment. For GNNS and SGNNS we used a 250-NN graph and for KD-trees, we set \emph{trees} and \emph{checks} to 6 and 400, respectively. For HKM, we set \emph{iterations}, \emph{branching} and \emph{checks} to 3, 8 and 160, respectively.
As can be seen in all experiments, both GNNS and HKM outperform KD-trees, and SGNNS, our proposed method, outperforms HKM by as much as 50\% when all features are vector-quantized, or by as much as 300\% if only matched features are used in creating the BoW representation of an image. This superior performance is due both to the efficiency of graph-based search (GNNS), as indicated by the third row of each table relative to the first two rows, and to the exploitation of the sequential property of the images whose features are to be vector-quantized, as indicated by the last row (SGNNS) of each table relative to the third row (GNNS).
We also observed that features that are common between two images do not necessarily share the same visual words. On average, 64\% of corresponding features share the same visual word in the experiments presented in Tables \ref{table:exp1}-\ref{table:exp2}. This fraction was reduced to 36\% when we used the 204k-word vocabulary (Table \ref{table:exp3}).
\begin{table}[!h]
\caption{Comparison of different search algorithms on Vector Quantization - City Center dataset, Accuracy fixed at $\sim$87\%}
\label{table:exp1}
\begin{center}
\tabcolsep 4.5pt
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c||c|c|}
\cline{2-5}
\multicolumn{1}{c}{} & \multicolumn{2}{|c||}{VQ of all features} & \multicolumn{2}{c|}{VQ of matched features} \\ \cline{2-5}
\multicolumn{1}{c|}{} & Accuracy & Speedup & Accuracy & Speedup \\ \hline
KD & 0.8839 & 8.4527 & 0.9209 & 8.3847 \\ \hline
HKM & 0.8892 & 27.0361 & 0.9130 & 26.1947 \\ \hline
GNNS & 0.8665 & 23.3501 & 0.8904 & 23.8412 \\ \hline
SGNNS & 0.8783 & 32.1414 & 0.9624 & 81.5666 \\ \hline
\end{tabular}}
\end{center}
\begin{tabular}{cc}
\vspace{.1 cm}
\end{tabular}
\caption{Comparison of different search algorithms on Vector Quantization - City Center dataset, Accuracy fixed at $\sim$99\%}
\label{table:exp2}
\begin{center}
\tabcolsep 4.5pt
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c||c|c|}
\cline{2-5}
\multicolumn{1}{c}{} & \multicolumn{2}{|c||}{VQ of all features} & \multicolumn{2}{c|}{VQ of matched features} \\ \cline{2-5}
\multicolumn{1}{c|}{} & Accuracy & Speedup & Accuracy & Speedup \\ \hline
KD & 0.9899 & 2.0767 & 0.9951 & 2.0678 \\ \hline
HKM & 0.9925 & 6.9204 & 0.9956 & 6.3487 \\ \hline
GNNS & 0.9886 & 7.4055 & 0.9917 & 7.5927 \\ \hline
SGNNS & 0.9893 & 9.3872 & 0.9952 & 20.5244 \\ \hline
\end{tabular}}
\end{center}
\begin{tabular}{cc}
\vspace{.1 cm}
\end{tabular}
\caption{Comparison of different search algorithms on Vector Quantization - City Center dataset, Accuracy fixed at $\sim$92\%}
\label{table:exp3}
\begin{center}
\tabcolsep 4.5pt
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c||c|c|}
\cline{2-5}
\multicolumn{1}{c}{} & \multicolumn{2}{|c||}{VQ of all features} & \multicolumn{2}{c|}{VQ of matched features} \\ \cline{2-5}
\multicolumn{1}{c|}{} & Accuracy & Speedup & Accuracy & Speedup \\ \hline
KD & 0.9319 & 22.4525 & 0.9376 & 22.3318 \\ \hline
HKM & 0.9024 & 86.7133 & 0.8984 & 88.0266 \\ \hline
GNNS & 0.9221 & 90.7018 & 0.9329 & 88.9622 \\ \hline
SGNNS & 0.9236 & 121.7556 & 0.9401 & 285.3466 \\ \hline
\end{tabular}}
\end{center}
\begin{tabular}{cc}
\vspace{.1 cm}
\end{tabular}
\caption{Comparison of different search algorithms on Vector Quantization - Lab dataset, Accuracy fixed at $\sim$98\% }
\label{table:exp4}
\begin{center}
\tabcolsep 4.5pt
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c||c|c|}
\cline{2-5}
\multicolumn{1}{c}{} & \multicolumn{2}{|c||}{VQ of all features} & \multicolumn{2}{c|}{VQ of matched features} \\ \cline{2-5}
\multicolumn{1}{c|}{} & Accuracy & Speedup & Accuracy & Speedup \\ \hline
KD & 0.9833 & 2.9367 & 0.9829 & 2.9369 \\ \hline
HKM & 0.9921 & 7.2662 & 0.9919 & 7.2633 \\ \hline
GNNS & 0.9886 & 6.0819 & 0.9894 & 6.0555 \\ \hline
SGNNS & 0.9935 & 15.8861 & 0.9955 & 18.3208 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\if0
\subsection{Discussion}
Vocabulary construction time
memory usage
sequential data with when overlap is small/large and how better GNNS work accordingly.
\fi
\section{CONCLUSION}
We proposed to use the graph nearest neighbor search (GNNS) algorithm to speed up the vector quantization task in BoW. We described GNNS's properties and how it can be integrated into the BoW and SLAM framework, taking advantage of the sequential property of SLAM data. We experimentally showed significant improvements over state-of-the-art algorithms. We also observed that the improvement is larger if we only consider the matched features.
|
\section{Introduction}
The ability of ribonucleic acid (RNA) to adopt peculiar three-dimensional
structures that mediate a variety of biological functions makes it
the most versatile regulatory factor in the cell.\cite{bloo+00book}
Involved in virtually all cellular processing of the genetic information,
RNA is able to achieve such functional diversity by adaptively
acquiring very distinct conformations in response to specific conditions
of the cellular environment.\cite{hall+11acr} Among the structural
rearrangements engaged by RNA, the opening of complementary base pairs
is a ubiquitous process required to accomplish a wide range of metabolic
activities such as transcription, pre-mRNA splicing, ribosome biogenesis
or translation initiation.\cite{roca-lind04nrmcb} In the cell this
is usually catalyzed by enzymes called RNA helicases which have been
shaped by evolution to unwind double-stranded (ds) RNA according
to its intrinsic dynamic properties.\cite{jank-fair07cosb,pyle08arb}
From a molecular standpoint, the opening and forming of individual
base pairs are fundamental, yet poorly understood, events which provide
the structural framework to large-scale RNA conformational transitions
and folding.\cite{norb-nils02acr,zhua+07nar,li+08arb,alha-walt08cosb,oroz+08cosb,xia08cocb,hall+11acr,rinn+11acr}
In this respect and related to the work presented herein, insightful
investigations have
been reported only for short
deoxyribonucleic acids (DNAs)
in the B-form helical geometry.\cite{varn-lave2002jacs,haga+03pnas,giud-lave2003jacs}
Using transition-path sampling, Hagan et al.\cite{haga+03pnas} have
fully characterized the energetics of (un)pairing for a 5\textasciiacute{}-end
cytosine. However, the mechanism underlying the complete opening of
the duplex has not been systematically addressed nor analyzed. Moreover,
differences in topology and thermodynamic parameters between B-form
DNA and A-form RNA suggest that the mechanism of duplex \textcolor{black}{separation}
might obey different rules.
Recently, combining thermodynamic information with the relative population
of unpaired \textcolor{black}{terminal} nucleotides (dangling ends)
observed in large ribosomal RNA (rRNA) crystal structures, Mohan et
al.\cite{moha+2009jpcb} have proposed that stacking and pairing reactions
are not simultaneous, and that the 3\textasciiacute{}-single-strand
stacking leads the base pairing of the 5\textasciiacute{}-strand.
Nevertheless, collecting an unbiased data set of dangling-end population is not trivial
and, when viewed in the context of the full ribosomal assemblies,
the single stranded (ss) regions are seen to interact extensively
with other RNA elements.\cite{moha+2009jpcb}
On top of that,
since the
formation and opening of base pairs is a dynamic process, both the
ensemble-averaged thermodynamic properties and the detailed but static
X-ray picture have to be complemented with other methods able to directly
and quantitatively capture the dynamics of the investigated event.
Likely, this gap will be efficiently bridged by \emph{ad hoc} designed
spectroscopic approaches.\cite{alha-walt08cosb,lee+10pnas,rinn+11acr,hall+11acr}
For instance, femtosecond time-resolved fluorescence spectroscopy
is emerging as a powerful technique for the quantitative analysis
of base-stacking pattern and base motion,\cite{zhao-xia09methods}
although its applications to probe RNA dynamics are still in their
infancy and the method presents several limitations.\cite{xia08cocb}
As a matter of fact, the integration of spectroscopic approaches with
other powerful techniques is presently needed to gain molecular details
on the RNA intrinsic dynamics.
\textcolor{black}{
Among the possible methodological choices,
atomistic simulations\cite{karp-mcca2002natsb}
allow any base sequence to be characterized
and all the microscopic parameters to be controlled.
Additionally, when
combined with state-of-the-art free-energy methods,
they can provide an unparalleled perspective on the
mechanism and dynamics of the biomolecular process of interest. As
far as the reconstruction of free-energy profiles is concerned, the
capability of estimating those profiles along any suitable reaction
coordinate, without any further computational cost, would offer researchers
a powerful and versatile tool enabling both the disclosure of intermediate
states and the multifaceted analysis of complex conformational transitions. }
With this spirit, here we report an \emph{in silico} study elucidating
the mechanism for
\textcolor{black}{strand separation} in the RNA double
helix. In particular, we used atomistic steered molecular dynamics
(MD) simulations\cite{soto-schu07science} to enforce the \textcolor{black}{unbinding}
of nucleobases into the surrounding explicit water. To allow a systematic
analysis of different base sequences we devised a novel Jarzynski-equation-based
reweighting scheme which allowed the free-energy landscape to be reconstructed
as a function of different reaction coordinates and the unbinding
energies to be straightforwardly estimated. The computed free-energy
differences are consistent with experimental observations and suggest
that the \textcolor{black}{strand separation} mechanism occurs by a
stepwise process in which the probability of unbinding of the base
at the 5\textasciiacute{} \textcolor{black}{terminus} is significantly
higher than that at the 3\textasciiacute{} \textcolor{black}{terminus.}
The biological implications of these findings are discussed \textcolor{black}{and
related to the unwinding mechanism catalyzed by RNA processing machineries.}
\textcolor{black}{Given the general nature of our approach,} the introduced
methodology can be directly applied to analyze a broad range of molecular
unbinding processes.
\section{Methods}
\textcolor{black}{Throughout the manuscript the following nomenclature
will be consistently used to define each elementary step involving
one single base and occurring during the opening of a closed base pair:
unpairing is used to define the process undergone by a single base
for which both Watson-Crick hydrogen bonds and stacking interactions
with adjacent bases are broken; unstacking is the process breaking
the stacking interactions between a dangling terminal nucleotide and
its adjacent bases. The opening of a base pair is thus composed of
an initial unpairing followed by an unstacking (}\ref{fig:Thermodynamics-cycle}\textcolor{black}{).
In the manuscript we also use the term unbinding to refer generically
to both the unpairing and the unstacking processes.
Finally, the strand with the 5\textasciiacute{} terminal (or 3\textasciiacute{} terminal) nucleobase
being pulled
is referred to as the 5\textasciiacute-strand (or 3\textasciiacute-strand).
}
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig1}
\caption{\label{fig:Thermodynamics-cycle}\textcolor{black}{Elementary steps
involved in the opening of a base pair. The thermodynamics cycle was
used to characterize different base pair combinations.}}
\end{figure}
\subsection{System set-up}
We simulated the unpairing and unstacking of nucleobases at both 3\textasciiacute{}-
and 5\textasciiacute{}-\textcolor{black}{termini} in dsRNAs of sequence
${\textrm{5\textasciiacute-CCGGGC-3\textasciiacute}\atop \textrm{3\textasciiacute-GGCCCG-5\textasciiacute}}$
and ${\textrm{5\textasciiacute-GGCCCG-3\textasciiacute}\atop \textrm{3\textasciiacute-CCGGGC-5\textasciiacute}}$
(\ref{fig:double-helix}). \textcolor{black}{Two sets of data can be
obtained from each dsRNA thus resulting }in four systems with different
combinations of Watson-Crick base pairing and stacking (\ref{fig:double-helix}A).
Both terminal and non-terminal base pairs (i.e.~a base pair at the
ss-ds RNA junction) were investigated. Non-terminal base pairs showed
the same trend in relative stability observed for terminal ones, and
are reported in the Supporting Information (SI). The A-form dsRNA
was built using ASSEMBLE\cite{jossi+10bionf} and then solvated with
$\sim$3600 water molecules, 20 Na$^{+}$ and 10 Cl$^{-}$ ions, resulting
in an excess salt concentration of about 0.15 M. The added ions
remained mobile and diffused freely during the simulations. After minimization
and thermalization, each \textcolor{black}{system} \textcolor{black}{(or
intermediate)} was then evolved for 30 ns in the isothermal-isobaric
ensemble (300 K, 1 atm)\cite{buss+07jcp,parr-rham81jap} using the Amber99
force field\cite{wang+00jcc} and TIP3P water.\cite{jorg+83jcp} Preliminary
calculations carried out using the recent refinement of the Amber99
force field (parmbsc0)\cite{pere+07bj} have shown quantitatively
similar results in the reconstructed free-energy profiles. This is
probably due to the limited involvement of the refined $\alpha$ and
$\gamma$ dihedrals during the unbinding trajectories. Long-range
electrostatic interactions were calculated with the particle mesh
Ewald method.\cite{dard+93jcp} Plain MD and biased steered MD trajectories
were generated with GROMACS 4.0.7\cite{hess+08jctc} combined with
PLUMED 1.2.\cite{bono+09cpc}
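As a simple consistency check (not part of the simulation protocol), the quoted excess-salt concentration can be recovered from the box composition alone; the short Python sketch below performs this arithmetic, under the assumption that roughly 10 backbone phosphates are neutralized by 10 of the 20 Na$^{+}$ ions, so that the remaining 10 Na$^{+}$/Cl$^{-}$ pairs constitute the excess salt.
\begin{verbatim}
# Back-of-the-envelope check of the excess-salt concentration quoted above.
# Assumption: ~10 backbone phosphates are neutralized by 10 of the 20 Na+,
# leaving 10 Na+/Cl- pairs as excess salt.
n_water = 3600                 # water molecules in the box (from the text)
n_na, n_cl = 20, 10            # added ions (from the text)
excess_pairs = min(n_na, n_cl)
water_molarity = 55.5          # mol/L of pure water
conc = excess_pairs / n_water * water_molarity
print(f"excess salt concentration ~ {conc:.2f} M")   # ~0.15 M, as stated
\end{verbatim}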
\subsection{Pulling simulations}
The starting configurations for the pulling simulations were randomly
sampled from the corresponding 30 ns-long runs. The distance between
the center of mass of two stacked bases (\ref{fig:double-helix}B)
was used as pulled collective variable (CV), and thus harmonically
restrained to a constant-velocity moving point, starting at a position
equal to the equilibrium average of the CV and pulling it by 0.75
nm in 1.5 ns. This resulted in a biasing potential equal to $V_{\mbox{bias}}(q,t)=\frac{k}{2}\left[s(q(t))-\left(s_{0}+vt\right)\right]^{2}$
where $k=1200$~kcal/mol/nm$^{2}$ is the spring constant of the restraint,
$q$ are the microscopic coordinates, $s(q)$ is the CV value for
those coordinates, $s_{0}$ is the initial position of the restraint,
$v$ is the pulling velocity and $t$ is the time.
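To make the bookkeeping of the biasing protocol explicit, a minimal Python sketch is given below; the CV trace is a synthetic stand-in for the value measured along an actual trajectory, and the initial restraint position and sampling interval are arbitrary illustrative numbers. The sketch evaluates $V_{\mbox{bias}}(q,t)$ on a discrete time grid and accumulates the work $W(t)=\int_{0}^{t}\left(\partial V_{\mbox{bias}}/\partial t'\right)dt'$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

k  = 1200.0      # kcal/mol/nm^2, spring constant (from the text)
s0 = 0.45        # nm, initial restraint position (illustrative)
v  = 0.75 / 1.5  # nm/ns: 0.75 nm pulled in 1.5 ns (from the text)
dt = 1e-4        # ns, sampling interval of the CV (illustrative)

t = np.arange(0.0, 1.5, dt)
centre = s0 + v * t                                   # moving restraint, s0 + v t
lag = 0.05 * (1.0 - np.exp(-t / 0.1))                 # CV lagging behind the restraint
s_cv = centre - lag + 0.01 * rng.normal(size=t.size)  # stand-in for the measured CV

v_bias = 0.5 * k * (s_cv - centre) ** 2               # V_bias(q, t) along the run
# dV_bias/dt at fixed coordinates equals -k v (s - centre); its time integral is W(t)
w_of_t = np.cumsum(-k * v * (s_cv - centre) * dt)
print(f"mean bias energy: {v_bias.mean():.2f} kcal/mol")
print(f"work at the end of the pull: {w_of_t[-1]:.1f} kcal/mol")
\end{verbatim}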
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{Fig2}
\caption{\label{fig:double-helix}RNA double helix. A) Schematic view of the
combinations (red dotted boxes) of guanine (orange) and cytosine (blue) base-pairing
and -stacking investigated. B) Structural representation of the RNA
duplex in water; the distance between the center
of mass (green arrow) of the six-membered ring atoms (thick green
sticks) of two stacked bases was used as collective variable for the
pulling simulations.}
\end{figure}
The mechanical work done during the process was obtained by integrating
the force exerted on the system along the biased reaction coordinate.
After collecting about 400 realizations for each nucleobase-unbinding
process, the Jarzynski nonequilibrium work theorem\cite{jarz97prl}
was exploited to discount the dissipated work and to reconstruct \textcolor{black}{the
free-energy profile as a function of the restraint distance ($s_{0}+vt$).}
Although employing Jarzynski's equality in principle allows unbiased
free-energy differences to be estimated, its direct application is
limited by the number of collectable realizations as well as by the
complexity of the system.\cite{gore+2003pnas} A typical \textcolor{black}{free-energy
profile} is shown in \ref{fig:work}A as a function of the \textcolor{black}{restraint
distance}. The blue plot shows how, after a steep rise, a series of
alternating shoulders and local plateaus gradually brought the system
to higher free-energy states. Moreover, between distances ranging
from 1 to 1.2 nm (for the exemplified system), the profile was strongly
dominated by an outlier low-work realization and the difference in
reconstructing the free-energy profile with or without the outlier
was more than 3~kcal/mol (see SI).
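For reference, the exponential average entering Jarzynski's equality can be sketched in a few lines of Python; the work values below are synthetic placeholders rather than simulation data. The example also shows how a single rare low-work realization can dominate the average and appreciably shift the reconstructed free energy, which is the sensitivity discussed above.
\begin{verbatim}
import numpy as np

kT = 0.596   # kcal/mol at 300 K

def jarzynski_free_energy(work, kT=kT):
    """F = -kT ln < exp(-W/kT) >, evaluated with a shifted exponential."""
    w = np.asarray(work) / kT
    return -kT * (np.log(np.mean(np.exp(-(w - w.min())))) - w.min())

rng = np.random.default_rng(0)
work = rng.normal(loc=9.0, scale=1.5, size=400)    # dissipative bulk of realizations
work_with_outlier = np.append(work, 2.0)           # one rare low-work event

print(f"without outlier: {jarzynski_free_energy(work):.2f} kcal/mol")
print(f"with outlier:    {jarzynski_free_energy(work_with_outlier):.2f} kcal/mol")
\end{verbatim}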
\begin{figure}[!t]
\includegraphics[width=0.9\columnwidth]{Fig3}
\caption{\label{fig:work}\textcolor{black}{Typical nucleobase unbinding process,
obtained by pulling along the distance between the center of mass
of two stacked bases. A) Mechanical work (W) performed (gray plot)
and its exponential average (blue plot) as in Jarzynski equality,
plotted as a function of the restraint position. The distance is practical
for biasing the system but hardly allows the bound and unbound
states. B) Number of water molecules coordinating the unbinding nucleobase ($N_{\textrm{wat}}$) }
as a function
of time (main panel), and its probability distribution (right panel).
The coordination with water is a useful metric for identifying the
bound and unbound states.}
\end{figure}
Within this framework, there was no clean way to automatically detect
when the nucleobase had reached the unbound configuration, and it
was difficult to avoid systematic errors in the comparison of many
profiles with small differences, such as those we were interested in. To tackle
this problem, we decided to analyze our simulations in terms of CVs
different from the one used for the pulling. This \emph{a posteriori}
analysis could be done quickly, as a post-processing, and allowed
us to choose optimal CVs capable of describing in a user-independent
manner all the unbinding events.
\subsection{Reweighting scheme}
To project the free-energy landscape onto putative CVs we devised a
suitable reweighting scheme. Whereas analogous schemes have been proposed
to reweight other types of nonequilibrium simulations (e.g.~Ref.~\cite{bono+09jcc}),
a reweighting algorithm for steered MD has not been reported.\textcolor{black}{{}
For a different purpose, Hummer and Szabo developed a method which
enables the reconstruction of the free energy as a function of the
pulled coordinate.}\cite{humm-szab01pnas,gupt+11natphy}\textcolor{black}{{}
Here, we generalize this scheme so as to compute the free energy as
a function of any }\textcolor{black}{\emph{a posteriori}}\textcolor{black}{{}
chosen variable.}
Two different sorts of bias affect the steered MD trajectories and
need to be removed: (a) the nonequilibrium nature of the pulling
and (b) the presence of artificial harmonic restraints on the pulled
CV. The nonequilibrium bias is removed by noticing that the equilibrium
probability $P_{\mbox{eq}}(q,t)$, for a restraint statically kept
in its position at time $t$, can be obtained from the non-equilibrium
one $P_{\text{neq}}^{(i)}(q,t)$ as observed in the $i$-th trajectory
exploiting a relation first reported by Crooks:\cite{croo00pre,will-evan10prl}
\begin{equation}
P_{\mbox{eq}}(q,t)=\sum_{i}e^{-\beta\left[W_{i}(t)-F(t)\right]}P_{\text{neq}}^{(i)}(q,t)\,,\end{equation}
where $W_{i}(t)$ is the work done on the $i$-th trajectory up to
time $t$ and $\beta=1/k_{B}T$ is the inverse thermal energy. Here
the free energy $F(t)$ represents the normalization factor corresponding
to the instantaneous position of the moving restraint at time $t$.
The bias of the harmonic restraint can then be removed by applying
the weighted-histogram analysis method.\cite{kuma+92jcc} Whereas
weighted histograms are traditionally used to combine independent
simulations performed with different static biasing potentials, here
we used them to combine snapshots obtained at different stages of the
pulling, thus writing the unbiased equilibrium probability as\begin{equation}
P_{\mbox{u}}(q)\propto\frac{\int_{0}^{\tau}dtP_{\text{eq}}(q,t)}{\int_{0}^{\tau}dte^{-\beta\left[V(q,t)-F(t)\right]}}\,,\end{equation}
where $\tau$ is the length of each pulling simulation. Finally, the
free energy as a function of an arbitrary, \emph{a posteriori} chosen
CV $\bar{s}$ is defined as $F(\bar{s})=-k_{B}T\log\int dqP_{u}(q)\delta\left(\bar{s}-\bar{s}(q)\right)$.
The scheme described so far closely resembles the one used by Hummer
and Szabo.\cite{humm-szab01pnas} \textcolor{black}{However, it is
conceptually different, as here the free energy can be reconstructed
also with respect to a variable different from the pulled one. Thus,
it potentially enables the disclosure and characterization of otherwise hidden features of the investigated
process.} To further simplify the
data manipulation and to avoid building multidimensional histograms,
which would introduce a further dependence on technical choices such as the binning size,
we recast our approach by assigning a weight to each of the sampled configurations,
in the same spirit as in Ref.~\cite{soua-roux01cpc}. After simple
manipulation, the weight can be shown to be equal to\begin{equation}
w_{i}(t)\propto\frac{e^{-\beta\left[W_{i}(t)-F(t)\right]}}{\int_{0}^{\tau}dt'e^{-\beta\left[V\left(q_{i}(t),t'\right)-F(t')\right]}}\,.\label{eq:weights}\end{equation}
The normalization factor for each time, $F(t)$, is then computed
iteratively up to convergence as $e^{-\beta F(t)}=\sum_{i}\int_{0}^{\tau}dt'w_{i}(t')e^{-\beta V(q_{i}(t'),t)}$.
Usually a few tens of iterations are enough to converge.
In summary, in
our reweighting scheme we first compute the weight of each of the
configurations saved along the MD simulations from \ref{eq:weights},
then estimate free energies as a function of any, \emph{a posteriori}
chosen CV as \begin{equation}
F(\bar{s})=-k_{B}T\log\sum_{i,t}w_{i}(t)\,\delta\left(\bar{s}-\bar{s}(q_{i}(t))\right)\,.\label{eq:f-of-s}\end{equation}
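A compact numerical transcription of the whole procedure may help clarify the bookkeeping. The Python/NumPy sketch below is ours rather than the original implementation; it assumes that all trajectories share a uniform grid of saved snapshots and a common harmonic restraint protocol, computes the configuration weights of \ref{eq:weights}, iterates the normalization factors $F(t)$ to self-consistency, and finally histograms an arbitrary \emph{a posteriori} CV as in \ref{eq:f-of-s}.
\begin{verbatim}
import numpy as np

def reweight_steered_md(W, s_pull, s_new, times, k, s0, v, kT=0.596,
                        n_iter=50, bins=40):
    """Free energy along an a-posteriori CV from steered-MD snapshots (sketch).

    W      : (n_traj, n_t) work accumulated up to each saved snapshot [kcal/mol]
    s_pull : (n_traj, n_t) pulled-CV value at each snapshot [nm]
    s_new  : (n_traj, n_t) a-posteriori CV value at each snapshot
    times  : (n_t,)        times of the saved snapshots [ns]
    """
    beta = 1.0 / kT
    centre = s0 + v * times                     # restraint centre c(t)
    # V[i, j, m]: bias at restraint time t_m applied to snapshot j of traj i
    V = 0.5 * k * (s_pull[..., None] - centre[None, None, :]) ** 2

    F_t = np.zeros_like(times)                  # F(t), defined up to a constant
    dt = times[1] - times[0]
    for _ in range(n_iter):
        # w_ij ~ exp(-b[W_ij - F(t_j)]) / Int dt' exp(-b[V(q_ij, t') - F(t')])
        numer = np.exp(-beta * (W - F_t[None, :]))
        denom = dt * np.sum(np.exp(-beta * (V - F_t[None, None, :])), axis=2)
        w = numer / denom
        # exp(-b F(t_m)) = Sum_i Int dt' w_i(t') exp(-b V(q_i(t'), t_m))
        expF = dt * np.einsum('ij,ijm->m', w, np.exp(-beta * V))
        F_t = -kT * np.log(expF)
        F_t -= F_t[0]                           # fix the arbitrary offset

    w /= w.sum()                                # normalised configuration weights
    hist, edges = np.histogram(s_new.ravel(), bins=bins, weights=w.ravel())
    F_s = -kT * np.log(np.where(hist > 0, hist, np.nan))
    return 0.5 * (edges[1:] + edges[:-1]), F_s - np.nanmin(F_s)
\end{verbatim}
In practice, the $\sim$400 work traces, the pulled-CV traces, and, e.g., the water-coordination number of each saved snapshot would be passed as \texttt{W}, \texttt{s\_pull}, and \texttt{s\_new}, respectively.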
\section{Results}
Using the reweighting scheme outlined in the previous Section we were
able to investigate several order parameters. Since solvent interactions
are known to affect the conformational state of nucleic acids,\cite{cant-schi80book}
we considered the solvation of an \textcolor{black}{unbinding} base
as an effective metric for the progression of the \textcolor{black}{underlying
process}. This choice allowed \textcolor{black}{defining} the unbinding
in a manner which was totally independent of both the \textcolor{black}{terminus}
and the specific base, and, in our explicit-solvent simulation, could
be computed as the coordination among heavy atoms of the base and
water oxygens (\ref{fig:work}B). With this metric, the bound and unbound
states could be clearly and unambiguously identified and corresponded
to approximately harmonic basins. Sample free energies computed \textcolor{black}{as
a function of the number of water molecules coordinating the unpairing
base are shown in \ref{fig:fitting}. The free-energy
profile reconstructed along such a reaction coordinate is in no way
biased by the absolute number of coordinated water molecules, which
is merely used to distinguish one configuration from another and
to properly collect the corresponding weights along the simulation
as in \ref{eq:f-of-s}.} Then, to compute accurately the bound/unbound
free-energy differences, we fit the free-energy profiles with the combination
of two quadratic functions,\cite{humm-szab05acr,humm-szab10pnas}\begin{equation}
e^{-\beta F(\bar{s})}=\sigma_{1}^{-1}e^{-\frac{\left(\bar{s}-\bar{s}_{1}\right)^{2}}{2\sigma_{1}^{2}}-\beta F_{1}}+\sigma_{2}^{-1}e^{-\frac{\left(\bar{s}-\bar{s}_{2}\right)^{2}}{2\sigma_{2}^{2}}-\beta F_{2}}\label{eq:combination}\end{equation}
where $F_{1}$ and $F_{2}$ are the free energies of bound and unbound
states. Both when the two states were clearly resolvable (e.g.~\ref{fig:fitting},
left panels) and when the corresponding CV populations overlapped more strongly
(e.g.~\ref{fig:fitting}, right panels), the fitting procedure proved
robust and only weakly sensitive to outlier work realizations, thus enhancing
convergence of the results (e.g.~the difference in performing the
fit with or without the outlier low-work realization in \ref{fig:work}A
was less than 0.3 kcal/mol, see SI). Furthermore, this approach showed
very stable outcomes with respect to the choice of the details in
the definition of the solvation order parameter, and allowed us to compare
systematically several similar situations without incurring large
statistical errors or, worse, human biases in the interpretation of
the results.
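For concreteness, the fitting step can be reproduced with a few lines of Python/SciPy; in the sketch below the input profile is synthetic stand-in data, and the basin positions, widths, and temperature are illustrative assumptions rather than values taken from the simulations.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

kT = 0.596  # kcal/mol at 300 K

def double_well(s, s1, sig1, F1, s2, sig2, F2):
    """F(s) from the two-basin model of Eq. (combination)."""
    boltz = (np.exp(-(s - s1) ** 2 / (2.0 * sig1 ** 2) - F1 / kT) / sig1
             + np.exp(-(s - s2) ** 2 / (2.0 * sig2 ** 2) - F2 / kT) / sig2)
    return -kT * np.log(boltz)

# Synthetic stand-in for a reweighted profile F(N_wat): two basins plus noise.
rng = np.random.default_rng(1)
s_grid = np.linspace(1.0, 18.0, 60)
F_data = double_well(s_grid, 4.0, 1.2, 0.0, 13.0, 2.0, 4.5) + 0.15 * rng.normal(size=60)

p0 = [4.0, 1.0, 0.0, 13.0, 2.0, 4.0]        # rough guesses for the two basins
popt, _ = curve_fit(double_well, s_grid, F_data, p0=p0)
dF = popt[5] - popt[2]                       # unbound minus bound, F2 - F1
print(f"bound/unbound free-energy difference ~ {dF:.2f} kcal/mol")
\end{verbatim}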
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Fig4}
\caption{\label{fig:fitting}Reconstruction of the free-energy profile as a
function of the number of water molecules ($N_{\textrm{wat}}$) surrounding the unbinding
base. The left and right panels show typical free-energy profiles
(red dots with error bars) for the \textcolor{black}{unpairing of a 3\textasciiacute{}-strand guanine
and 5\textasciiacute{}-strand cytosine}, respectively. The quadratic potentials
obtained from the double-well fitting are shown in light blue color,
whereas their combination {[}\ref{eq:combination}{]} is in blue.
Underneath each panel, the \textcolor{black}{unnormalized} population
of the CV is also shown.}
\end{figure}
Having an optimized statistical-mechanics tool able to provide
free-energy differences in a flexible and automatic manner, we pursued
a systematic step-by-step approach to investigate the feasibility
of different \textcolor{black}{opening} paths for the four possible
combinations of G-C base stacking and Watson-Crick pairing. The general
procedure, as outlined in \ref{fig:Thermodynamics-cycle},
relied on two successive steps: first, the Watson-Crick base pair
was partially opened by the \textcolor{black}{unpairing} of the base
on either the 5\textasciiacute{} or 3\textasciiacute{} \textcolor{black}{terminus};
second, the resulting dangling intermediate, on the 3\textasciiacute{}
or 5\textasciiacute{} \textcolor{black}{terminus} respectively, was
unstacked and the base pair opening completed.
\begin{table}
\caption{\label{tab:Context-dependent-base-unbinding}Context-dependent base-unbinding
free energy (kcal/mol)
\textcolor{black}{
corresponding to the elementary steps shown in \ref{fig:Thermodynamics-cycle}}.
}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccc}
\multicolumn{1}{l}{Construct} & \multicolumn{4}{c}{Opening steps}\\
\hline
\multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$3b$} & \multicolumn{1}{c}{$5b$} & \multicolumn{1}{c}{$5a$} & \multicolumn{1}{c}{$3a$}\\
\hline
(1) $\textrm{\ensuremath{{\textrm{5\textasciiacute-CG..}\atop \textrm{3\textasciiacute-GC..}}}}$ & 7.6 & 0.9 & 2.6 & 5.6\\[1.5mm]
(2) $\textrm{\ensuremath{{\textrm{5\textasciiacute-CC..}\atop \textrm{3\textasciiacute-GG..}}}}$ & 8.9 & 0.3 & 3.9 & 4.5\\[1.5mm]
(3) $\textrm{\ensuremath{{\textrm{5\textasciiacute-GG..}\atop \textrm{3\textasciiacute-CC..}}}}$ & 7.0 & 2.2 & 4.9 & 3.9\\[1.5mm]
(4) $\textrm{\ensuremath{{\textrm{5\textasciiacute-GC..}\atop \textrm{3\textasciiacute-CG..}}}}$ & 7.4 & 2.0 & 5.7 & 2.8\\
\hline
\end{tabular*}
\end{table}
The relative stability of putative intermediates involved in the opening
of a base pair was estimated from the individual base-unbinding free
energies (\ref{tab:Context-dependent-base-unbinding} and \ref{fig:Thermodynamics-cycle}).
For all the considered combinations,
the difference in base-pair-opening free energy computed by biasing the
system along paths (a) and (b) in \ref{fig:Thermodynamics-cycle} was lower than 1~kcal/mol.
For the sake of clarity, it should be recalled that a finite number of
unidirectional pulling simulations performed within a Jarzynski-like
scheme are known to provide overestimates of absolute free-energy
differences.\cite{gore+2003pnas} However, highly accurate estimates
of unbinding constants were not needed to characterize the \textcolor{black}{strand
separation} mechanism and the free-energy differences we estimated
were exploited as a quantitative tool to assess the relative stability
of different configurations.
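The internal consistency of the cycle can be checked directly from the numbers in \ref{tab:Context-dependent-base-unbinding}: summing the two steps of path (a) (5a followed by 3a) and of path (b) (3b followed by 5b) for each construct reproduces the sub-1~kcal/mol agreement quoted above. A short sketch performing this arithmetic is given below (Python; the pairing of table columns into paths follows our reading of the cycle).
\begin{verbatim}
# Unbinding free energies (kcal/mol) from Table 1, keyed by construct number;
# column order follows the table: (3b, 5b, 5a, 3a).
table = {
    1: (7.6, 0.9, 2.6, 5.6),
    2: (8.9, 0.3, 3.9, 4.5),
    3: (7.0, 2.2, 4.9, 3.9),
    4: (7.4, 2.0, 5.7, 2.8),
}
for construct, (g3b, g5b, g5a, g3a) in table.items():
    path_a = g5a + g3a   # unpair the 5'-base, then unstack the 3'-dangling end
    path_b = g3b + g5b   # unpair the 3'-base, then unstack the 5'-dangling end
    print(f"construct {construct}: path a = {path_a:.1f}, path b = {path_b:.1f}, "
          f"|difference| = {abs(path_a - path_b):.1f} kcal/mol")
# All differences come out below 1 kcal/mol, as stated in the text.
\end{verbatim}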
\section{Comparison with experiments}
Below we discuss the results of the first and second unbinding steps
(\ref{fig:Thermodynamics-cycle}), and compare them with crystal-structure
conformer distributions,\cite{moha+2009jpcb,moha+2010jacs} relative
population of stacked/unstacked bases detected by femtosecond time-resolved
fluorescence spectroscopy,\cite{Liu+08bioc} and thermodynamic data
based on dsRNA melting experiments.\cite{sugi+87bioc,turn+88arbb,bloo+00book}
The consistency with experimental observations and the capability
of our simulations to complement those results are highlighted.
The most general outcome arising from the comparison of free-energy
differences is that the paired base on the 5\textasciiacute{} \textcolor{black}{terminus}
always interacted more weakly than the complementary base on the 3\textasciiacute{}
\textcolor{black}{terminus} (steps 5a, 3b in \ref{tab:Context-dependent-base-unbinding}
and \ref{fig:Thermodynamics-cycle}). This could be directly related
to the A-form helical geometry of RNA in which the bases at the 5\textasciiacute{}
end of a ss-ds junction are less buried into the neighboring environment
and expose a wider portion of their surface to water molecules, thus
facilitating fraying events. The different stability of the nucleobase
on the 5\textasciiacute{} \textcolor{black}{terminus} can be reflected
in the probability of observing a certain type of blunt closing base
pair at ss-ds junctions. In this context, the strongest interaction
was estimated for the 5\textasciiacute{}-guanine in $\textrm{\ensuremath{{\textrm{5\textasciiacute-GC..}\atop \textrm{3\textasciiacute-CG..}}}}$(construct
\textbf{4}), which was nevertheless $\sim$1.7 kcal/mol weaker than that of the complementary
cytosine on the 3\textasciiacute{}-\textcolor{black}{terminus}. Accordingly,
the combination $\textrm{\ensuremath{{\textrm{5\textasciiacute-GC..}\atop \textrm{3\textasciiacute-CG..}}}}$(construct
\textbf{4}) is the most abundant closing base-pair pattern observed
at ss-ds junctions in large rRNA crystal structures.\cite{moha+2009jpcb}
It can be further noticed that among the dangling ends (steps 5b and
3a in \ref{tab:Context-dependent-base-unbinding} and \ref{fig:Thermodynamics-cycle})
the most stable ones are those on the 3\textasciiacute{} \textcolor{black}{terminus},
consistent with ultrafast spectroscopy experiments which have detected
a large subpopulation of stacked conformers for a 3\textasciiacute{}-dangling
fluorescent purine probe, while only a relatively small one for a 5\textasciiacute{}-dangling
purine probe.\cite{Liu+08bioc} In particular, we found the most stable
3\textasciiacute{}-dangling end in construct \textbf{1} ($\textrm{\ensuremath{{\textrm{5\textasciiacute-}{}^{\textrm{C}}\textrm{G..}\atop \textrm{3\textasciiacute-}\textrm{GC..}}}}$)\textbf{,}
which has also been counted as the most common dangling end pattern
in rRNA crystal structures.\cite{moha+2009jpcb} Further agreement
can be found considering dsRNA optical melting experiments which have
shown that single-nucleotides overhanging at 3\textasciiacute{}-ends
of \textcolor{black}{an} RNA helix increase the stability of the duplex
in a sequence-dependent manner. Notably, such a stabilization has
been interpreted as the capability of the 3\textasciiacute{} dangling
ends to stack over the hydrogen bonds of the closing base pair protecting
them from water exchange.\cite{isak-chat05bioc} Those 3\textasciiacute{}-dangling
bases which are more likely stacked would thus provide a larger \textcolor{black}{contribution}
to duplex stabilization. In this light, the trend that we observed
in the \textcolor{black}{unstacking} energies of the four dangling constructs
($\textrm{\ensuremath{{\textrm{5\textasciiacute-}{}^{\textrm{C}}\textrm{G..}\atop \textrm{3\textasciiacute-}\textrm{GC..}}}}$>$\textrm{\ensuremath{{\textrm{5\textasciiacute-}{}^{\textrm{C}}\textrm{C..}\atop \textrm{3\textasciiacute-}\textrm{GG..}}}}$>$\textrm{\ensuremath{{\textrm{5\textasciiacute-}{}^{\textrm{G}}\textrm{G..}\atop \textrm{3\textasciiacute-}\textrm{CC..}}}}$>$\textrm{\ensuremath{{\textrm{5\textasciiacute-}{}^{\textrm{G}}\textrm{C..}\atop \textrm{3\textasciiacute-}\textrm{CG..}}}}$)
is in agreement with duplex stabilization observed in dsRNA melting
experiments.\cite{sugi+87bioc,turn+88arbb,bloo+00book} \textcolor{black}{It
should be remarked that the duplex stabilization induced by 5\textasciiacute{}-dangling
ends might not reflect the stacking energy of the dangling end itself
because of its small overlap with the hydrogen bonds of the closing
base pair. Further discussion on the comparison of computed and experimental
dangling-end stabilities can be found in the SI.}
Summarizing, the unbinding of the base on the \textcolor{black}{5\textasciiacute{}-strand} was, in
all the considered cases, favored over the unbinding of the complementary
\textcolor{black}{3\textasciiacute{}-strand} base. Whereas the relative \textcolor{black}{probability}
of 3\textasciiacute{}- and 5\textasciiacute{}-unbinding events
can be modulated by the sequence, the general trend remains unchanged.
From a structural standpoint, the \textcolor{black}{unpairing} could
proceed through two qualitatively different paths: one in which the
twisting and breaking of Watson-Crick hydrogen bonds occurred before
the \textcolor{black}{rupture of stacking interactions}; the other,
in which the unbinding followed the concerted rupture of both hydrogen
bonds and stacking interactions with, in some cases, the unbinding
base stacking over the dangling end of the opposite strand (\ref{fig:Snapshots}).
Similar unbinding geometries have also been described in other studies.\cite{poho+90ijsa,haga+03pnas}
In our simulations, these intermediate states occurred with a context-dependent
frequency. For instance, along the unbinding pathway the \textcolor{black}{5\textasciiacute{}-terminal} guanine
in $\textrm{\ensuremath{{\textrm{5\textasciiacute-GC..}\atop \textrm{3\textasciiacute-CG..}}}}$
(construct \textbf{4}) had a $\sim$15\% probability of stacking upon
the 3\textasciiacute{}-dangling cytosine of the opposite strand.
Interestingly, this probability dropped to $\sim$3\% in $\textrm{\ensuremath{{\textrm{5\textasciiacute-GG..}\atop \textrm{3\textasciiacute-CC..}}}(construct \textbf{3})}$.
Such an inter-strand stacking pattern (\ref{fig:Snapshots}, panel
$t_{2}$), exchanging with the conventional stacking of a paired base,
could account for the delayed quenching of fluorescence detected by
ultrafast fluorescence spectroscopy for a construct similar to $\textrm{\ensuremath{{\textrm{5\textasciiacute-GC..}\atop \textrm{3\textasciiacute-CG..}}}}$
(construct \textbf{4}).\cite{Liu+08bioc}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Fig5}
\caption{\label{fig:Snapshots}Snapshots sampled from the opening of a base
pair. The unbinding base (here a \textcolor{black}{5\textasciiacute{}-terminal} guanine in construct
\textbf{4}) \textcolor{black}{could transiently} stack over the dangling
end of the opposite strand ($t{}_{2}$).}
\end{figure}
As a final piece of experimental evidence corroborating our results, Xia
and co-workers\cite{Liu+08bioc} have reported that the dynamic behavior
of a 3\textasciiacute{}-terminal purine is not affected by the presence
of the opposite complementary base. Vice versa, the conformational
dynamics of a 5\textasciiacute{}-terminal purine is drastically
influenced by the presence of an opposite 3\textasciiacute{}-\textcolor{black}{terminal} pyrimidine
which would be likely stacked and potentially able to shift the population
of the \textcolor{black}{complementary 5\textasciiacute{}-terminal} base towards a paired and stacked
ensemble. Consistent with our systematic study, these data depict
the formation of a stable base pair as generally driven by the stacking
of the \textcolor{black}{3\textasciiacute{}-terminal base}, and then
by the energy gained by the system from both the stacking of the \textcolor{black}{5\textasciiacute{}-terminal
base} and Watson-Crick hydrogen-bonds.
\section{Biological implications}
\begin{figure*}
\includegraphics[width=1\textwidth]{Fig6}
\textcolor{black}{
\caption{
\label{fig:Helicase}
Model for RNA unwinding catalyzed by NS3 helicase.
A) The single-stranded 3\textasciiacute{} terminus
is loaded into the tracking tunnel.
B) The 5\textasciiacute{} terminus is mechanically displaced
by the helix opener.
C) The helicase proceeds
with 3\textasciiacute{} to 5\textasciiacute{} directionality,
displacing the 5\textasciiacute{}-strand.
}
}
\end{figure*}
The motion picture of
\textcolor{black}{duplex separation}
emerging from our simulations complements the helix propagation
model based on the analysis of static 3D structures,\cite{moha+2009jpcb}
augmenting it with dynamic details and energetic considerations.
Our computations link experimental data from different fields, creating
a common reading frame among them. Taken together, these results suggest
that RNA unwinding occurs by a stepwise process in which the probability
of unbinding of the base on the 5\textasciiacute{} \textcolor{black}{strand}
is significantly higher than that on the 3\textasciiacute{} \textcolor{black}{strand}.
What could be the biological implications of this finding?
When considering RNA as the substrate of molecular motors such
as helicases and other remodeling enzymes, the results can be
interpreted from an evolutionary point of view, helping to decipher
the basis of the evolutionary pressure responsible for
the unwinding mechanism catalyzed by RNA-duplex processing enzymes.
The RNA unwinding catalyzed by helicases is coupled to adenosine triphosphate
(ATP) binding and hydrolysis. The underlying mechanism would reasonably
minimize the use of ATP, especially in a low-nutrient environment.
Given that the intrinsic RNA dynamics implies that, at ss-ds junctions,
the unbinding of the 5\textasciiacute{}-strand
base is favored over the unbinding of the complementary 3\textasciiacute{}-strand base,
an enzymatic unwinding model would include a mechanism in which the separation
of the two complementary strands is accomplished by acting on the
weakest portion, i.e.~the 5\textasciiacute{}-strand
base. Thus, an ancestral enzyme using the 3\textasciiacute{}-strand
as a running track (with 3\textasciiacute{} to 5\textasciiacute{} directionality)
without perturbing its conformation, while displacing
the 5\textasciiacute{}-strand by mechanical exclusion,
could satisfy such energy-saving requirements.
The viral RNA helicase NS3 of hepatitis C virus, which is a prototypical
DEx(H/D) RNA helicase essential for viral replication, could satisfy
the above-mentioned requirements.\cite{jank-fair07cosb,pyle08arb}
NS3 is a potentially relevant drug target and has been structurally
and functionally characterized in various contexts.\cite{dumo+06nat,buet+07nsmb,gu-rice10pnas,rane+10jbc}
It unwinds duplexes by first loading onto a \textcolor{black}{single-stranded 3\textasciiacute{}-terminus}
region and then processively translocating with 3\textasciiacute{}
to 5\textasciiacute{} directionality along this loading strand,
thereby peeling off the complementary 5\textasciiacute{}-strand
bases. In particular, the 3\textasciiacute{}-strand
would migrate through a tracking ssRNA tunnel running within the protein
whereas the complementary 5\textasciiacute{}-strand
is forced towards the back of the protein by the {}``helix opener''
hairpin (\ref{fig:Helicase}).\cite{Luo+08emboj,sere+09jbc} In light of the free-energy
calculations discussed above, it could be suggested that this mechanism
has been optimized according to the intrinsic RNA unwinding dynamics
disclosed in this work.
Arguably, the processing machineries are being constantly shaped by
the evolutionary pressure of a plethora of (often unknown) factors
contributing to the optimization of metabolism in the whole living
system, rather than to the local biochemical process. As a consequence,
the preference for a well-defined RNA processing directionality cannot
be ubiquitously observed.\cite{jank-fair07cosb,pyle08arb}
We speculate that \emph{all} the biochemical processes involving RNA
in which directionality plays a role (e.g.~transcription) could be
related to the energetics of RNA double-helix formation and fraying
discussed in this Article.
\section{Conclusions}
This study lays the basis for the molecular-level understanding
of intrinsic RNA dynamics and its role in function. The asymmetric
behavior of the 3\textasciiacute{}- and 5\textasciiacute{}-strand
could be responsible for the directionality observed in RNA processing.
From a computational perspective, the approach we introduced can be
generalized to analyze any kind of (un)binding event. Indeed, it allowed
the free-energy landscape to be reconstructed along different reaction
coordinates and the unbinding energies to be easily computed in an
automatic and user-independent manner, therefore removing statistical
and human biases. We foresee the application of our approach to a
wider range of molecular systems, including the typical ligand-target
complexes encountered in drug discovery.\cite{jorg10nat}
\section{Acknowledgment}
We thank Francesco Di Palma, Rolando Hong and Vittorio Limongelli for critically reading
the manuscript and an anonymous referee for several useful suggestions. We acknowledge the CINECA Award N.~HP10BLIT9Z, 2011
for the availability of high performance computing resources and MIUR
grant ``FIRB - Futuro in Ricerca'' N.~RBFR102PY5 for funding. Developed
software is available on request.
\begin{suppinfo}
Details of methodology and computations. Unbinding free-energy profiles
for both terminal and non-terminal base pairs. \textcolor{black}{Further
discussion on the comparison of computed and experimental dangling-end
stabilities.}
\end{suppinfo}
\providecommand*\mcitethebibliography
{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{55}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Bloomfield et~al.(2000)Bloomfield, Crothers, and Tinoco]{bloo+00book}
Bloomfield,~V.~A.; Crothers,~D.~M.; Tinoco,~I.~J. \emph{Nucleic Acids:
Structures, Properties, and Functions}; University Science Books: Sausalito,
CA, 2000\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Haller et~al.(2011)Haller, Souli{{\`e}}re, and Micura]{hall+11acr}
Haller,~A.; Souli{{\`e}}re,~M.~F.; Micura,~R. \emph{Acc Chem Res}
\textbf{2011}, \emph{44}, 1339--1348\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rocak and Linder(2004)Rocak, and Linder]{roca-lind04nrmcb}
Rocak,~S.; Linder,~P. \emph{Nat. Rev. Mol. Cell Biol.} \textbf{2004}, \emph{5},
232--241\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jankowsky and Fairman(2007)Jankowsky, and Fairman]{jank-fair07cosb}
Jankowsky,~E.; Fairman,~M. \emph{Curr. Opin. Struct. Biol.} \textbf{2007},
\emph{17}, 316--324\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Pyle(2008)]{pyle08arb}
Pyle,~A.~M. \emph{Annual Review of Biophysics} \textbf{2008}, \emph{37},
317--336, PMID: 18573084\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Norberg and Nilsson(2002)Norberg, and Nilsson]{norb-nils02acr}
Norberg,~J.; Nilsson,~L. \emph{Acc. Chem. Res.} \textbf{2002}, \emph{35},
465--472\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhuang et~al.(2007)Zhuang, Jaeger, and Shea]{zhua+07nar}
Zhuang,~Z.; Jaeger,~L.; Shea,~J. \emph{Nucleic Acids Res.} \textbf{2007},
\emph{35}, 6995\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Li et~al.(2008)Li, Vieregg, and Tinoco~Jr]{li+08arb}
Li,~P.; Vieregg,~J.; Tinoco~Jr,~I. \emph{Annu. Rev. Biochem.} \textbf{2008},
\emph{77}, 77--100\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Al-Hashimi and Walter(2008)Al-Hashimi, and Walter]{alha-walt08cosb}
Al-Hashimi,~H.; Walter,~N. \emph{Curr. Opin. Struct. Biol.} \textbf{2008},
\emph{18}, 321--329\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Orozco et~al.(2008)Orozco, Noy, and P{\'e}rez]{oroz+08cosb}
Orozco,~M.; Noy,~A.; P{\'e}rez,~A. \emph{Curr. Opin. Struct. Biol.}
\textbf{2008}, \emph{18}, 185--193\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xia(2008)]{xia08cocb}
Xia,~T. \emph{Curr. Opin. Chem. Biol.} \textbf{2008}, \emph{12}, 604--611\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rinnenthal et~al.(2011)Rinnenthal, Buck, Ferner, Wacker, F{\"u}rtig,
and Schwalbe]{rinn+11acr}
Rinnenthal,~J.; Buck,~J.; Ferner,~J.; Wacker,~A.; F{\"u}rtig,~B.; Schwalbe,~H.
\emph{Acc Chem Res} \textbf{2011}, \emph{44}, 1292--1301\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[V{\'a}rnai and Lavery(2002)Várnai, and Lavery]{varn-lave2002jacs}
V{\'a}rnai,~P.; Lavery,~R. \emph{J. Am. Chem. Soc.} \textbf{2002}, \emph{124},
7272--7273\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hagan et~al.(2003)Hagan, Dinner, Chandler, and
Chakraborty]{haga+03pnas}
Hagan,~M.~F.; Dinner,~A.~R.; Chandler,~D.; Chakraborty,~A.~K. \emph{Proc. Natl.
Acad. Sci. U. S. A.} \textbf{2003}, \emph{100}, 13922--13927\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Giudice and Lavery(2003)Giudice, and Lavery]{giud-lave2003jacs}
Giudice,~E.; Lavery,~R. \emph{J. Am. Chem. Soc.} \textbf{2003}, \emph{125},
4998--4999\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mohan et~al.(2009)Mohan, Hsiao, VanDeusen, Gallagher, Krohn, Kalahar,
Wartell, and Williams]{moha+2009jpcb}
Mohan,~S.; Hsiao,~C.; VanDeusen,~H.; Gallagher,~R.; Krohn,~E.; Kalahar,~B.;
Wartell,~R.~M.; Williams,~L.~D. \emph{J. Phys. Chem. B} \textbf{2009},
\emph{113}, 2614--2623\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lee et~al.(2010)Lee, Gal, Frydman, and Varani]{lee+10pnas}
Lee,~M.-K.; Gal,~M.; Frydman,~L.; Varani,~G. \emph{Proc. Natl. Acad. Sci. U. S.
A.} \textbf{2010}, \emph{107}, 9192--9197\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhao and Xia(2009)Zhao, and Xia]{zhao-xia09methods}
Zhao,~L.; Xia,~T. \emph{Methods} \textbf{2009}, \emph{49}, 128--135\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Karplus and McCammon(2002)Karplus, and McCammon]{karp-mcca2002natsb}
Karplus,~M.; McCammon,~J.~A. \emph{Nat Struct Biol} \textbf{2002}, \emph{9},
646--652\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sotomayor and Schulten(2007)Sotomayor, and
Schulten]{soto-schu07science}
Sotomayor,~M.; Schulten,~K. \emph{Science} \textbf{2007}, \emph{316},
1144--1148\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jossinet et~al.(2010)Jossinet, Ludwig, and Westhof]{jossi+10bionf}
Jossinet,~F.; Ludwig,~T.~E.; Westhof,~E. \emph{Bioinformatics} \textbf{2010},
\emph{26}, 2057--2059\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bussi et~al.(2007)Bussi, Donadio, and Parrinello]{buss+07jcp}
Bussi,~G.; Donadio,~D.; Parrinello,~M. \emph{J. Chem. Phys.} \textbf{2007},
\emph{126}, 014101\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Parrinello and Rahman(1981)Parrinello, and Rahman]{parr-rham81jap}
Parrinello,~M.; Rahman,~A. \emph{J. Appl. Phys.} \textbf{1981}, \emph{52},
7182--7190\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang et~al.(2000)Wang, Cieplak, and Kollman]{wang+00jcc}
Wang,~J.; Cieplak,~P.; Kollman,~P.~A. \emph{J. Comput. Chem.} \textbf{2000},
\emph{21}, 1049--1074\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jorgensen et~al.(1983)Jorgensen, Chandrasekhar, Madura, Impey, and
Klein]{jorg+83jcp}
Jorgensen,~W.~L.; Chandrasekhar,~J.; Madura,~J.~F.; Impey,~R.~W.; Klein,~M.~L.
\emph{J. Chem. Phys.} \textbf{1983}, \emph{79}, 926\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[P{\'e}rez et~al.(2007)P{\'e}rez, March{\'a}n, Svozil, Sponer,
Cheatham~III, Laughton, and Orozco]{pere+07bj}
P{\'e}rez,~A.; March{\'a}n,~I.; Svozil,~D.; Sponer,~J.; Cheatham~III,~T.;
Laughton,~C.; Orozco,~M. \emph{Biophys. J.} \textbf{2007}, \emph{92},
3817--3829\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Darden et~al.(1993)Darden, York, and Pedersen]{dard+93jcp}
Darden,~T.; York,~D.; Pedersen,~L. \emph{J. Chem. Phys.} \textbf{1993},
\emph{98}, 10089--10092\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hess et~al.(2008)Hess, Kutzner, Van Der~Spoel, and
Lindahl]{hess+08jctc}
Hess,~B.; Kutzner,~C.; Van Der~Spoel,~D.; Lindahl,~E. \emph{J. Chem. Theory
Comput.} \textbf{2008}, \emph{4}, 435--447\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bonomi et~al.(2009)Bonomi, Branduardi, Bussi, Camilloni, Provasi,
Raiteri, Donadio, Marinelli, Pietrucci, Broglia, and Parrinello]{bono+09cpc}
Bonomi,~M.; Branduardi,~D.; Bussi,~G.; Camilloni,~C.; Provasi,~D.; Raiteri,~P.;
Donadio,~D.; Marinelli,~F.; Pietrucci,~F.; Broglia,~R.; Parrinello,~M.
\emph{Comput. Phys. Commun.} \textbf{2009}, \emph{180}, 1961--1972\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jarzynski(1997)]{jarz97prl}
Jarzynski,~C. \emph{Phys. Rev. Lett.} \textbf{1997}, \emph{78}, 2690\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gore et~al.(2003)Gore, Ritort, and Bustamante]{gore+2003pnas}
Gore,~J.; Ritort,~F.; Bustamante,~C. \emph{Proc. Natl. Acad. Sci. U. S. A.}
\textbf{2003}, \emph{100}, 12564--12569\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bonomi et~al.(2009)Bonomi, Barducci, and Parrinello]{bono+09jcc}
Bonomi,~M.; Barducci,~A.; Parrinello,~M. \emph{J. Comput. Chem.} \textbf{2009},
\emph{30}, 1615--1621\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hummer and Szabo(2001)Hummer, and Szabo]{humm-szab01pnas}
Hummer,~G.; Szabo,~A. \emph{Proc. Natl. Acad. Sci. U. S. A.} \textbf{2001},
\emph{98}, 3658--3661\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gupta et~al.(2011)Gupta, Vincent, Neupane, Yu, Wang, and
Woodside]{gupt+11natphy}
Gupta,~A.; Vincent,~A.; Neupane,~K.; Yu,~H.; Wang,~F.; Woodside,~M.
\emph{Nature Physics} \textbf{2011}, \emph{7}, 631--634\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Crooks(2000)]{croo00pre}
Crooks,~G.~E. \emph{Phys. Rev. E} \textbf{2000}, \emph{61}, 2361--2366\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Williams and Evans(2010)Williams, and Evans]{will-evan10prl}
Williams,~S.~R.; Evans,~D.~J. \emph{Phys. Rev. Lett.} \textbf{2010},
\emph{105}, 110601\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kumar et~al.(1992)Kumar, Rosenberg, Bouzida, Swendsen, and
Kollman]{kuma+92jcc}
Kumar,~S.; Rosenberg,~J.~M.; Bouzida,~D.; Swendsen,~R.~H.; Kollman,~P.~A.
\emph{J. Comput. Chem.} \textbf{1992}, \emph{13}, 1011--1021\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Souaille and Roux(2001)Souaille, and Roux]{soua-roux01cpc}
Souaille,~M.; Roux,~B. \emph{Comput. Phys. Commun.} \textbf{2001}, \emph{135},
40--57\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Cantor and Schimmel(1980)Cantor, and Schimmel]{cant-schi80book}
Cantor,~R.; Schimmel,~P. \emph{Biophysical Chemistry}; W. H. Freeman: San
Francisco, 1980\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hummer and Szabo(2005)Hummer, and Szabo]{humm-szab05acr}
Hummer,~G.; Szabo,~A. \emph{Acc. Chem. Res.} \textbf{2005}, \emph{38},
504--513\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hummer and Szabo(2010)Hummer, and Szabo]{humm-szab10pnas}
Hummer,~G.; Szabo,~A. \emph{Proc. Natl. Acad. Sci. U. S. A.} \textbf{2010},
\emph{107}, 21441--21446\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mohan et~al.(2010)Mohan, Hsiao, Bowman, Wartell, and
Williams]{moha+2010jacs}
Mohan,~S.; Hsiao,~C.; Bowman,~J.~C.; Wartell,~R.; Williams,~L.~D. \emph{J. Am.
Chem. Soc.} \textbf{2010}, \emph{132}, 12679--12689\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu et~al.(2008)Liu, Zhao, and Xia]{Liu+08bioc}
Liu,~J.~D.; Zhao,~L.; Xia,~T. \emph{Biochemistry} \textbf{2008}, \emph{47},
5962--5975\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sugimoto et~al.(1987)Sugimoto, Kierzek, and Turner]{sugi+87bioc}
Sugimoto,~N.; Kierzek,~R.; Turner,~D.~H. \emph{Biochemistry} \textbf{1987},
\emph{26}, 4554--4558\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Turner et~al.(1988)Turner, Sugimoto, and Freier]{turn+88arbb}
Turner,~D.~H.; Sugimoto,~N.; Freier,~S.~M. \emph{Annual Review of Biophysics
and Biophysical Chemistry} \textbf{1988}, \emph{17}, 167--192\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Isaksson and Chattopadhyaya(2005)Isaksson, and
Chattopadhyaya]{isak-chat05bioc}
Isaksson,~J.; Chattopadhyaya,~J. \emph{Biochemistry} \textbf{2005}, \emph{44},
5390--5401\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Pohorille et~al.({1990})Pohorille, Ross, and Tinoco]{poho+90ijsa}
Pohorille,~A.; Ross,~W.; Tinoco,~I. \emph{International Journal of
Supercomputer Applications and High Performance Computing} \textbf{1990},
\emph{4}, 81--96\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dumont et~al.(2006)Dumont, Cheng, Serebrov, Beran, {Tinoco Jr}, Pyle,
and Bustamante]{dumo+06nat}
Dumont,~S.; Cheng,~W.; Serebrov,~V.; Beran,~R.~K.; {Tinoco Jr},~I.;
Pyle,~A.~M.; Bustamante,~C. \emph{Nature} \textbf{2006}, \emph{439},
105\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[B{\"u}ttner et~al.(2007)B{\"u}ttner, Nehring, and Hopfner]{buet+07nsmb}
B{\"u}ttner,~K.; Nehring,~S.; Hopfner,~K.-P. \emph{Nat. Struct. Mol. Biol.}
\textbf{2007}, \emph{14}, 647--652\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gu and Rice(2010)Gu, and Rice]{gu-rice10pnas}
Gu,~M.; Rice,~C. \emph{Proc. Natl. Acad. Sci. U. S. A.} \textbf{2010},
\emph{107}, 521\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Raney et~al.(2010)Raney, Sharma, Moustafa, and Cameron]{rane+10jbc}
Raney,~K.~D.; Sharma,~S.~D.; Moustafa,~I.~M.; Cameron,~C.~E. \emph{J. Biol.
Chem.} \textbf{2010}, \emph{285}, 22725--22731\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Luo et~al.(2008)Luo, Xu, Watson, Scherer-Becker, Sampath, Jahnke,
Yeong, Wang, Lim, Strongin, Vasudevan, and Lescar]{Luo+08emboj}
Luo,~D.; Xu,~T.; Watson,~R.~P.; Scherer-Becker,~D.; Sampath,~A.; Jahnke,~W.;
Yeong,~S.~S.; Wang,~C.~H.; Lim,~S.~P.; Strongin,~A.; Vasudevan,~S.~G.;
Lescar,~J. \emph{EMBO J.} \textbf{2008}, \emph{27}, 3209--3219\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Serebrov et~al.(2009)Serebrov, Beran, and Pyle]{sere+09jbc}
Serebrov,~V.; Beran,~R. K.~F.; Pyle,~A.~M. \emph{J. Biol. Chem.} \textbf{2009},
\emph{284}, 2512--2521\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jorgensen(2010)]{jorg10nat}
Jorgensen,~W.~L. \emph{Nature} \textbf{2010}, \emph{466}, 42--43\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Introduction}
Non-orthogonal multiple access (NOMA) has been recognized as a promising multiple access technique for the fifth-generation (5G) wireless networks due to its high spectral efficiency and user fairness \cite{Ding2015b}.
Compared to conventional orthogonal multiple access (OMA), NOMA transmission allows multiple users to share the same frequency resource by exploiting power-domain multiplexing and performing successive interference cancellation (SIC) at the receiver side. It has been shown that NOMA offers considerable performance gains over OMA in previous works \cite{Zhang2016SSR,Liu2016,CN:NOMA_Yan_Sun,Dai2015,Saito2013,Benjebbour2013,Kim2013}. In particular, resource allocation for NOMA has received significant attention since it is critical to system performance. In \cite{Saito2013,Benjebbour2013}, the authors evaluated the system-level performance of NOMA systems. The authors in \cite{Kim2013} studied a minimum total transmission power beamforming problem. In \cite{Sun2015b}, the optimal resource allocation for multiple-input multiple-output (MIMO) NOMA systems to maximize the instantaneous sum-rate was proposed. However, existing works \cite{Saito2013,Benjebbour2013,Kim2013,Sun2015b} on resource allocation of NOMA have relied on the assumption of perfect channel state information at the transmitter (CSIT), which is difficult to obtain in practice.
Recently, the notion of imperfect CSIT in NOMA systems for resource allocation algorithm design has been pursued in \cite{Ding2014,Yang2016,Dingtobepublished,Sun2015,Timotheou2015} under various system performance metrics. In \cite{Ding2014}, for a fixed power allocation, the outage probability and ergodic sum-rate of NOMA under statistical CSIT were investigated in a cellular downlink scenario with randomly deployed users. In \cite{Yang2016}, the authors analyzed the performance degradation on these two system performance metrics due to partial CSIT. The authors in \cite{Dingtobepublished} investigated the impact of user pairing on the sum-rate of NOMA for a fixed power allocation scheme. Power allocation was proposed for the maximization of the ergodic capacity and the minimization of the maximum outage probability in \cite{Sun2015} and \cite{Timotheou2015} under statistical CSIT, respectively.
Apart from those performance metrics mentioned above, power efficiency is also important due to the rising energy costs and green communication concerns. In \cite{Sun2015a}, the authors solved the energy efficiency optimization problem for single-carrier NOMA systems. Yet, if multicarrier NOMA (MC-NOMA) systems are considered, the result from \cite{Sun2015a} may no longer be applicable. In \cite{Di2015,Lei2015,Liu2015}, various power allocation and user scheduling algorithms were proposed to maximize the sum-rate of MC-NOMA systems. However, the results from \cite{Di2015,Lei2015,Liu2015} were based on the perfect CSIT assumption, which may not be available in practice, especially for MC-NOMA systems overloaded with an exceedingly large number of users. In addition, the aforementioned works have not taken into account the heterogeneous quality of service (QoS) requirements, which play an important role in 5G networks, in particular for small cells and massive access. In fact, power-efficient resource allocation based on statistical CSIT for MC-NOMA systems has not been reported in the literature so far.
In this paper, we focus on the power-efficient resource allocation for MC-NOMA systems with QoS constraints under statistical CSIT. Due to the absence of perfect CSIT, a SIC policy taking QoS requirements into consideration is proposed, where the BS only allows one user to perform SIC. Based on the adopted SIC policy, we formulate the resource allocation problem for MC-NOMA systems to minimize the total transmit power. Since the optimization problem is a mixed combinatorial non-convex problem, a suboptimal solution is proposed to solve the power allocation and the user scheduling problems separately. For a given user scheduling policy, the multicarrier power allocation problem is simplified to a per-subcarrier power allocation problem, which facilitates the optimal power allocation design. Interestingly, based on the derived power allocation solution, an explicit metric for the SIC decoding order associated with the level of QoS stringency is obtained, analogous to the channel-gain-based SIC decoding order for NOMA with perfect CSIT \cite{Saito2013,Benjebbour2013}. In addition, we have quantified the performance gain of NOMA over OMA in terms of power reduction and shown that the gain increases when the multiplexed users have more distinct QoS stringency levels. For the user scheduling problem, a low computational complexity suboptimal scheduling algorithm based on agglomerative hierarchical clustering is proposed, which can achieve close-to-optimal performance. Simulation results show that the proposed scheme significantly increases the power efficiency compared to a conventional OMA scheme.
The rest of the paper is organized as follows. Section II presents the system model and discusses the SIC policy with statistical CSIT. In Section III, we formulate the resource allocation as a non-convex optimization problem with QoS constraints. Section IV and Section V present the solution for power allocation and user scheduling, respectively. Simulation results are presented and analyzed in Section VI. Finally, Section VII concludes this paper.
Notation used in this paper is as follows. Boldface lower case letters denote vectors. $\mathbb{C}$ denotes the set of complex numbers; $\mathbb{R}^{M\times 1}$ denotes the set of all $M\times 1$ vectors with real entries; $\mathbb{Z}^{M\times 1}$ denotes the set of all $M\times 1$ vectors with integer entries; $\abs{\cdot}$ denotes the absolute value of a complex scalar; $\Pr \left\{ \cdot \right\}$ denotes the probability of a random event. The circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$ is denoted by ${\cal CN}(\mu,\sigma^2)$; the uniform distribution in the interval $[a, b]$ is denoted by $U[a,b]$; and $\sim$ stands for ``distributed as''.
\begin{figure}[t]
\centering
\includegraphics[width=2.5in]{NOMA_model.eps}
\caption{A multicarrier downlink NOMA system where two users are multiplexed on one subcarrier in NOMA with perfect CSIT \cite{Benjebbour2013a}. User $1$, who has the better channel quality, performs SIC to decode and remove the signal of user $2$ before decoding its desired signal. The power allocated to user $2$ is higher than that allocated to user $1$.}\label{NOMA_model}
\end{figure}
\section{System Model}
In this section, we present the system model and the adopted assumptions for the considered MC-NOMA system.
\subsection{Multicarrier NOMA System}
A multicarrier downlink NOMA system with one base station (BS) and $K$ downlink users is considered, cf. Figure \ref{NOMA_model}. All transceivers are equipped with a single antenna. $M$ subcarriers are provided to serve the $K$ users. In this paper, to provide fairness in resource allocation, we assume that each user is allocated exactly $L$ subcarriers. In addition, we assume that each of the $M$ orthogonal subcarriers is allocated to at most two users to reduce the computational complexity and delay incurred at the receiver side due to SIC decoding, i.e., $KL \le 2M$. According to the NOMA protocol \cite{Benjebbour2013}, on subcarrier $m \in \left\{ {1, \ldots ,M} \right\}$, the BS transmits the messages of users $i$ and $j$, i.e., $s^m_i$ and $s^m_j$, with transmit powers $p^m_i$ and $p^m_j$, $i,j \in \left\{ {1, \ldots ,K} \right\}$, respectively. The corresponding transmitted signal is represented by
\begin{equation}\label{SystemModelTx}
x^m = \sqrt {{p^m_i}} {s^m_i} + \sqrt {{p^m_j}} {s^m_j}.
\end{equation}
The received signal at user $i \in \left\{ {1, \ldots ,K} \right\}$ on subcarrier $m$ is given by
\begin{equation}\label{SystemModelRx}
y_i^m = h_i^m{x^m} + {z_i^m},
\end{equation}
where $z_i^m\sim{\cal CN}(0,\sigma^2)$ denotes the additive white Gaussian noise (AWGN) on subcarrier $m$ at user $i$. Variable $h_i^m \in \mathbb{C}$ represents the channel coefficient including the joint effect of large-scale and small-scale fading, i.e., $h_i^m = \frac{{g_i^m}}{{\sqrt {1 + d_i^\alpha } }}$ and $g_i^m \sim \mathcal{CN}(0,1)$, with $d_i$ denoting the distance between user $i$ and the BS and $\alpha$ denoting the path loss exponent. We assume that the small-scale fading is Rayleigh distributed and that the path loss information is known at the BS due to long-term measurement. The cumulative distribution function (CDF) of the channel gain of user $i$ on subcarrier $m$ is given by
\begin{equation}\label{ChannelGainCDF}
{F_{{{\left| {{h_i^m}} \right|}^2}}}\left( x \right) = 1 - {e^{ - \left( {1 + d_i^\alpha } \right)x}},\;x \ge 0.
\end{equation}
\subsection{Successive Interference Cancellation Policy}
NOMA exploits the power domain to perform multiple access \cite{Ding2014,Choi2015,Liu2016}. Based on the availability of CSIT, the BS performs user scheduling and power allocation. Besides, SIC is performed at some of the downlink users to mitigate multi-user interference, cf. Figure \ref{NOMA_model}. In the literature \cite{Saito2013,Benjebbour2013a,Kim2013,Sun2015b}, with perfect CSIT and without QoS considerations, the user with the better channel quality (\emph{strong user}) decodes and removes the message of the user with the worse channel quality (\emph{weak user}) before decoding its own, while the weak user directly decodes its own message by treating the signal of the strong user as noise. Furthermore, the BS allocates more power to the weak user to obtain fairness and facilitate the SIC process.
Unfortunately, without perfect CSIT, the BS cannot decide the SIC decoding order based on the ordered channel gain information. Similar to the case of NOMA with perfect CSIT, distance might be a criterion to define a strong or weak user. However, this criterion does not take the QoS requirements into consideration, which also affect the SIC decoding order and hence change the behavior of the power allocation. For example, if a user near the BS requires a lower outage probability, selecting this user to perform SIC needs more transmit power than selecting the other user, which is in contrast to the case of conventional NOMA without QoS requirements.
On the other hand, it can be shown that, for the case of NOMA with perfect CSIT and QoS requirements, having both users perform SIC requires more transmit power than selecting only the strong user to perform SIC. Intuitively, the allocated power of both users would have to be increased to cope with the interference in decoding the other user's message. Inspired by this fact, we assume that the BS only allows one user to perform SIC on each subcarrier. Specifically, the BS will select user $i$ to perform SIC on subcarrier $m$ if the power consumption based on this selection is lower than that of selecting user $j$ to perform SIC. Later in this paper, based on this assumption, an explicit SIC decoding order associated with the level of QoS stringency is derived.
In addition, given the total required target rate ${R}_j$ of user $j$, we split it equally among its $L$ allocated subcarriers, since only statistical CSIT is available at the BS. Therefore, the target rate of user $j$ on its allocated subcarrier $m$ is given by
\begin{equation}\label{AssignedRate}
\widetilde{R}_j^m = \frac{{{R_j}}}{L}
\end{equation}
and the corresponding target SINR is given by
\begin{equation}\label{AssignedSINR}
\widetilde{\gamma}_j^m = {2^{\widetilde{R}_j^m}} - 1.
\end{equation}
We assume that SIC at user $i$ on subcarrier $m$ is successful when the achievable rate for decoding the message of user $j$ is not smaller than the target rate of user $j$ on subcarrier $m$, i.e.,
\begin{equation}\label{SICSuccess}
R_{i \to j}^m \ge \widetilde{R}_j^m,
\end{equation}
where $R_{i \to j}^m$ denotes the achievable rate for user $i$ to decode the message of user $j$ on subcarrier $m$ and it is given by
\begin{equation}\label{SICRate}
R_{i \to j}^m = {\log _2}\left( {1 + \frac{{p_j^m{{\left| {h_i^m} \right|}^2}}}{{p_i^m{{\left| {h_i^m} \right|}^2} + \sigma^2}}} \right).
\end{equation}
\section{Problem Formulation}
In this section, we first define the QoS requirements and then formulate the power allocation and user scheduling problem for NOMA systems.
\subsection{Quality of Service}
QoS is usually defined by a target rate and a required outage probability. Given the target rate $\widetilde{R}_i^m$ for each user on each allocated subcarrier, the QoS required by user $i$ on subcarrier $m$ is given by the following outage probability constraint:
\begin{equation}\label{QoSConstraint}
{\mathrm{Pr}}\left\{ {{R_i^m} \ge {{\widetilde R}_i^m}} \right\} \ge 1 - {\delta _i^m},\;\forall i,
\end{equation}
with
\begin{equation}\label{AchievableRate}
R_i^m = \left\{ {\begin{array}{*{20}{l}}
R_{i,i}^m & \;\mathrm{if}\;{R_{i \to j}^m \ge \widetilde R_j^m},\\
R_{i,j}^m & \;\mathrm{otherwise},
\end{array}} \right.
\end{equation}
where $R_i^m$ and $\delta_i^m$ denote the achievable rate and the required outage probability of user $i$ on subcarrier $m$, respectively. Note that the outage probability is defined on each subcarrier, which is commonly adopted in the literature to simplify the resource allocation design \cite{Zhu2009,Kwan_AF_2010}. Variables $R_{i,i}^m$ and $R_{i,j}^m$ denote the achievable rates for user $i$ on subcarrier $m$ with and without SIC, respectively, and they are given by
\begin{eqnarray}
&& R_{i,i}^m = {\log _2}\left( {1 + \frac{{p_i^m{{\left| {h_i^m} \right|}^2}}}{{\sigma^2}}} \right)\;\mathrm{and}\label{RatesSIC1}\\
&& R_{i,j}^m = {\log _2}\left( {1 + \frac{{p_i^m{{\left| {h_i^m} \right|}^2}}}{{p_j^m{{\left| {h_i^m} \right|}^2} + \sigma^2}}} \right),\label{RatesSIC2}
\end{eqnarray}
respectively.
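As a purely illustrative check of the outage definition in \eqref{QoSConstraint} and \eqref{AchievableRate}, the following Python snippet (a minimal sketch, not part of the proposed scheme) estimates the outage probability of the SIC-performing user by Monte-Carlo sampling of the Rayleigh-faded channel gain with the CDF \eqref{ChannelGainCDF}; all numerical values (normalized distance, powers, noise power, and target rates) are placeholder assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters (illustration only, normalized units).
d1, alpha, sigma2 = 1.0, 3.6, 1.0     # distance, path loss exponent, noise
R1_t, R2_t = 2.0, 1.0                 # target rates in bit/s/Hz
p1, p2 = 5.0, 40.0                    # candidate transmit powers

N = 200_000
# |h_1|^2 is exponential with rate (1 + d_1^alpha), cf. the CDF above.
h1_sq = rng.exponential(1.0 / (1.0 + d1**alpha), N)

# Rates at user 1: first decode user 2 (SIC attempt), then the own signal.
R_1to2 = np.log2(1.0 + p2 * h1_sq / (p1 * h1_sq + sigma2))
R_11 = np.log2(1.0 + p1 * h1_sq / sigma2)                  # SIC succeeded
R_12 = np.log2(1.0 + p1 * h1_sq / (p2 * h1_sq + sigma2))   # SIC failed

sic_ok = R_1to2 >= R2_t
outage_1 = np.mean(np.where(sic_ok, R_11 < R1_t, R_12 < R1_t))
print(f"estimated outage probability of user 1: {outage_1:.4f}")
\end{verbatim}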
\subsection{Optimization Problem Formulation}
Now, the joint power allocation and user scheduling design for the MC-NOMA system can be formulated as the following optimization problem:
\begin{subequations}\label{P1}
\begin{equation}\label{P1_Objective}
\hspace*{-2mm}\mathop {\mino }\limits_{{\mathbf{p}},{\mathbf{c}}} \sum\limits_{m = 1}^M {\sum\limits_{i = 1}^K {\sum\limits_{j = 1}^K {c_{i,j}^m\left( {p_i^m + p_j^m} \right)} } }
\end{equation}
\begin{alignat}{1}
\mbox{s.t.}\;\;&{\mathrm{Pr}}\left\{ {{R_i^m} \ge {{\widetilde R}_i^m}} \right\} \ge c_{i,j}^m\left( 1 - {\delta _i^m} \right),\;\forall i, m,\label{P1_Constraint1}\\
&p_i^m \ge 0,\;\forall i,m,\label{P1_Constraint2}\\
&\sum\limits_{i = 1}^K {\sum\limits_{j = 1}^K {c_{i,j}^m {\leq 2}} } ,\;\forall m,\label{P1_Constraint3}\\
&\sum\limits_{m = 1}^M {\sum\limits_{j = 1}^K {c_{i,j}^m = L} } ,\;\forall i,\label{P1_Constraint4}\\
&c_{i,j}^m \in \left\{ {0,1} \right\},\;\forall i,j,m,\label{P1_Constraint6}
\end{alignat}
\end{subequations}
where $c_{i,j}^m$ is the subcarrier allocation variable, which is one if users $i$ and $j$ are multiplexed on subcarrier $m$ and zero otherwise. Vectors $\mathbf{p}\in\mathbb{R}^{MK \times1}$ and $\mathbf{c}\in\mathbb{Z}^{MK^2 \times1}$ denote the collections of power allocation variables and user scheduling variables, respectively. Constraint \eqref{P1_Constraint1} guarantees the QoS of all users on their allocated subcarriers and is inactive when user $i$ is not allocated to subcarrier $m$, i.e., $c_{i,j}^m=0$. Constraint \eqref{P1_Constraint2} is the non-negativity constraint on the power allocation variables. Constraints \eqref{P1_Constraint3} and \eqref{P1_Constraint6} are imposed to ensure that at most two users are multiplexed on each subcarrier and that all subcarriers are allocated to reduce the total power consumption. Constraint \eqref{P1_Constraint4} is introduced for resource allocation fairness such that all users have the same amount of frequency resources.
We note that, for the case of $c_{i,j}^m=1$, $i=j$, subcarrier $m$ is exclusively allocated to user $i$, and the user scheduling policy for subcarrier $m$ degenerates to the conventional orthogonal assignment. In other words, the proposed optimization framework in \eqref{P1} generalizes the resource allocation for conventional OMA as a subcase. We note that the problem in \eqref{P1} is a mixed combinatorial non-convex problem, and there is no systematic and computationally efficient approach to solve it optimally. According to \eqref{P1}, the user scheduling is jointly affected by the distance $d_i$, the target rate ${\widetilde{R}_i^m}$, and the required outage probability $\delta_i$, while the counterpart of traditional NOMA with perfect CSIT depends only on the channel gain order \cite{Dingtobepublished}. In addition, according to \eqref{P1_Objective} and \eqref{P1_Constraint1}, the power allocation and user scheduling variables are coupled. Therefore, in the following two sections, we propose a suboptimal solution which solves the power allocation and user scheduling problems separately.
\section{Solution for Power Allocation Problem}\label{Section4}
For a given user scheduling policy $\mathbf{c}$ that satisfies constraints \eqref{P1_Constraint3}, \eqref{P1_Constraint4}, and \eqref{P1_Constraint6}, power allocation can be performed independently on each subcarrier. Therefore, the original problem \eqref{P1} can be simplified to a per-subcarrier two-user power allocation problem. For notational simplicity, we drop the subcarrier index $m$. The simplified optimization problem is given by
\begin{subequations}\label{P2}
\begin{equation}\label{P2_Objective}
\hspace*{-5mm}\mathop {\mino }\limits_{{p_1,p_2}} \;\sum\limits_{i = 1}^2 {{p_i}}
\end{equation}
\vspace*{-5mm}
\begin{alignat}{1}
\mbox{s.t.}\;\;\;\;&{\mathrm{Pr}}\left\{ {{R_i} \ge {{\widetilde R}_i}} \right\} \ge 1 - {\delta _i},\;i \in \left\{ {1,2} \right\},\label{P2_Constraint1}\\
&{p_i} \ge 0,\;i \in \left\{ {1,2} \right\},\label{P2_Constraint2}
\end{alignat}
\end{subequations}
where $R_i$ is given by \eqref{AchievableRate}.
In the following, we first solve the problem in \eqref{P2} by assuming that only one user performs SIC, then derive the SIC decoding order based on the power allocation solution, and finally compare its performance with OMA.
\subsection{Power Allocation Solution}
The optimal power allocation solution for the problem in \eqref{P2} can be obtained via the following two cases. For the first case, we only allow user $1$ to perform SIC and obtain the corresponding power allocation solution. The power allocation solution for the second case, which only allows user $2$ to perform SIC, is also obtained. Then, the optimal solution for the problem in \eqref{P2} is given by the solution of the case with the lower total power consumption.
According to \eqref{SICSuccess}, if we allow user $1$ to perform SIC and prevent user $2$ from doing so, the following prerequisites should be satisfied:
\begin{eqnarray}
&&{p_2} - {p_1}{\widetilde{\gamma} _2} > 0\quad\mathrm{and}\label{Prerequisite1}\\
&&{p_1} - {p_2}{\widetilde{\gamma} _1} \le 0.\label{Prerequisite2}
\end{eqnarray}
We note that, due to the channel uncertainty, the prerequisite in \eqref{Prerequisite1} cannot guarantee the success of SIC, and the success of SIC also cannot guarantee outage-free transmission. This makes the resource allocation for MC-NOMA in this paper fundamentally different from the case of perfect CSIT. Under these two prerequisites, \eqref{Prerequisite1} and \eqref{Prerequisite2}, the outage probabilities of both users are given by
\begin{eqnarray}
&&{\rm{P}}^{\mathrm{out}}_1 = {\rm{Pr}}\left\{ {{R_{1 \to 2}} \ge {{\tilde R}_2},{R_{1,1}} < {{\tilde R}_1}} \right\}\notag\\
&&\hspace*{10mm}+ {\rm{Pr}}\left\{ {{R_{1 \to 2}} < {{\tilde R}_2},{R_{1,2}} < {{\tilde R}_1}} \right\},\label{OutageProbability1}\\
&&{\rm{P}}^{\mathrm{out}}_2 = {\rm{Pr}}\left\{ {{R_{2,1}} < {{\tilde R}_2}} \right\},\label{OutageProbability2}
\end{eqnarray}
where ${\rm{P}}^{\mathrm{out}}_1$ and ${\rm{P}}^{\mathrm{out}}_2$ denote the outage probabilities of user $1$ and user $2$, respectively. Constraint \eqref{P2_Constraint1} requires that ${\rm{P}}^{\mathrm{out}}_i \le \delta _i$. Note that ${\rm{P}}^{\mathrm{out}}_1$ consists of two terms which denote the outage probability with a successful SIC and an unsuccessful SIC at user $1$, respectively.
Substituting \eqref{SICRate}, \eqref{RatesSIC1}, and \eqref{RatesSIC2} into \eqref{OutageProbability1} and \eqref{OutageProbability2} yields
\begin{eqnarray}
&&\hspace*{-12mm}{\rm{P}}^{\mathrm{out}}_1 = {\rm{Pr}}\left\{ {{{\left| {{h_1}} \right|}^2} < \max \left( {\frac{{{\widetilde{\gamma} _1}{\sigma ^2}}}{{{p_1}}},\frac{{{\widetilde{\gamma} _2}{\sigma ^2}}}{{ {{p_2} - {p_1}{\widetilde{\gamma} _2}} }}} \right)} \right\} \; \mathrm{and} \label{OutageProbability3}\\
&&\hspace*{-12mm}{\rm{P}}^{\mathrm{out}}_2 = {\rm{Pr}}\left\{ {{{\left| {{h_2}} \right|}^2} < \frac{{{\widetilde{\gamma} _2}{\sigma ^2}}}{{ {{p_2} - {p_1}{\widetilde{\gamma} _2}} }}} \right\},\label{OutageProbability4}
\end{eqnarray}
respectively.
Exploiting the CDF of the channel gain \eqref{ChannelGainCDF} in \eqref{OutageProbability3} and \eqref{OutageProbability4}, and substituting them into \eqref{P2_Constraint1}, we obtain the solution of \eqref{P2} with the minimum total transmit power as:
\begin{eqnarray}
&&\hspace*{-5mm}{p_1^{(1)}} = \frac{{{\widetilde{\gamma} _1}}}{{{\beta _1}}}\quad \mathrm{and}\label{Solution1}\\
&&\hspace*{-5mm}{p_2^{(1)}} = \max \left( {\frac{{{\widetilde{\gamma} _1}{\widetilde{\gamma} _2}}}{{{\beta _1}}} + \frac{{{\widetilde{\gamma} _2}}}{{{\beta _1}}},\frac{{{\widetilde{\gamma} _1}{\widetilde{\gamma} _2}}}{{{\beta _1}}} + \frac{{{\widetilde{\gamma} _2}}}{{{\beta _2}}},\frac{1}{{{\beta _1}}}} \right),\label{Solution2}
\end{eqnarray}
where ${\beta _i} = - \frac{{\ln \left( {1 - {\delta _i}} \right)}}{{\sigma^2\left( {1 + d_i^\alpha } \right)}}$, ${p_i^{(1)}}$, $i \in \left\{ {1,2} \right\}$, is the allocated power for user $i$ for the first case, and the superscript $(1)$ denotes allowing user $1$ to perform SIC. Note that $\frac{1}{{{\beta _i}}}$ can be interpreted as the level of QoS stringency for user $i$, where a large $\frac{1}{{{\beta _i}}}$ means user $i$ is far away from the BS or has a small required outage probability, such that a higher transmit power is necessary to satisfy its stringent QoS requirement. Similar to the case of NOMA with perfect CSIT, we can define a user with larger ${{{\beta _i}}}$ as a \emph{QoS non-demanding user} and define the other user as a \emph{QoS demanding user}.
For the second case, which allows user $2$ to perform SIC and prevents user $1$ from doing so, the prerequisites are given by
\begin{equation}\label{Prerequisite34}
{p_1} - {p_2}{\widetilde{\gamma} _1} > 0 \quad \mathrm{and}\quad
{p_2} - {p_1}{\widetilde{\gamma} _2} \le 0.
\end{equation}
Similarly, the power allocation solution for \eqref{P2} can be derived and given as
\begin{eqnarray}
&&\hspace*{-5mm}{p_1^{(2)}} = \max \left( {\frac{{{\widetilde{\gamma} _1}{\widetilde{\gamma} _2}}}{{{\beta _2}}} + \frac{{{\widetilde{\gamma} _1}}}{{{\beta _2}}},\frac{{{\widetilde{\gamma} _1}{\widetilde{\gamma} _2}}}{{{\beta _2}}} + \frac{{{\widetilde{\gamma} _1}}}{{{\beta _1}}},\frac{1}{{{\beta _2}}}} \right)\;\mathrm{and}\label{Solution3}\\
&&\hspace*{-5mm}{p_2^{(2)}} = \frac{{{\widetilde{\gamma} _2}}}{{{\beta _2}}},\label{Solution4}
\end{eqnarray}
where the superscript $(2)$ denotes allowing user $2$ to perform SIC.
In summary, the optimal solution for the problem in \eqref{P2} can be selected by
\begin{equation}\label{GlobalSolution}
\left( {{{p_1}},{{p_2}}} \right) =
\left\{ \begin{array}{ll}
\hspace*{-1mm}\left( p_1^{(1)}, p_2^{(1)}\right)& \mathrm{if} \; p_1^{(1)}\hspace*{-1mm}+\hspace*{-1mm}p_2^{(1)} \le p_1^{(2)}\hspace*{-1mm}+\hspace*{-1mm}p_2^{(2)}, \\
\hspace*{-1mm}\left( p_1^{(2)}, p_2^{(2)}\right)& \mathrm{otherwise},
\end{array} \right.
\end{equation}
and the BS will inform user $i$ to perform SIC and forbid the other user from doing so if $\left( p_1^{(i)}, p_2^{(i)}\right)$ is selected.
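For concreteness, a minimal Python sketch of how the closed-form solution \eqref{Solution1}--\eqref{Solution4} and the selection rule \eqref{GlobalSolution} can be evaluated on a per-subcarrier basis is given below; all numerical values (normalized distances, outage targets, target rates, and noise power) are placeholder assumptions used for illustration only.
\begin{verbatim}
import numpy as np

def qos_parameter(delta, d, alpha, sigma2):
    # beta_i = -ln(1 - delta_i) / (sigma^2 (1 + d_i^alpha))
    return -np.log(1.0 - delta) / (sigma2 * (1.0 + d**alpha))

def per_subcarrier_powers(g1, g2, b1, b2):
    """g_i: target SINRs, b_i: beta_i; returns (p1, p2, SIC user)."""
    # Case 1: user 1 performs SIC.
    p1_c1 = g1 / b1
    p2_c1 = max(g1 * g2 / b1 + g2 / b1, g1 * g2 / b1 + g2 / b2, 1.0 / b1)
    # Case 2: user 2 performs SIC.
    p2_c2 = g2 / b2
    p1_c2 = max(g1 * g2 / b2 + g1 / b2, g1 * g2 / b2 + g1 / b1, 1.0 / b2)
    if p1_c1 + p2_c1 <= p1_c2 + p2_c2:
        return p1_c1, p2_c1, 1
    return p1_c2, p2_c2, 2

# Placeholder scenario (normalized units).
sigma2, alpha = 1.0, 3.6
b1 = qos_parameter(delta=1e-2, d=1.0, alpha=alpha, sigma2=sigma2)
b2 = qos_parameter(delta=1e-1, d=2.0, alpha=alpha, sigma2=sigma2)
g1, g2 = 2.0**2 - 1, 2.0**1 - 1       # target SINRs for 2 and 1 bit/s/Hz
print(per_subcarrier_powers(g1, g2, b1, b2))
\end{verbatim}
In the user scheduling stage discussed later, such a routine would be evaluated for every candidate user pair to obtain the pairing powers $p_{ij}$.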
\subsection{A Simple SIC Decoding Order}
The selection of optimal power allocation in \eqref{GlobalSolution} incorporates the SIC decoding policy implicitly to achieve minimum power consumption. However, we can obtain an explicit rule to determine the SIC decoding order for a general condition of ${\widetilde{\gamma} _1}\ge1$ and ${\widetilde{\gamma} _2}\ge1$, which means that both users' target rates are not smaller than 1 bit/s/Hz. In such a condition, $\frac{1}{{{\beta _1}}}$ and $\frac{1}{{{\beta _2}}}$ will never be chosen in \eqref{Solution2} and \eqref{Solution3}, respectively. Thus, we have a simple solution for the joint optimal SIC decoding order and power allocation:
\begin{equation}\label{SICPolicy}
\left( {{{p_1}},{{p_2}}} \right) =
\left\{ \begin{array}{ll}
\left( p_1^{(1)}, p_2^{(1)}\right)& \mathrm{if} \; {{{\beta _1}}} \ge {{{\beta _2}}}, \\
\left( p_1^{(2)}, p_2^{(2)}\right)& \mathrm{otherwise},
\end{array} \right.
\end{equation}
which indicates that the QoS non-demanding user is always selected to perform SIC to minimize the power consumption.
Therefore, for a general condition of ${\widetilde{\gamma} _1}\ge1$ and ${\widetilde{\gamma} _2}\ge1$, ${\beta _i}$ defines the optimal SIC decoding policy in terms of power efficiency, where we only allow the QoS non-demanding user to perform SIC to reduce the total power consumption. Note that for ${\widetilde{\gamma} _i}<1$, we have to evaluate both solutions and compare them in \eqref{GlobalSolution} to find the SIC decoding order.
\subsection{Comparison between NOMA and OMA}
For the case of NOMA with perfect CSIT, it is well known that the performance gain of NOMA over OMA increases when the differences in channel gains between the multiplexed users become larger \cite{Benjebbour2013a,Saito2013,Dingtobepublished}. In this paper, we can obtain a similar conclusion with our scheme in terms of power reduction for the case of imperfect CSIT. For a fair comparison, we impose the same spectral efficiency for NOMA and OMA, where a single subcarrier is further split into two subcarriers with equal bandwidth for the OMA case. Therefore, the power allocation for two OMA users with statistical CSIT on one subcarrier is given by
\begin{equation}\label{OMA1}
\left( {p_1^{\mathrm{OMA}},p_2^{\mathrm{OMA}}} \right)=\left( \frac{{{2^{2{{\widetilde R}_1}}} - 1}}{{{2\beta _1}}},\frac{{{2^{2{{\widetilde R}_2}}} - 1}}{{{2\beta _2}}} \right),
\end{equation}
where the superscript ``$\mathrm{OMA}$" denotes the case of OMA.
Now, we provide a sufficient condition under which the power consumption of NOMA is no larger than that of OMA. Supposing ${\widetilde{R} _1} \ge 1$ bit/s/Hz and ${\widetilde{R} _2} \ge 1$ bit/s/Hz, we can obtain the performance gain of NOMA over OMA in terms of power reduction as follows:
\begin{eqnarray}\label{NOMAOverOMA}
&&\hspace*{-8mm}p_{\mathrm{total}}^{\mathrm{OMA}} - p_{\mathrm{total}}^{\mathrm{NOMA}} = \notag\\
&&\hspace*{-8mm}\left\{ \begin{array}{ll}
\hspace*{-1mm}\frac{{{\widetilde{\gamma} _1}{\widetilde{\gamma} _2}}}{{\sqrt {{\beta _1}} }}\left( {\frac{1}{{\sqrt {{\beta _2}} }} \hspace*{-1mm}-\hspace*{-1mm} \frac{1}{{\sqrt {{\beta _1}} }}} \right) + \frac{1}{2}{\left( {\frac{{{\widetilde{\gamma} _2}}}{{\sqrt {{\beta _2}} }} \hspace*{-1mm}-\hspace*{-1mm} \frac{{{\widetilde{\gamma} _1}}}{{\sqrt {{\beta _1}} }}} \right)^2}\ge 0& \hspace*{-1mm}\mathrm{if} \; {{{\beta _1}}} \ge {{{\beta _2}}}, \\
\hspace*{-1mm}\frac{{{\widetilde{\gamma} _1}{\widetilde{\gamma} _2}}}{{\sqrt {{\beta _2}} }}\left( {\frac{1}{{\sqrt {{\beta _1}} }} \hspace*{-1mm}-\hspace*{-1mm} \frac{1}{{\sqrt {{\beta _2}} }}} \right) + \frac{1}{2}{\left( {\frac{{{\widetilde{\gamma} _1}}}{{\sqrt {{\beta _1}} }} \hspace*{-1mm}-\hspace*{-1mm} \frac{{{\widetilde{\gamma} _2}}}{{\sqrt {{\beta _2}} }}} \right)^2}>0& \hspace*{-1mm}\mathrm{otherwise},
\end{array} \right.
\end{eqnarray}
where $p_{\mathrm{total}}^{\mathrm{OMA}}$ and $p_{\mathrm{total}}^{\mathrm{NOMA}}$ denote the total power consumption of OMA and NOMA on a single subcarrier, respectively. It can be observed that, under the sufficient condition, the power reduction of NOMA over OMA is non-negative. More importantly, with ${\widetilde{R} _1} \ge 1$ bit/s/Hz and ${\widetilde{R} _2} \ge 1$ bit/s/Hz, the performance gain of NOMA over OMA also increases when the difference in the level of QoS stringency or the target rate between the QoS demanding user and the QoS non-demanding user becomes larger, e.g., $\beta_1 \gg \beta_2$ or $\widetilde{\gamma} _1 \gg \widetilde{\gamma} _2$. Note that the total power consumption difference between NOMA and OMA is zero for the case of statistical CSIT when the multiplexed users have identical distances and QoS requirements, i.e., $\beta_1 = \beta_2$ and $\widetilde{\gamma}_1=\widetilde{\gamma}_2$.
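As a purely illustrative special case of \eqref{NOMAOverOMA}, let the two users have equal target SINRs, $\widetilde{\gamma}_1=\widetilde{\gamma}_2=\widetilde{\gamma}\ge 1$, and let $\beta_1 \ge \beta_2$. Then the first branch of \eqref{NOMAOverOMA} simplifies to $p_{\mathrm{total}}^{\mathrm{OMA}} - p_{\mathrm{total}}^{\mathrm{NOMA}} = \frac{\widetilde{\gamma}^2}{2}\left(\frac{1}{\beta_2}-\frac{1}{\beta_1}\right)\ge 0$, which grows monotonically with the gap in QoS stringency $\frac{1}{\beta_2}-\frac{1}{\beta_1}$; for instance, $\beta_1=4\beta_2$ yields a power reduction of $3\widetilde{\gamma}^2/(8\beta_2)$.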
In summary, from \eqref{SICPolicy} and \eqref{NOMAOverOMA}, we conclude that the level of QoS stringency $\frac{1}{{{\beta _i}}}$ plays a significant role in power allocation and SIC decoding order design.
\section{User Scheduling Algorithm}
According to \eqref{P1}, the $K$ users can be treated as $KL$ independent virtual users since their QoS constraints \eqref{P1_Constraint1} are imposed on each subcarrier independently. In addition, NOMA provides a significant system performance gain in high system load scenarios. Thus, we focus on a practical overload scenario, i.e., $KL>M$. Now, to serve the $K$ users via the $M$ subcarriers, we intend to generate a user combination consisting of $KL-M$ user pairs and $2M-KL$ single users, which correspond to NOMA and OMA, respectively.
Among all candidate user combinations, the user scheduling needs to select the one that consumes the minimum power.
Without loss of generality, we assume $L=1$ in this section. For $L>1$, we simply eliminate the combinations where one user is paired with itself to satisfy constraint \eqref{P1_Constraint4}. As mentioned before, the power consumption of each subcarrier is only affected by the two users multiplexed on it. To schedule $K$ users over $M$ subcarriers, there are $\dbinom{K}{2}$ possible candidate pairs of users, and we can obtain the power consumption of all the pairs from \eqref{GlobalSolution}, i.e., $p_{ij}$, $i,j \in \left\{ {1, \ldots ,K} \right\}$ and $i \ne j$. In addition, for orthogonal subcarrier assignment, the power consumption of user $i$ is given by $p_{ii}=\frac{{{\widetilde{\gamma} _i}}}{{{\beta _i}}}$.
The number of all candidate combinations is given by
\begin{equation}\label{TotalNumber}
N = \dbinom{K}{2M-K} \prod\limits_{m = 1}^{K-M} {\left( {2m - 1} \right)}.
\end{equation}
To obtain the optimal user scheduling policy, we need to evaluate and compare all these $N$ candidate combinations, where $N$ is prohibitively large even for moderate $K$ and $M$. Thus, we propose a heuristic user scheduling algorithm based on the following geometric illustration.
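To illustrate how quickly the search space grows, the following short Python snippet (an illustrative sketch) evaluates \eqref{TotalNumber} for a few small system sizes; the chosen values of $K$ and $M$ are arbitrary examples.
\begin{verbatim}
from math import comb, prod

def num_combinations(K, M):
    # N = C(K, 2M - K) * prod_{m=1}^{K-M} (2m - 1), valid for M <= K <= 2M
    return comb(K, 2 * M - K) * prod(2 * m - 1 for m in range(1, K - M + 1))

for K, M in [(7, 5), (9, 5), (10, 5), (20, 10)]:
    print(K, M, num_combinations(K, M))   # 105, 945, 945, 654729075
\end{verbatim}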
\begin{figure}[t]
\centering
\includegraphics[width=2.5in]{UserScheduling.eps}
\caption{Geometric illustration for the proposed user scheduling with $4$ users.}\label{UserScheduling}\vspace*{-2mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{DendrogramCase.eps}
\caption{Dendrogram for the case in Figure \ref{UserScheduling}.}\label{DendrogramCase}\vspace*{-4mm}
\end{figure}
Figure \ref{UserScheduling} illustrates a user scheduling case with $4$ users, where every point denotes a user, every line $l_{ij}$ denotes pairing user $i$ and user $j$. The length of line $l_{ij}$ is given by $p_{ij}$, which denotes the power consumption for pairing user $i$ and user $j$. Note that it is an undirected graph, i.e., $p_{ij} = p_{ji}$ since both $p_{ij}$ and $p_{ji}$ denote the power consumption for pairing user $i$ and user $j$. From a minimum power consumption perspective, user $i$ and user $j$ are more likely to be paired with each other if point $i$ and point $j$ are close and they are far away from other points, such as point $2$ and point $3$ in Figure \ref{UserScheduling}. Based on this simple idea, the user scheduling problem becomes a clustering problem among all the points on a two-dimensional plane.
Now, we apply agglomerative hierarchical clustering to build the hierarchy from the individual points by progressively merging clusters \cite{Kaufman2009}. Based on the lengths of all the lines $l_{ij}$, we can obtain the dendrogram structure of all the points, which illustrates the arrangement of clusters, cf. Figure \ref{DendrogramCase}. In the dendrogram, the vertical axis of a point denotes the average distance between this point and all the clusters below it, and the horizontal axis presents the set of points ordered in terms of their distance to the clusters on their left. For example, Figure \ref{DendrogramCase} illustrates a dendrogram for the case in Figure \ref{UserScheduling}, where point $3$ is the nearest point to point $2$, and point $4$ is the farthest point from the cluster consisting of points $2$, $3$, and $1$. In other words, the horizontal axis illustrates the set of users ordered in terms of the average power consumption for pairing with each user on its left. Note that the order generated in the dendrogram is based on the joint effect of $\frac{1}{{{\beta _i}}}$ and $\widetilde{\gamma} _i$, which provides a rule of thumb for user scheduling.
According to \eqref{Solution1}, \eqref{Solution2}, \eqref{Solution3}, and \eqref{Solution4}, the power consumption of NOMA increases with the target SINRs of both multiplexed users. Thus, we expect that the rightmost $2M-K$ users on the horizontal axis of the dendrogram are assigned to $2M-K$ subcarriers exclusively to reduce the total system power consumption, since these users are usually QoS demanding. For the remaining $2K-2M$ users on the left, we need to generate $K-M$ pairs of users. Since the performance gain over OMA increases with the difference of $\frac{1}{{{\sqrt{\beta _i}}}}$ as well as ${\frac{{{\widetilde{\gamma} _i}}}{{\sqrt {{\beta _i}} }}}$ between paired users, cf. \eqref{NOMAOverOMA}, we partition the remaining $2K-2M$ users on the left into two groups and pair them in successive order. For example, if we only have two subcarriers for the case of $4$ users in Figure \ref{DendrogramCase}, we partition them into two groups, $\left\{ {2,3} \right\}$ and $\left\{ {1,4} \right\}$. Then we pair user $2$ with user $1$ and pair user $3$ with user $4$. The user scheduling algorithm is summarized in \textbf{Algorithm 1}.
\begin{table}[t]
\begin{algorithm} [H]
\caption{User Scheduling Algorithm}
\label{alg2}
\begin{algorithmic} [1]
\small
\STATE Compute $p_{ij}$, $i \ne j$, $i,j \in \left\{ {1, \ldots ,K} \right\}$, through \eqref{GlobalSolution}.
\STATE Generate the dendrogram based on $p_{ij}$ via agglomerative hierarchical clustering \cite{Kaufman2009}.
\STATE Allocate the right $2M-K$ users on $2M-K$ subcarriers exclusively.
\STATE Partition the left $2K-2M$ users into two groups and pair them in successive order on $K-M$ subcarriers.
\end{algorithmic}
\end{algorithm}\vspace*{-8mm}
\end{table}
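A possible realization of \textbf{Algorithm 1} is sketched below in Python, using off-the-shelf average-linkage clustering to obtain the dendrogram ordering; the random pairing powers and the particular choice of SciPy routines are illustrative assumptions, not part of the proposed scheme.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def schedule_users(P, M):
    """P: K x K symmetric matrix of pairing powers p_ij; returns
    (users served alone, paired users) following the dendrogram order."""
    K = P.shape[0]
    D = P.copy()
    np.fill_diagonal(D, 0.0)               # squareform expects a zero diagonal
    Z = linkage(squareform(D), method='average')
    order = leaves_list(Z)                  # horizontal-axis order of the dendrogram
    n_single = 2 * M - K                    # users assigned exclusive subcarriers
    singles = list(order[K - n_single:]) if n_single > 0 else []
    left = order[:K - n_single]             # remaining users to be paired
    half = len(left) // 2
    pairs = list(zip(left[:half], left[half:]))  # two groups paired in order
    return singles, pairs

# Placeholder example: random symmetric pairing powers, K = 6, M = 5.
rng = np.random.default_rng(1)
A = rng.uniform(1.0, 10.0, size=(6, 6))
print(schedule_users((A + A.T) / 2.0, M=5))
\end{verbatim}
Average linkage is used here because the dendrogram orders users by average pairing power; other linkage rules could be substituted without changing the overall structure of the algorithm.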
Note that for the case of $L>1$, each point in Figure \ref{UserScheduling} will be replaced by $L$ points which have the same distance to all the other points due to our equal target rate assignment \eqref{AssignedRate}. Accordingly, the dendrogram in Figure \ref{DendrogramCase} will be extended by replacing each user with a cluster of $L$ users of equal altitude. Therefore, our scheduling avoids pairing a user with itself unless $KL-M=1$. For the case of $KL-M=1$, we will select the first point of the right $2M-K$ users for pairing to satisfy constraint \eqref{P1_Constraint4}.
We note that although the proposed user scheduling algorithm is suboptimal, it is much more computationally efficient than the optimal exhaustive search. In particular, the complexity of the agglomerative clustering algorithm is only $\mathcal{O}\left(K^3\right)$ in the general case. Besides, the close-to-optimal performance of the proposed user scheduling algorithm will be verified in the simulation section.
\section{Results}
In this section, the performance of our proposed scheme is verified with simulations. In a single cell with the BS located at the center and cell size $D$, $K$ users are randomly and uniformly distributed between $30$ m and $D$ m, i.e., $d_i \sim U[30,\;D]$ m. Similarly, the target rates of all users are generated by $\widetilde{R}_i \sim U[0.1,\;10]$ bit/s/Hz. In the following simulations, two kinds of outage probability requirements are evaluated to compare the performance gain obtained by introducing the QoS constraints: Case I with equal outage probability $\delta_i = 10^{-2}$ and Case II with random outage probability $\delta_i \sim U[10^{-5},\;0.1]$. The user noise power on each subcarrier is $\sigma^2=-128$ dBm. The 3GPP path loss model with path loss exponent $\alpha = 3.6$ is adopted in our simulations \cite{Access2010}.
The simulation results shown in the sequel are averaged over $1000$ realizations of different user distances, target rates, multipath fading coefficients, and outage probability requirements.
\subsection{Power Consumption versus Cell Size}
In Figure \ref{PC_vs_CellSize}, we investigate the power consumption versus cell size $D$ for the considered MC-NOMA system with $M=5$, $K = 4$, and $L=2$ \footnote{Since the computational complexity of full search is extremely large, we adopt small values for $M$, $K$, and $L$ to compare our proposed scheme with the full search scheduling. We note that our proposed scheme is very computationally efficient compared to exhaustive search and can be applied to scenarios with more users and subcarriers.}. For comparison, we also show the performance of OMA, random scheduling, and full search scheduling. Note that the power consumption for OMA is given by \eqref{OMA1} with $2{{\widetilde R}_i}$ replaced by $(K/M){{\widetilde R}_i}$, since the available frequency bandwidth is split equally among the $K$ users. In addition, we note that the random scheduling and the full search scheduling are performed together with the proposed power allocation solution \eqref{GlobalSolution}. It can be seen that our proposed user scheduling method provides a significant power saving compared to the random scheduling, and achieves a performance close to the full search scheduling in both cases. The reason for this improvement is that our proposed user scheduling method takes into account the joint effect of $\frac{1}{{{\beta _i}}}$ and $\widetilde{\gamma} _i$, and exploits the heterogeneity of QoS requirements, which achieves a better utilization of the power domain. More importantly, the performance gain of our proposed scheme over OMA in Case II is larger than that in Case I. This result demonstrates the effectiveness of our proposed scheme in exploiting the QoS heterogeneity to reduce power consumption.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{PC_vs_CellSize.eps}
\caption{Power consumption versus cell size. The results for Case I and Case II are illustrated with black color and blue color, respectively. The double-sided arrows illustrate the performance gain of our proposed scheme over OMA in Case I and Case II, respectively.}
\label{PC_vs_CellSize}
\end{figure}
\subsection{Total Power Consumption versus Number of Users}
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{PC_vs_NumUser.eps}
\caption{Total power consumption versus the number of users. The results for Case I and Case II are illustrated with black color and blue color, respectively. The double-sided arrows illustrate the performance gain of our proposed scheme over OMA in Case I and Case II, respectively.}
\label{UserSchedulingAlgorithm}
\end{figure}
In Figure \ref{UserSchedulingAlgorithm}, we investigate the performance of our proposed scheme versus the number of users. An MC-NOMA system with $M=5$ subcarriers and cell size $D=200$ m is considered. We assume $L = 1$ in this case, and the number of users $K$ varies from $6$ to $10$. It can be observed that our proposed resource allocation scheme reduces the power consumption substantially compared to the random scheduling, and also performs close to the full search scheduling in both cases. Besides, it can be observed that the performance gain of our proposed scheme over OMA in Case II is also larger than that in Case I, owing to a better resource utilization achieved by taking into account the diversity of QoS requirements. More importantly, it can be seen that the performance gain over OMA increases with the number of users in both cases, which is consistent with the case of NOMA with perfect CSIT. In fact, the QoS requirements and average user channel gains become more heterogeneous for an increasing number of users. Thus, the conventional OMA scheme fails to accommodate the diverse needs, and our proposed scheme obtains a larger performance gain.
We note that the power consumption of OMA is lower than that of random scheduling for $K=6$ and $K=7$ but still higher than that of our proposed scheme. Although NOMA significantly outperforms OMA on a single subcarrier, cf. \eqref{NOMAOverOMA}, MC-NOMA with random scheduling may consume more power than OMA at such a low overload ratio, e.g., $\frac{K}{M}\approx 1$. In fact, with a low overload ratio, due to the user scheduling constraint \eqref{P1_Constraint4} imposed for fairness, there are only a few pairs of users in the user combination, and thus MC-NOMA with random scheduling cannot fully exploit the power domain.
In addition, we note that the performance gap between our proposed suboptimal scheme and the full search scheduling increases slightly with the number of users. However, the computational complexity of our proposed scheme is significantly lower than that of the full search scheduling.
\section{Conclusion}
In this paper, we studied the power-efficient resource allocation for MC-NOMA systems with statistical CSIT by taking into account the heterogeneity of QoS requirements. The resource allocation was formulated as a non-convex optimization problem to minimize the total power consumption. A low-complexity suboptimal solution was proposed to solve the power allocation and user scheduling problems separately. We derived the power allocation solution and characterized an explicit metric to decide the SIC decoding order associated with the level of QoS stringency. Besides, under a sufficient condition, we showed that the performance gain of NOMA over OMA in terms of power reduction increases with the difference in the level of QoS stringency between the multiplexed users. For the user scheduling, a computationally efficient scheduling algorithm based on agglomerative hierarchical clustering was proposed. Simulation results demonstrated that our proposed scheme achieves a close-to-optimal performance and significantly outperforms the conventional OMA scheme. Furthermore, our results also showed the effectiveness of our proposed scheme in exploiting the QoS heterogeneity to reduce power consumption.
\bibliographystyle{IEEEtran}
\section{Introduction}
The ideas of quantum cosmology \cite{Bryce,VilNB} and Euclidean
quantum gravity \cite{HH,H} are again attracting attention. One of
the reasons is the fact that the landscape of string vacua is too
big \cite{landscape} to hope that a reasonable selection mechanism
can be successfully worked out
within string theory itself. Thus, it is expected that other methods
have to be invoked, at least some of them appealing to the
construction of the cosmological wavefunction
\cite{OoguriVafaVerlinde,Tye1,Tye23,Brustein}.
This quantum state arises as a result of quantum tunneling from the
classically forbidden state of the gravitational field.
The
Hartle-Hawking wave function of the Universe \cite{HH,H}
describes nucleation of the de Sitter Universe from the Euclidean
4-dimensional hemisphere, $\Psi_{\rm HH}\sim \exp(-S_{\rm E})
=\exp(3\pi /2G\Lambda)$, and has a negative action which diverges
to $-\infty$ for the
cosmological constant $\Lambda\to 0$. This implies a well-known
infrared catastrophe of small cosmological constant --- a vanishing
$\Lambda$ is infinitely more probable than any positive one.
Alternative tunneling proposals for the wave function of the
universe in the form of Linde \cite{Linde} or Vilenkin
\cite{Vilenkin} give preference to big values of $\Lambda$, which
opens the possibility for conclusions opposite to the Hartle-Hawking
case. In particular, the inclusion of one-loop effects allows one to
shift the most probable values of the effective cosmological constant
from zero to a narrow highly peaked range compatible with the
requirements of inflation \cite{scale}.
In this work we study the Hartle-Hawking prescription of the
Euclidean path integration taking into account
essentially {\em nonlocal} quantum
effects mediated by nonlocal terms of non-vacuum nature.
The core of our suggestion is a simple observation that the presence
of radiation implies a statistical ensemble described by the density
matrix, rather than a pure state assumed in \cite{Tye1,Tye23}.
Density matrix in Euclidean quantum gravity \cite{Page},
$\rho[\,\varphi,\varphi']$, originates from an instanton with two
disjoint boundaries $\Sigma$ and $\Sigma'$ associated respectively
with its two entries, see
Fig.\ref{Fig.1}. Note that the mixed nature of the quantum state is fundamental and that
the impurity is not caused by coarse graining or by tracing out
environmental degrees of freedom.
\\
\begin{figure}[h]
\centerline{\epsfxsize 5cm \epsfbox{hh5.eps}} \caption{\small
Picture of instanton representing the density matrix. Dashed lines
depict the Lorentzian Universe nucleating from the instanton at the
minimal surfaces $\Sigma$ and $\Sigma'$. \label{Fig.1}}
\end{figure}
In contrast, the pure density matrix of the Hartle-Hawking state
corresponds to the situation when the instanton bridge between
$\Sigma$ and $\Sigma'$ is broken, so that topologically the
instanton is a union of two disjoint hemispheres. Each of the
half-instantons smoothly closes up at its pole which is a regular
internal point of the Euclidean spacetime ball, see Fig.\ref{Fig.2}
--- a picture illustrating the factorization of
$\hat\rho=|\Psi_{\rm HH}\rangle\langle\Psi_{\rm HH}|$.
\\
\begin{figure}[h]
\centerline{\epsfxsize 4.3cm \epsfbox{hh1.eps}} \caption{\small
Density matrix of the pure Hartle-Hawking state represented by the
union of two vacuum instantons. \label{Fig.2}}
\end{figure}
When calculated in the saddle-point approximation the density matrix
automatically gives rise to radiation whose thermal fluctuations
destroy the Hartle-Hawking instanton. Namely, the radiation stress
tensor prevents the half-instantons of the above type from closing
and, thus, forces the tubular structure on the full instanton
supporting the thermodynamical nature of the physical state. The
existence of radiation, in its turn, naturally follows from the
partition function of this state. The partition function originates
from integrating out the field $\varphi$ in the coincidence limit
$\varphi'=\varphi$.
(This procedure is equivalent to taking the trace
of the operator $\exp(-\beta H)$ inherent in the calculation of the
partition function for a system with the Hamiltonian $H$.)
This corresponds to the identification of
$\Sigma'$ and $\Sigma$, so that the underlying instanton acquires
toroidal topology. Its points are labeled by
the periodically identified Euclidean time, a period being related
to the inverse temperature of the quasi-equilibrium radiation. The
back reaction of this radiation supports the instanton geometry in
which this radiation exists, and we derive the equation which makes
this bootstrap consistent.
We show that for the radiation of conformally-invariant fields the
analytical and numerical solution of the bootstrap equations yields
a set of instantons --- a landscape --- only in the bounded range of
$\Lambda$,
\begin{eqnarray}
\Lambda_{\rm min}<\Lambda<\Lambda_{\rm max}. \label{2}
\end{eqnarray}
This set consists of the countable sequence of one-parameter
families of instantons which we call garlands, labeled by the number
$k=1,2,3,...$ of their elementary links. Each of these families
spans a continuous subset, $\Lambda_{\rm
min}^{(k)}<\Lambda\leq\Lambda_{\rm max}^{(k)}$, belonging to
(\ref{2}). These subsets of monotonically decreasing lengths
$\Lambda_{\rm max}^{(k)}-\Lambda_{\rm min}^{(k)}\sim 1/k^4$ do not
overlap and their sequence has an accumulation point at the upper
boundary $\Lambda_{\rm max}$ of the range (\ref{2}). Each of the
instanton families at its upper boundary $\Lambda_{\rm max}^{(k)}$
saturates with the static Einstein Universe filled by a hot
equilibrium radiation with the temperature $T_{(k)}\sim m_P/\ln
k^2$, $k\gg1$, and having a {\em negative} action decreasing with $k$,
$\Gamma_0^{(k)}\sim -\ln^3k^2/k^2$. Remarkably, all values of
$\Lambda$ below the range (\ref{2}), $\Lambda<\Lambda_{\rm min}$,
are completely ruled out either because of the absence of instanton
solutions or because of their {\em infinitely large positive}
action.
The details of our approach have recently been presented in
\cite{we}, while here we discuss its basic points and, in
particular, establish its relation to the well-known Starobinsky
model \cite{Starob} of the self-consistent deSitter expansion driven
by the conformal anomaly of quantum fields.
\section{The effective action, generalized Friedmann equations and bootstrap}
As shown in \cite{we} the effective action of the cosmological model
with a generic spatially closed FRW metric $ds^2 = N^2(\tau) d\tau^2
+a^2(\tau)d^2\Omega^{(3)}=a^2(d\eta^2 + d^2\Omega^{(3)})$ sourced by
{\em conformal} quantum matter has the form
\begin{eqnarray}
&&\Gamma[\,a(\tau),N(\tau)\,]=
2 \int_{\tau_-}^{\tau_+} d\tau\Big(\!-\frac{a\dot{a}^2}N
-Na+N H^2 a^3\Big)\nonumber\\
&&\qquad\qquad\qquad\quad
+2B \int_{\tau_-}^{\tau_+}
d\tau \Big(\frac{\dot{a}^2}{Na}
-\frac16\,\frac{\dot{a}^4}{N^3 a}\Big)\nonumber\\
&&\qquad\qquad\qquad\quad
+ B \int_{\tau_-}^{\tau_+}
d\tau\,N/a+F\Big(2\int_{\tau_-}^{\tau_+}
d\tau\,N/a\Big)\, , \label{Gamma}
\end{eqnarray}
where $a(\tau)$ is the cosmological radius, $N(\tau)$ is the lapse
function, $H^2 = \Lambda/3$ and integration runs between two turning
points at $\tau_{\pm}$, $\dot a(\tau_\pm)=0$. Here the first line is
the classical part, the second line is the contribution of the
conformal transformation to the metric of the static instanton
$d\bar{s}^2 = d\eta^2 + d^2\Omega^{(3)}$ ($\eta$ is the conformal
time) and the last line is the one-loop action on this static
instanton. The conformal contribution $\Gamma_{\rm
1-loop}[g]-\Gamma_{\rm 1-loop}[\bar g]$ is determined by the
coefficients of $\Box R$, the Gauss-Bonnet invariant $E =
R_{\mu\nu\alpha\gamma}^2 -4R_{\mu\nu}^2 + R^2$ and Weyl tensor term
in the conformal anomaly
$g_{\mu\nu}\delta\Gamma_{\rm 1-loop}/\delta g_{\mu\nu} =
g^{1/2}\,(\alpha \Box R + \beta E + \gamma C_{\mu\nu\alpha\beta}^2)/4(4\pi)^2 $.
Specifically this contribution can be obtained by the technique of
\cite{TseytlinconfBMZ}; it contains higher-derivative terms $\sim
\ddot a^2$ which produce ghost instabilities in solutions of
effective equations. However, such terms are proportional to the
coefficient $\alpha$ which can be put to zero by adding the
following finite {\em local} counterterm
\begin{eqnarray}
\Gamma_{R}[g]
=\Gamma_{\rm 1-loop}[g]
+\frac1{2(4\pi)^2}
\frac\alpha{12}\int d^4x\,
g^{1/2}R^2(g). \label{renormalization}
\end{eqnarray}
This ghost-avoidance renormalization is justified by the requirement
of consistency of the theory at the quantum level. The contribution
$\Gamma_{R}[g]-\Gamma_{R}[\bar g]$ to the {\em renormalized}
action then gives the second line of (\ref{Gamma}) with
$B=3\beta/4$.
The static instanton with a period $\eta_0$ playing the role of
inverse temperature contributes $\Gamma_{\rm 1-loop}[\bar g]
=E_0\,\eta_0+F(\eta_0)$, where the vacuum energy $E_0$ and free
energy $F(\eta_0)$ are the typical boson and fermion sums over field
oscillators with energies $\omega$ on a unit 3-sphere
$E_0=\pm\sum_{\omega}
\omega/2\,,\,\,\,F(\eta_0)=\pm\sum_{\omega}
\ln\big(1\mp e^{-\omega\eta_0}\big)$.
The renormalization (\ref{renormalization}) which should be applied
also to $\Gamma_{\rm 1-loop}[\bar g]$ modifies $E_0$, so that
$\Gamma_{R}[\bar g] =C_0\,\eta_0+F(\eta_0)$, $C_0\equiv
E_0+3\alpha/16$. This gives the third line of Eq.(\ref{Gamma}) with
$C_0=B/2$. This universal relation between $C_0$ and $B=3\beta/4$
follows from the known anomaly coefficients \cite{confanomaly} and
the Casimir energy in a static universe \cite{E_0} for scalar, Weyl
spinor and vector fields.
The Euclidean Friedmann equation looks now as
\begin{eqnarray}
&&\frac{\dot{a}^2}{a^2}
+B \left(\frac12\,\frac{\dot{a}^4}{a^4}
-\frac{\dot{a}^2}{a^4}\right) =
\frac{1}{a^2} - H^2 -\frac{C}{ a^4}, \label{efeq}\\
&&C = B/2 +F'(\eta_0),\,\,
\eta_0 = 2\int_{\tau_-}^{\tau_+}
d\tau/a(\tau). \label{bootstrap}
\end{eqnarray}
The contribution of the nonlocal $F(\eta_0)$ in (\ref{Gamma})
reduces here to the radiation {\em constant} $C$ as a {\em nonlocal
functional} of $a(\tau)$, determined by the {\em bootstrap} equation
(\ref{bootstrap}), $F'(\eta_0)\equiv dF(\eta_0)/d\eta_0>0$ being the
energy of a hot gas of particles, which adds to their vacuum energy
$B/2$.
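As a purely numerical illustration of Eqs.(\ref{efeq})-(\ref{bootstrap}), the Python sketch below solves the algebraic equation for $\dot a^2$ at fixed $a$ and evaluates the conformal period $\eta_0$ between the turning points $a_\pm^2=(1\pm\sqrt{1-4CH^2})/2H^2$ obtained by setting $\dot a=0$ in (\ref{efeq}); the values of $B$, $H^2$ and $C$ are placeholder assumptions (in Planck units) chosen inside the admissible domain discussed below, where the selected root indeed vanishes at $a_\pm$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def conformal_period(H2, C, B, k=1):
    """eta_0 = 2k * int_{a_-}^{a_+} da / (a * adot) for the periodic solution."""
    disc = 1.0 - 4.0 * C * H2
    if disc < 0:
        raise ValueError("no turning points: 4 C H^2 > 1")
    a_min = np.sqrt((1.0 - np.sqrt(disc)) / (2.0 * H2))
    a_max = np.sqrt((1.0 + np.sqrt(disc)) / (2.0 * H2))

    def adot2(a):
        # (B/2) x^2 + (a^2 - B) x - (a^2 - H2 a^4 - C) = 0 with x = adot^2
        p = a**2 - B
        q = -(a**2 - H2 * a**4 - C)
        return (-p + np.sqrt(p**2 - 2.0 * B * q)) / B

    integrand = lambda a: 1.0 / (a * np.sqrt(max(adot2(a), 1e-30)))
    val, _ = quad(integrand, a_min, a_max, limit=200)
    return 2.0 * k * val

B, H2 = 10.0, 0.03
C = 0.8 * B          # satisfies 4 C H^2 < 1 and C > B - B^2 H^2
print(conformal_period(H2, C, B))
\end{verbatim}
The bootstrap would then be closed by adjusting $C$ until $C=B/2+F'(\eta_0)$ holds for the chosen field content.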
\begin{figure}[h]
\centerline{\epsfxsize 8.5cm \epsfbox{hh10.eps}} \caption{\small
Instanton domain in the $(H^2,C)$-plane. Garland families are shown
for $k=1,2,3,4$. Their sequence accumulates at the critical point
$(1/2B,B/2)$.
\label{Fig.4}}
\end{figure}
Periodic instanton solutions of Eqs.(\ref{efeq})-(\ref{bootstrap})
exist only inside the curvilinear wedge of $(H^2,C)$-plane between
bold segments of the upper hyperbolic boundary and the lower
straight line boundary of Fig.\ref{Fig.4},
\begin{equation}
4CH^2\leq 1,\,\,\,C \geq B-B^2 H^2,
\,\,\,B H^2\leq 1/2. \label{restriction1}
\end{equation}
Below this domain the solutions are either complex and aperiodic or
suppressed by {\em infinite positive} Euclidean action. Indeed,
a smooth Hartle-Hawking instanton
with $a_-=0$ yields $\eta_0\to\infty$ in view of
(\ref{bootstrap}), so that $F(\eta_0)\sim F'(\eta_0)\to 0$.
Therefore, its on-shell action
\begin{equation}
\Gamma_0= F(\eta_0)\!-\!\eta_0 F'(\eta_0)
+4\!\int_{a_-}^{a_+}\!
\frac{da \dot{a}}{a}\Big(B-a^2
-\frac{B\dot{a}^2}{3}\Big) \label{action-instanton}
\end{equation}
due to $B>0$ diverges to $+\infty$ at $a_-=0$ and completely rules
out pure-state instantons \cite{we}. For the instanton garlands,
obtained by gluing together into a torus $k$ copies of a simple
instanton \cite{Halliwell}, the formalism is the same as above except
that the conformal time in (\ref{bootstrap}) and the integral term
of (\ref{action-instanton}) should be multiplied by $k$. As in the
case of $k=1$, garland families interpolate between the lower and
upper boundaries of (\ref{restriction1}). They exist for all $k$,
$1\leq k\leq\infty$, and their infinite sequence accumulates at the
critical point $C=B/2$, $H^2=1/2B$, with the on-shell action
$\Gamma_0^{(k)}\simeq -B\,\ln^3 k^2/4k^2\pi^2$ which, in contrast to
the tree-level garlands of \cite{Halliwell}, is not additive in $k$.
\section{Conclusions: Euclidean quantum gravity and the Starobinsky model}
Elimination of the Hartle-Hawking vacuum instantons implies also
ruling out well-known solutions in the Starobinsky model of the
anomaly (and vacuum energy) driven deSitter expansion \cite{Starob}.
Such solutions (generalized to the case of a nonzero ``bare"
cosmological constant $3H^2$) can be obtained by the usual Wick
rotation from the solutions of (\ref{efeq}) with the thermal
contribution switched off. The corresponding Euclidean Friedmann
equation reads
\begin{equation}
\frac{\dot{a}^2}{a^2} - \frac{1}{a^2} +
\frac{B}{2}\left(\,\frac{\dot{a}^2}{a^2} - \frac{1}{a^2}\,
\right)^2+ H^2=0. \label{star}
\end{equation}
and has a generic solution of the form $a = \sin (h\tau)/h$,
$1/a^2-\dot a^2/a^2=h^2$, with the following two values of the
Hubble parameter $h=h_\pm$,
$h^2_\pm = (1/B)(1\pm \sqrt{1 - 2BH^2})$.
For $H = 0$ this is exactly the Euclidean version of
Starobinsky's solution \cite{Starob} with $h_+^2 = 2/B$. For nonzero
$H^2<1/2B$ we have two families of exactly deSitter instantons which
can be regarded as initial conditions for the ``generalized"
solutions of Starobinsky. However, all of them are ruled out by
their infinite positive Euclidean action.
What remains is a quasi-thermal ensemble of non-vacuum models in the
bounded cosmological constant range (\ref{2}) with $\Lambda_{\rm
min}>0$ and $\Lambda_{\rm max}=3/2B$ and with a finite value of
their effective Euclidean action. This implies the elimination of
the infrared catastrophe of $\Lambda\to 0$ and the ultimate solution
to the problem of unboundedness of the {\em on-shell} cosmological
action in Euclidean quantum gravity. As a byproduct, this suggests
also strong constraints on the cosmological constant apparently
applicable to the resolution of the cosmological constant problem.
\ack{A.K. is grateful to A.A. Starobinsky and A. Vilenkin for useful
discussions. A.B. is indebted to C.~Deffayet for stimulating
discussions and the hospitality of IAP in Paris. A.B. was supported
by the RFBR grant 05-02-17661 and the LSS 4401.2006.2 and SFB 375
grants. A.K. was partially supported by the RFBR grant 05-02-17450
and the grant LSS 1757.2006.2.}
\section*{References}
\section{Introduction}
\label{sec:Introduction}
\begin{figure}[!t]
\setlength\belowcaptionskip{-0.7\baselineskip}
\centering
\includegraphics[width=0.8\columnwidth]{Figures/intropic.png}
\caption{ Examples of humanoid robot teleoperation; from top left to bottom right corner: \cite{ramos2018}, \cite{darvish2019}, \cite{penco2019}, \cite{ishiguro2020bilateral}, \cite{jorgensen2019}, \cite{abi2018}, \cite{kim2013}, \cite{Cisneros2022Team}.}
\label{fig:intropic}
\end{figure}
There are many situations and environments where we need robots to replace humans at the site.
Despite the recent progress in robot cognition based on AI techniques, fully autonomous solutions are still far from producing socially and physically competent robot behaviors; that is why teleoperating robots (Fig.~\ref{fig:intropic}) acting as physical avatars of human workers at the site is the most reasonable solution.
In environments like construction sites, chemical plants, contaminated areas and space, teleoperated robots could be extremely valuable, relieving humans from any potential hazard.
In contrast to other conventional robotic platforms, the structure of humanoids is a better fit for environments and tasks that are designed for and performed by humans. The operational versatility of these robots makes them suitable for work activities that require a variety of complex mobility and manipulation skills, such as inspection, maintenance, and interaction with human operators.
In certain contexts such as telenursing, where human subjects are expected to interact with a teleoperated robot, the human-likeness factor is important since it increases the acceptability, social closeness, and legibility of its intentions~\cite{dragan2013a}.
In the literature, different attempts have been made to \textit{``deploy the senses, actions, and presence of a human to a remote location in real-time, leading to a more connected world''}~\cite{AnaAvatarXprize}.
Inspired by a visit to the Tachi Lab, the XPRIZE Foundation has recently launched the ANA Avatar XPRIZE global competition~\cite{AnaAvatarXprize}.
Previously, in response to the 2011 Fukushima Daiichi nuclear disaster, the DARPA Robotics Challenge (DRC) was launched to promote innovation in human-supervised robotic technology.
In space applications, rovers and mobile manipulators were teleoperated from aboard the International Space Station (ISS), in the context of METERON and Kontur-2 projects~\cite{kontur-meteron}. In 2019, the humanoid Skybot F-850 was rocketed to the ISS~\cite{skybot2019}; however, it turned out to have a design that did not work well, demonstrating that there is still work to do to get humanoids into space.
Humanoid robot teleoperation involves many multidisciplinary and interleaved challenges, ranging from dynamics and control to communication and human psychophysiology. Uniquely, due to their human-like appearance, societal expectations of these robots are high as well;
they are expected to do a wide range of tasks that are not expected from other types of robots. They are highly redundant with nonlinear, hybrid, and underactuated models.
While doing dynamic and agile motions with the feet like walking, running, or stepping over obstacles, they are supposed to perform dexterous power and precision manipulation.
At the same time,
they are expected to work alongside humans, be safe, friendly, and socially interact with others.
On the other hand, teleoperation interfaces and techniques should be designed such that the human operator receives minimal yet effective and informative haptic feedback from the humanoid robot, to cover for human errors, overcome communication delays, and, above all, be telepresent. Along with these challenges, the field is new, and due to its high resource demands for development, not many laboratories have been working on it.
Many efforts from the robotics community have been devoted to studying humanoid robots, teleoperation, evaluation metrics, or human-robot interaction.
Among them, the book on humanoid robotics \cite{goswami2019humanoid} studied comprehensively different aspects of humanoid robots, including their history, design, mechanics, control, simulation, and interaction.
Several survey papers likewise studied specific aspects of humanoid robots, for example humanoid dynamics~\cite{sugihara2020survey}, control \cite{yamamoto2020survey}, motion generation \cite{Tazaki2020survey}, or robot teleoperation interface design and metrics \cite{de2009survey}.
A primary work on bilateral teleoperation techniques has been presented by \cite{hokayem2006bilateral} as well.
Another seminal survey \cite{Goodrich2007Survey} covers many aspects of interactive robots, including their design, autonomy level, and human factors that helped us in articulating the current manuscript.
Another interesting survey \cite{goodrich2013teleoperation} highlights several aspects of humanoid teleoperation and autonomy.
However, \cite{goodrich2013teleoperation} is a decade old, and an up-to-date survey on the topic is missing, especially considering that humanoid teleoperation is a far-from-solved challenge and a highly active field of research where new solutions are proposed each year.
Following the workshop in~\cite{workshop2019}, this survey paper presents the latest results in humanoid robot teleoperation and outlines in detail the challenges that the research community faces to effectively deploy such systems in real-world scenarios.
Starting from what emerged from the workshop, we conducted a survey on teleoperation of humanoid robots. We present here the systems and devices that have been adopted so far to teleoperate humanoids (Sec.~\ref{sec:TeleoperationSystemDevices}) and how these robots have been modeled, retargeted, and controlled (Sec.~\ref{sec:RobotControl}). We also examine a
promising case of teleoperation in which the robot assists the user in accomplishing a desired task (Sec.~\ref{sec:AssistedTele}).
Later, we discuss complications along with some compensating solutions that arise due to non-ideal communication channels (Sec.~\ref{sec:CommunicationChannel}).
We explain the evaluation of teleoperation systems prior to development to meet the users' needs (Sec.~\ref{sec:EvaluationMetrics}).
Finally, discussions on current and potential applications and the associated challenges follow (Sec.~\ref{sec:Applications}).
\iffalse
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Figures/humanoids.pdf}
\caption{Humanoid robots better fit narrow passages, stairs, unstructured, or cluttered environments \cite{workshop2019}.}
\label{fig:env}
\end{figure}
\fi
\section{Teleoperation System and Devices}
\label{sec:TeleoperationSystemDevices}
\begin{figure}[!t]
\setlength\belowcaptionskip{-0.7\baselineskip}
\centering
\includegraphics[width=\columnwidth]{Figures/teleoperation-architecture-3.pdf}
\caption{Schematic architecture for teleoperating a humanoid.}
\label{fig:teleoperation_architecture}
\end{figure}
In the literature, the terms teleoperation and telexistence have been used interchangeably in different contexts. Telexistence refers to the technology that allows a human to virtually exist in a remote location through an avatar, experiencing real-time sensations from the remote site \cite{tachitelexistence}. Both the remote environment and the avatar can be real or virtual, but in this article we only consider a real environment and a surrogate humanoid robot as the avatar. Telexistence has also been referred to as telepresence in the literature \cite{tachitelexistence}.
The concept of teleoperation, on the other hand, still refers to a human operator remotely controlling a robot, but the focus is mainly put on performing tasks that require high dexterity in the remote location.
We use these terms interchangeably throughout this survey.
From another perspective, the teleoperation setup represents an interactive system where the robot imitates the human's actions to reach a common objective \cite{Goodrich2007Survey}.
In a teleoperation setup, the user is the person who teleoperates the humanoid robot and identifies the teleoperation goal, i.e. intended outcome, through interfaces.
The interfaces are the means of the interaction between the user and the robot.
The nature of the exchanged information, the constraints, the task requirements, and the degree of shared autonomy determine different preferences for the interface modalities, according to specific metrics that will be discussed later.
Moreover, the choice of the interfaces should make the user feel comfortable, hence enabling a natural and intuitive teleoperation.
\subsection{Teleoperation Architecture}
\label{sec:TeleoperationArchitecture }
Fig.~\ref{fig:teleoperation_architecture} shows a schematic view of the architecture for teleoperating a humanoid robot.
First, the human kinematic and dynamic information is measured and transmitted to drive the humanoid robot motion for teleoperation.
More complex retargeting methods (i.e., mapping of the human motion to the robot motion), employed in assisted teleoperation systems (Sec.~\ref{sec:AssistedTele}), may need the estimation of the user reaction forces/torques.
In some cases, physiological signals are also measured in order to estimate the psychophysiological state of the user, which can help enhance the performance of the teleoperation.
On the basis of the estimated states, the retargeting policy is selected and the references are provided to the robot accordingly.
Teleoperation systems are employed not only for telemanipulation scenarios but also for social teleinteractions (i.e. remotely interacting with other people). In this case, the robot's anthropomorphic motion and social cues such as facial expressions can enhance the interaction experience. Therefore, rich human sensory information is indispensable.
To effectively teleoperate the robot, the user should make proper decisions; therefore he/she should receive various feedback from the remote environment and the robot.
In many cases, the sensors for perceiving the human data and the technology to provide feedback to the user are integrated together in an interface.
In the rest of the section, we will discuss different available interfaces and sensor technologies in teleoperation scenarios,
whereas their design and evaluation will be discussed later in Sec.~\ref{sec:EvaluationMetrics}.
The retargeting block (Fig.~\ref{fig:teleoperation_architecture}) maps human sensory information to the reference behavior for the robot teleoperation, hence the human can be considered as master or leader, and the robot as slave or follower.
We can discern two retargeting strategies: unilateral and bilateral teleoperation. In the unilateral approach, the coupling between the human and robot takes place only in one direction.
The human operator can still receive haptic feedback either as a kinesthetic cue not directly related to the contact force being generated by the robot or as an indirect force in a passive part of her/his body not commanding the robot.
But in bilateral systems, human and humanoid robot are directly coupled. The choice of bilateral retargeting depends on the task, communication rate, and degree of shared autonomy, as will be discussed~in~Sec.~\ref{sec:AssistedTele}.
The human retargeted information, together with the feedback from the robot, are streamed over a communication channel that could be non-ideal. In fact, long distances between the operator and the robot or poor network conditions may induce delays in the flow of information, packet loss and limited bandwidth, adversely affecting the teleoperation experience.
Sec.~\ref{sec:CommunicationChannel} details the approaches in the literature to teleoperate robots in such conditions.
Finally, the robot local control loop generates the low-level commands, i.e., joint position, velocity, or torque references, for the robot, taking into account the human references (Sec.~\ref{sec:RobotControl}).
\subsection{Human Sensory Measurement Devices}
\label{sec:HumanSensoryMeasurementsDevices}
In this section, we provide a detailed description of the technologies available to sense various human states,
including the advances to measure the human motion, the physiological states, and the interaction forces with the environment.
We report in TABLE \ref{tab:teleoperation} the various measurement devices adopted so far for humanoid robot teleoperation.
\subsubsection{Human kinematics and dynamics measurements}
To provide the references for the robot motion, we need to sense human intentions.
For simple teleoperation cases, the measurements may be granted through simple interfaces such as keyboard, mouse, or joystick commands \cite{ando2020master}.
However, for a more complex system like a humanoid robot, those simple interfaces may not be enough, especially when the user wants to exert a high level of control authority over the robot. Therefore, the need for natural interfaces for effectively commanding the robot arises.
We can consider solutions benefiting from the similarity of the human and the humanoid robot anthropomorphic geometries, i.e., providing the references to the robot limbs with spatial analogies to those of the human (natural mapping).
Therefore, the need to measure the human kinematics emerges.
Different technologies have been employed in the literature to measure the motion of the main human limbs, such as legs, torso, arms, head, and hands.
A ubiquitous option is Inertial Measurement Unit (IMU)-based wearable technology. In this context, two cases are possible: either a segregated set of IMU sensors is placed throughout the body to measure the human motion, or an integrated network of sensors is worn over the whole body \cite{darvish2019, penco2019}.
The former case normally provides the raw IMU values, while the latter directly provides the human limb movements.
In the second case, a calibration process computes the sensor transformations with respect to the body segments \cite{roetenberg2009xsens}. This technology is of particular interest because of the high accuracy and frequency of the retrieved human motion information and the absence of occlusion problems.
However, its accuracy may suffer from magnetic-field disturbances and from displacement of the sensors with respect to their initial placement.
Yet another recent wearable technology to capture the hand pose is a stretchable soft glove with embedded distributed capacitive sensors \cite{glauser2019interactive}.
A review about the textile-based wearable technology used to estimate the human kinematics is provided in \cite{wang2020textile}.
Optical sensors are another technology used to capture human motion, with active and passive variants. In the active case, a light pattern is emitted and its reflection is sensed by the optical sensors; employed technologies include depth sensors and optical motion capture systems.
In the passive case, RGB monocular cameras (regular cameras) or binocular cameras (stereo cameras) are used to track the human motion. With optical sensors, a skeleton of the human body is generated and tracked in 2-D or 3-D Cartesian space. The main problems with these methods are occlusion and the low portability of the setup.
To track the users' motion in bilateral teleoperation scenarios, exoskeletons are often used.
In this case, the exoskeleton model and the encoder data are fed to the forward kinematics to estimate the human link's poses and velocities~\cite{ramos2019dynamic, ishiguro2020bilateral}.
The previously introduced technologies can be used to measure the human gait information with locomotion analysis.
Conventional or omnidirectional treadmills are employed in the literature for this purpose~\cite{elobaid2018a}. While a treadmill can be used for even terrain, it cannot be used to retarget locomotion on uneven terrain.
To respond to this shortcoming, a cockpit-like teleoperation setup has been recently proposed~\cite{ishiguro2020bilateral}.
To estimate the interaction forces between the human and the teleoperation setup, force-torque sensors measuring the human wrenches can be integrated in ground plates, shoes, or exoskeletons.
Richer information can be obtained by distributed capacitance sensors that measure the pressure manifold.
\begin{table*}
\setlength\belowcaptionskip{-0.7\baselineskip}
\setlength\tabcolsep{1.5pt}
\centering
\arrayrulecolor{black}
\caption{Main works and technologies related to the teleoperation of humanoids.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{4}{*}{ ref} &
\multirow{4}{*}{ robot} &
\multicolumn{11}{c|}{teleoperation devices} &
\multicolumn{4}{c|}{retargeting \& planning} &
\multicolumn{3}{c|}{stabilizer} &
\multicolumn{4}{c|}{ \makecell{whole-body \\ controller}} &
\multicolumn{2}{c|}{ \makecell{low-level \\ joint control}} \\
\cline{3-26}
&
&
\multicolumn{4}{c!{\color{black}\vrule}}{ \makecell{ human motion \\ measurement} } &
\multicolumn{7}{c!{\color{black}\vrule}}{feedback} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~GUI-based planner}}} &
\multicolumn{3}{c!{\color{black}\vrule}}{\makecell{retargeting \& motion \\ generation}} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~ZMP}}} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~DCM-ZMP}}} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~contact wrench, \\ force distribution }}} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~IK/velocity IK}}} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~inverse dynamics}}} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~momentum-based}}} &
\multirow{3}{*}{\rotatebox[]{-90}{\makecell{~QP method}}} &
\multirow{3}{0.6cm}{\rotatebox[]{-90}{\makecell{~joint position}}} &
\multirow{3}{0.6cm}{\rotatebox[]{-90}{\makecell{~joint torque}}} \\
\cline{3-13}\cline{15-17}
&
&
\multirow{2}{*}{\rotatebox[]{-90}{\makecell{~motion-capture \\ suit}}} &
\multirow{2}{*}{\rotatebox[]{-90}{\makecell{ ~optical-tracking}}} &
\multirow{2}{*}{\rotatebox[]{-90}{\makecell{ ~~~~~exoskeleton}}} &
\multirow{2}{*}{\rotatebox[]{-90}{\makecell{ mouse,keyboard,\\ \,joystick,treadmill~}}} &
\multicolumn{2}{c!{\color{black}\vrule}}{visual} &
\multicolumn{5}{c!{\color{black}\vrule}}{haptic} &
&
\multirow{2}{*}{\rotatebox[]{-90}{\makecell{~footstep motion \\ generation}}} &
\multirow{2}{*}{\rotatebox[]{-90}{\makecell{~upper-body \\ retargeting}}} &
\multirow{2}{*}{\rotatebox[]{-90}{\makecell{~whole-body \\ retargeting}}} &
&
&
&
&
&
&
&
&
\\
\cline{7-13}
&
&
&
&
&
&
\rotatebox[]{-90}{\makecell{ ~mono display, \\GUI~}} &
\rotatebox[]{-90}{\makecell{ ~Stereo, \\ AR/VR headset~}} &
\rotatebox[]{-90}{\makecell{ ~whole-body \\ exoskeleton}} &
\rotatebox[]{-90}{\makecell{ ~dual-arm \\exoskeleton}} &
\rotatebox[]{-90}{\makecell{ ~glove}} &
\rotatebox[]{-90}{\makecell{ ~balance}} &
\rotatebox[]{-90}{\makecell{ ~vibrotactile}} &
&
&
&
&
&
&
&
&
&
&
&
&
\\
\hline\hline
\rowcolor{Gray}
\cite{darvish2019} & iCub Genova04 & \checkmark & ~ & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & ~ & ~
& ~ & \checkmark & \checkmark & \checkmark & ~ & \checkmark & ~ & \checkmark & ~ & \checkmark & \checkmark & \checkmark & \checkmark \\
\cite{dafarra2022icub3} & iCub3 & \checkmark & \checkmark & ~ & \checkmark & ~ & \checkmark & ~ & ~ & \checkmark & ~ & \checkmark & ~ & \checkmark & \checkmark & ~ & ~ & \checkmark & ~ & \checkmark & ~ & ~ & \checkmark & \checkmark & ~ \\
\rowcolor{Gray}
\cite{Cisneros2022Team} & HRP-4CR & \checkmark & ~ & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & ~ & \checkmark & ~ & \checkmark & \checkmark & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & \checkmark & \checkmark & ~ \\
\cite{elobaid2018a} &iCub Genova04 & ~ & \checkmark & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & ~ & ~
& ~ & \checkmark & \checkmark & ~ & ~ & \checkmark & ~ & \checkmark & ~ & ~ & \checkmark & \checkmark & ~ \\
\rowcolor{Gray}
\cite{jorgensen2019} & Valkyrie & ~ & ~ & ~ & \checkmark & \checkmark & ~ & ~ & ~ & ~ & ~ & ~
& \checkmark & \checkmark & ~ & ~ & ~ & ~ & \checkmark & ~ & \checkmark & \checkmark & \checkmark & ~ & \checkmark \\
\cite{johnson2017team} & Atlas & ~ & ~ & ~ & \checkmark & \checkmark & ~ & ~ & ~ & ~ & ~ & ~
& ~ & \checkmark & ~ & ~ & ~ & ~ & \checkmark & ~ & \checkmark & \checkmark & \checkmark & ~ & \checkmark \\
\rowcolor{Gray}
\cite{chagas2021humanoid} & DRC-Hubo & ~ & \checkmark & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & ~ & ~ & ~ & \checkmark & \checkmark & ~ & \checkmark & ~ & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ \\
\cite{penco2019} & iCub Nancy01 & \checkmark & ~ & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & ~ & ~
& ~ & ~ & ~ & \checkmark & \checkmark & ~ & ~ & \checkmark & ~ & ~ & \checkmark & \checkmark & ~ \\
\rowcolor{Gray}
\cite{zucker2015} & DRC-Hubo Beta & ~ & ~ & ~ & \checkmark & \checkmark & ~ & ~ & ~ & ~ & ~ & ~
& \checkmark & \checkmark & ~ & ~ & \checkmark & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & \checkmark \\
\cite{cisneros2016} & HRP-2KAI & ~ & ~ & ~ & \checkmark & \checkmark & ~ & ~ & ~ & ~ & ~ & ~
& \checkmark & ~ & ~ & ~ & ~ & \checkmark & \checkmark & \checkmark & ~ & ~ & ~ & \checkmark & ~ \\
\rowcolor{Gray}
\cite{ishiguro2020bilateral} & JAXON & ~ & ~ & \checkmark & ~ & ~ & \checkmark & \checkmark & ~ & ~ & ~ & ~
& ~ & ~ & ~ & \checkmark & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ \\
\cite{ramos2018} & little HERMES & ~ & ~ & \checkmark & ~ & ~ & \checkmark & \checkmark & ~ & ~ & \checkmark & ~
& ~ & ~ & ~ & ~ & ~ & ~ & \checkmark & ~ & ~ & \checkmark & ~ & ~ & \checkmark \\
\rowcolor{Gray}
\cite{kim2013} & MAHRU & \checkmark & ~ & ~ & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & ~
& ~ & \checkmark & \checkmark & ~ & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ \\
\cite{schwarz2021nimbro} & NimbRo Avatar & ~ & ~ & \checkmark & \checkmark & ~ & \checkmark & ~ & \checkmark & \checkmark & ~ & ~ & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \checkmark \\
\rowcolor{Gray}
\cite{eramos2015} & HRP-2 & ~ & \checkmark & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~
& ~ & \checkmark & ~ & \checkmark & \checkmark & ~ & ~ & ~ & \checkmark & ~ & \checkmark & ~ & ~ \\
\cite{ishiguro2018} & JAXON & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & ~
& ~ & ~ & ~ & \checkmark & ~ & \checkmark & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ \\
\rowcolor{Gray}
\cite{tachi2020telesar} & TELESAR VI & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ \\
\cite{hu2014} & TORO & \checkmark & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~
& ~ & ~ & ~ & \checkmark & ~ & \checkmark & ~ & \checkmark & ~ & ~ & \checkmark & \checkmark & ~ \\
\rowcolor{Gray}
\cite{abi2018} & TORO & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & \checkmark & ~ & ~ & ~
& ~ & ~ & ~ & ~ & ~ & ~ & \checkmark & ~ & ~ & \checkmark & \checkmark & ~ & \checkmark \\
\cite{brygo2014b} & COMAN & ~ & \checkmark & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \checkmark & \checkmark
& ~ & ~ & ~ & ~ & ~ & ~ & ~ & \checkmark & ~ & ~ & ~ & \checkmark & ~ \\
\rowcolor{Gray}
\cite{difava2016} & HRP-4 & \checkmark & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~
& ~ & ~ & ~ & \checkmark & ~ & ~ & ~ & ~ & \checkmark & ~ & \checkmark & \checkmark & ~ \\
\cite{peternel2013} & Fujitsu HOAP-3 & ~ & \checkmark & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \checkmark & ~
& ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \checkmark & ~ \\
\hline
\end{tabular}
\arrayrulecolor{black}
\label{tab:teleoperation}
\end{table*}
\subsubsection{Human physiological measurements}
Among the different sensors available to measure human physiological activities, we briefly describe those that have been mostly used in teleoperation and robotics literature.
An electromyography (EMG) sensor provides a measure of the muscle activity, i.e., contraction, in response to the neural stimulation action potential \cite{staudenmann2010methodological}. It works by measuring the difference in electrical potential generated in the muscle fibres, by employing two or more electrodes. There are two types of EMG sensors: the surface EMG and the intramuscular EMG. The former records the muscle activity from above the skin (therefore, noninvasive), while the latter measures the muscle activity by inserting needle electrodes into the muscle (intrusive). The main problem with EMG sensors, especially the surface one, is the low signal-to-noise ratio, which is the main barrier to achieving desirable performance \cite{chowdhury2013surface}. The EMG signals are used in the literature for teleoperating a robot or a prosthesis,
for estimation of the human effort and muscle forces \cite{staudenmann2010methodological}, or for estimation of the muscle stiffness. The EMG signals can anticipate human motions by measuring the muscle activities within a few milliseconds in advance of force generation; this could be exploited to anticipate the human operator's motion, enhancing the teleoperation.
Electroencephalography (EEG) sensors can be employed to identify the user mental state. They are most widely used in non-invasive brain-machine interface (BMI) and they monitor the brainwaves resulting from the neural activity of the brain.
The measurement is done by placing several electrodes on the scalp and measuring the small electrical signal fluctuations \cite{niedermeyer2005electroencephalography}.
Other sensors that could be employed in a telexistence scenario for an advanced estimation of the human psycho-physiological state include Heart Rate Monitor sensors, which estimate the maximal oxygen uptake and the heart rate variability \cite{achten2003heart}, useful to measure the user's fatigue; Electro-Oculography (EOG) sensors or video-based eye-trackers, which estimate the gaze position based on the pupil or iris position \cite{sugano2015self}, important for the visual feedback given by Virtual Reality (VR) or Augmented Reality (AR) goggles; and capacitive thin-film humidity sensors, which measure the humidity of the gas flow of the human skin \cite{ohhashi1998human}, a good indicator of the human emotional stimuli and stress level.
\subsection{Feedback Interfaces: Robot to Human}
\label{sec:FeedbackInterfaceDevices}
A crucial point in robot teleoperation is to sufficiently inform the human operator of the states of the robot and its work site, so that she/he can comprehend the situation or feel physically present at the site, producing effective robot behaviors.
TABLE \ref{tab:teleoperation} summarizes the different feedback devices that have been adopted for humanoid teleoperation.
\subsubsection{Visual feedback}
A conventional way to provide situation awareness to the human operator is through visual feedback. Visual information allows the user to localize themselves and other humans or objects in the remote environment.
Graphical User Interfaces (GUIs) were widely used by the teams participating at the DRC to remotely pilot their robots through the competition tasks~\cite{johnson2017team,cisneros2016,zucker2015}.
Not only the information coming from the RGB cameras of the robot but also other information such as depth, LIDAR, RADAR, and thermographic maps of the remote environment was displayed to the user.
An alternative way to give visual feedback to the human operator is through VR headsets, connected to the robot cameras.
Although this has been demonstrated to be effective in several robotic experiments \cite{kim2013,ishiguro2018,darvish2019,penco2019}, during locomotion the users often suffer from
motion sickness, since the images from the robot cameras are not stabilized, while the images perceived by the human eyes are automatically stabilized on the retina thanks to compensatory eye reflexes.
This aspect could be improved by adopting digital image stabilization techniques or AR \cite{2010ryueyes}.
Another issue concerning the visual feedback is related to the limited bandwidth of the communication link, which can delay the stream of information.
Also, human reaction time to visual input is inherently slow ($\approx$250--300 ms), so a higher delay in the stream of information can be perceived by the user and further aggravate the motion sickness. If haptic feedback is also streamed to the operator, even lower delays can be disturbing; in fact, human reaction to haptic information is much faster (around 100--150 ms).
\subsubsection{Haptic feedback}
Visual feedback alone is often not sufficient for many real-world applications, especially those involving power manipulation (with high forces) or interaction with other human subjects, where the dynamics of the robot, the contact forces with the environment, and the human-robot interaction forces are of crucial importance. In such scenarios, haptic feedback is also required to exploit the human operator's motor skills in order to augment the robot performance.
There are different technologies available in the literature to provide haptic feedback to the human. Force feedback, tactile and vibro-tactile feedback are the most used in teleoperation scenarios.
The interface providing kinesthetic force feedback can be similar to an exoskeleton \cite{Wang2015} or can be cable driven.
The latter only provides a tension force feedback, while the former provides force feedback in different directions.
Dual-arm exoskeletons have been proposed to guide the teleoperated robot during manipulation tasks while receiving haptic feedback through the same actuated exoskeleton arms \cite{tachitelexistence, abi2018}. Recently, a whole-body exoskeleton cockpit has also been proposed to teleoperate the JAXON humanoid robot during heavy manipulation tasks and stepping on uneven terrain \cite{ishiguro2020bilateral}, providing force feedback on all limbs.
To convey the sense of touch, tactile displays have been adopted in the literature \cite{wagner2004design}.
These can also provide temperature feedback to the user.
Other employed haptic feedback modalities are vibrotactile and air-pressure ones, used as a sensory substitution to transmit the sense of touch, texture, or force, to suggest directions, or to catch the attention of the user \cite{brygo2014b}.
All these types of haptic feedback are combined in the telexistence system TELESAR V \cite{tachitelexistence}, which has been developed to provide complete cutaneous sensations to the human operator.
The idea is that different physical stimuli give rise to the same sensation in humans and are perceived as identical. This is due to the fact that human skin has limited receptors and can perceive only force, vibration, and temperature, which in \cite{hapticcolors} are defined as \textit{haptic primary colors}. It is thus sufficient to combine these \textit{colors} in order to reproduce any cutaneous sensation without actually touching the real object.
\subsubsection{Balance feedback}
Haptic feedback can also be used to transmit to the operator a sense of the robot's balance.
The idea behind the balance feedback is to transfer to the operator the information about the \textit{effect of disturbances} over the robot dynamics or stability instead of directly mapping to the human the disturbance forces applied to the robot.
In \cite{brygo2014b}, Brygo \textit{et al.} proposed to provide the feedback of the robot's balance state by means of a vibro-tactile belt.
Also, Peternel and Babic~\cite{peternel2013} proposed a cable-driven haptic interface that maps the state of the robot's balance to the human demonstrator.
Alternatively, Abi-Farraj \textit{et al.}~\cite{abi2018} introduced a task-relevant haptic feedback interface composed of two lightweight robotic arms, through which the user receives high-level informative haptic cues about the impact of her/his potential actions on the robot's balance.
These studies do not investigate the case of dynamic behaviors, but are rather limited to double support scenarios.
In~\cite{ramos2018} instead, simultaneous stepping is enabled via bilateral coupling between the human operator, wearing a Balance Feedback Interface (BFI), and the robot. The BFI is composed of a passive exoskeleton which captures the human body posture and a parallel mechanism which applies feedback forces to the operator's torso.
\subsubsection{Auditory feedback}
Auditory feedback is another means of communication. It is mainly provided to the user through headphones, single or multiple speakers. Auditory information can be used for different purposes: to enable the user to communicate with others in the remote environment through the robot, to increase the user situational awareness, to localize the sound source by using several microphones, or to detect the collision of the robot links with the environment.
The user and the teleoperated robot may also communicate through the audio channel; e.g., for state transitions.
\subsection{Graphical User Interfaces (GUIs)}
GUIs are used in the literature to provide both feedback to the user and give commands to the robot.
In the DRC, operators were able to supervise the task execution through a task panel, using manual interfaces in case they needed to make corrections.
The main window consisted of a 2D and 3D visualization environment, the robot's current and goal states, motion plans, together with other perception sensor data such as hardware driver status~\cite{nakaoka2015task, zucker2015}.
Due to the limited robot cognitive skills, perception tasks were shared among the users and the robot.
A common approach adopted by the different teams was to guide the robot perception algorithms to fit the object models to the point cloud, used to estimate the 3D pose of objects of interest in the environment. For example, operators were annotating search regions by clicking on displayed camera images or by clicking on 3D positions in the point cloud.
Following that, markers were used to identify the goal pose of the robot arm end effectors \cite{johnson2017team}, the robot configuration, or with a higher autonomy level to define the goal pose of objects for manipulation tasks \cite{nakaoka2015task}.
In these cases, robots tried to find an obstacle-free path (related to Sec. \ref{sec:kinematic-retargeting::high-level}), and show the generated path to the operator for verification prior to execution~\cite{nakaoka2015task}.
Throughout this process, the robots' lower-body teleoperation was treated differently. When the robot's desired base/CoM goal or footsteps were marked by the user, an obstacle-free path (Sec.~\ref{sec:kinematic-retargeting::high-level}) and footsteps trajectories were automatically generated (Sec.~\ref{sec:kinematic-retargeting::low-level}). In this process, footsteps were adjusted to uneven terrain given the estimated height-field data~\cite{nakaoka2015task}.
GUIs have also been used to command frequent high-level tasks to the robot by encoding them as state machines or as task sequences~\cite{nakaoka2015task}.
In the DRC, open-source software tools such as RViz and Choreonoid were commonly used~\cite{zucker2015, nakaoka2015task}.
Custom functionalities were added to them using software plugins. Today, many of these functionalities can be integrated with VR and AR devices.
\section{Robot Control}
\label{sec:RobotControl}
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{Figures/section3/retargeting-control-architecture.pdf}
\caption{Flowchart of a humanoid robot \modify{retargeting and} control architecture (red color references are coming from \modify{human interfaces}).}
\label{fig:controller_architecture}
\end{figure*}
Humanoid robot control introduces many challenges to teleoperation scenarios.
For example, the robot model is nonlinear, hybrid, and underactuated with high degrees of freedom (DOFs).
The real robot dynamics also differs from its model in terms of link elasticity, joint backlash, and imperfect joint torque control tracking.
Moreover, even if we model the robot, the model of the environment (that the robot is interacting with) is not precise (partially known) and is subject to changes.
All these challenges, entangled with the fact that the robot controller should run online, make the problem complicated.
Over the past decades, many control architectures have been proposed to teleoperate humanoid robots and to overcome the introduced challenges. Model-based optimal control architectures are the foremost technique used in the literature to address the problem of humanoid robot control. During the DRC, most of the finalist control architectures converged to a similar design \cite{feng2015optimization, johnson2017team}, where robot trajectory planning and robot stabilization are achieved by two different controller blocks. This architecture is conceptually illustrated in Figure \ref{fig:controller_architecture}. This approach needs the humanoid's models (Section \ref{sec:Modeling}) as well as the references coming from the retargeting module.
Control algorithms are described in Section \ref{sec:controllerArchitecture}.
More recently, researchers have tried to solve trajectory planning and robot stabilization together in a single control problem (Section \ref{sec:ControlDesiderata}).
\subsection{Modeling}
\label{sec:Modeling}
\subsubsection{Notations and complete humanoid robot models}
The human and the humanoid robot are modeled as multi-body mechanical systems with $n+1$ rigid bodies, called links, connected with $n$ joints, each with one degree of freedom.
The configuration of a humanoid robot with $n$ joints depends on the robot shape, i.e., joint angles $\bm{s}\in \mathbb{R}^n$, and the position $^{\mathcal{I}}\bm{p}_{\mathcal{B}}\in \mathbb{R}^3$ and orientation $^{\mathcal{I}}{\bm{R}}_{\mathcal{B}}\in SO(3)$ of the floating base (usually associated with the pelvis, ${\mathcal{B}}$) relative to the inertial or world frame $\mathcal{I}$\footnote{ With the abuse of notation, we will drop $\mathcal{I}$ in formulas for simplicity.}. The robot configuration is indicated by $\bm{q} = (^{\mathcal{I}}\bm{p}_{\mathcal{B}}, ^{\mathcal{I}}{\bm{R}}_{\mathcal{B}}, \bm{s}) \in \mathbb{R}^3 \times SO(3) \times \mathbb{R}^n$. The pose of a frame attached to a robot link $\mathcal{A}$ is computed via $(^{\mathcal{I}}\bm{p}_{\mathcal{A}}, ^{\mathcal{I}}\bm{R}_{\mathcal{A}}) = \bm{\mathcal{H}}_A(\bm{q})$, where $\bm{\mathcal{H}}_A(\cdot) : \mathbb{R}^3 \times SO(3) \times \mathbb{R}^n \to \mathbb{R}^3 \times SO(3)$ is the geometrical forward kinematics.
The velocity vector $\bm{\nu} \in \mathbb{R}^{n+6}$ of the model is summarized in the vector $\bm{\nu}=(^{\mathcal{I}}\dot{\bm{p}}_{\mathcal{B}},^{\mathcal{I}}{\bm{\omega}}_{\mathcal{B}}, \dot{\bm{s}})$, where $^{\mathcal{I}}\dot{\bm{p}}_{\mathcal{B}}$, $^{\mathcal{I}}{\bm{\omega}}_{\mathcal{B}}$ are the base linear and rotational (angular) velocity of the base frame, and $\dot{\bm{s}}$ is the joints velocity vector of the robot.
To compute the velocity of a frame $\mathcal{A}$ attached to the robot, i.e., $ ^{\mathcal{I}}\bm{v}_{\mathcal{A}}= (^{\mathcal{I}}\dot{\bm{p}}_{\mathcal{A}},^{\mathcal{I}}{\bm{\omega}}_{\mathcal{A}}) \in \mathbb{R}^{3} \times \mathbb{R}^{3}$, we have
\begin{equation}
^{\mathcal{I}}\bm{v}_{\mathcal{A}}= \bm{\mathcal{J}}_A(\bm{q}) \bm{\nu}.
\label{eq:jacobian}
\end{equation}
$\bm{\mathcal{J}}_A(\bm{q}) \in \mathbb{R}^{6 \times (n+6)}$ is called the \textit{Jacobian} of $\mathcal{A}$ with the linear and angular parts.
Hence, the following affine relation between the robot velocity vector and centroidal momentum, the whole-body momentum of a robot about its CoM, holds:
\begin{equation}
\bm{h}=\bm{A}(\bm{q})\bm{\nu},
\label{eq:CMM}
\end{equation}
where $\bm{h} \in \mathbb{R}^{6}$. According to Newton-Euler's laws of motion and \eqref{eq:CMM}, the rate of change of centroidal momentum is \cite{orin2013centroidal}:
\begin{equation}
\begin{array}{c}
\dot{\bm{h}}
= \bm{A}\dot{\bm{\nu}}+\dot{\bm{A}}\bm{\nu} =\bm{f}^{g}+\sum_{i}\bm{\mathcal{J}}_{i}^{T}\bm{f}^{gr}_{i}+\sum_{j}\bm{\mathcal{J}}_{j}^{T}\bm{f}^{ext}_{j} \\
= \bm{f}^{g}+{\bm{\mathcal{J}}^{gr}}^{T}\bm{f}^{gr}+\sum_{j}\bm{\mathcal{J}}_{j}^{T}\bm{f}^{ext}_{j}
\label{eq:eulermomentum}
\end{array}
\end{equation}
where $\bm{f}^g=(m\bm{g}^T, \bm{0}_{3\times{1}}^T)$
is the wrench due to gravity, $\bm{f}^{gr}_{i}$ and $\bm{f}^{ext}_{j}$ are the $i$-th ground reaction wrench and the $j$-th other external wrench applied to the robot, and $\bm{\mathcal{J}}_{j}(\bm{q})$ is the Jacobian matrix associated with the application point of the $j$-th external wrench.
All ground reaction wrenches $\bm{f}^{gr}$ have been written in a single matrix equation, with $\bm{\mathcal{J}}^{gr}$ being all the associated Jacobian matrices from the application points of the external ground reaction wrenches to the centroidal frame, the frame with the world orientation and the instantaneous CoM origin.
The $n+6$ equations of dynamical model of the robot, with all $n_c$ contact forces applied on the robot, is described by~\cite{cisneros2020inverse}:
\begin{equation}
\bm{M}(\bm{q}) \dot{\bm{\nu}} + \bm{C}(\bm{q},\bm{\nu})\bm{\nu} + \bm{g}(\bm{q}) = \bm{B}\bm{\tau} + \sum_{k=1}^{n_c}\bm{\mathcal{J}}_{k}^{T}(\bm{q})\bm{f}^{c}_{k},
\label{eq:dyn}
\end{equation}
with $\bm{M}(\bm{q})$ being the symmetric positive definite inertia matrix of the robot, $\bm{C}(\bm{q},\bm{\nu})$ the Coriolis and centrifugal matrix, $\bm{g}(\bm{q})$ the vector of gravitational terms, $\bm{B}=(\bm{0}_{n\times{6}},\bm{1}_n)^T$ a selector matrix, $\bm{\tau}$ the vector of actuator joint torques, and $\bm{f}^{c}_{k}$ the vector of the $k$-th contact wrench acting on the robot.
More information about the humanoid robot modeling can be found in \cite{sugihara2020survey}.
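As a minimal illustration of the forward kinematics $\bm{\mathcal{H}}_A(\cdot)$ and the Jacobian relation \eqref{eq:jacobian}, the following Python sketch considers a fixed-base planar two-link arm, so the floating-base part of the configuration is dropped; the link lengths and joint values are hypothetical and serve only to check the analytic Jacobian against a finite-difference approximation.
\begin{verbatim}
import numpy as np

# Toy planar 2-link arm (fixed base, so the floating-base part of q is omitted).
# Link lengths and joint values are hypothetical, for illustration only.
L1, L2 = 0.5, 0.4

def forward_kinematics(s):
    # End-effector position of frame A for joint angles s = (s1, s2).
    s1, s2 = s
    return np.array([L1*np.cos(s1) + L2*np.cos(s1 + s2),
                     L1*np.sin(s1) + L2*np.sin(s1 + s2)])

def jacobian(s):
    # Linear part of the Jacobian J_A(q) of the same frame.
    s1, s2 = s
    return np.array([[-L1*np.sin(s1) - L2*np.sin(s1 + s2), -L2*np.sin(s1 + s2)],
                     [ L1*np.cos(s1) + L2*np.cos(s1 + s2),  L2*np.cos(s1 + s2)]])

s     = np.array([0.3, 0.6])    # joint angles
s_dot = np.array([0.1, -0.2])   # joint velocities (nu for a fixed-base robot)

v_analytic = jacobian(s) @ s_dot        # frame velocity via the Jacobian relation
dt = 1e-6                               # finite-difference check of the same velocity
v_numeric = (forward_kinematics(s + dt*s_dot) - forward_kinematics(s)) / dt
print(v_analytic, v_numeric)
\end{verbatim}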
\subsubsection{Simplified humanoid robot models}
Various simplified models are proposed in the literature in order to extract intuitive control heuristics and for real-time lower order motion planning.
Some of them include the Linear Inverted Pendulum Plus Flywheel \cite{pratt2006}, the Spring-Loaded Inverted Pendulum (SLIP) \cite{Wensing_SLIP}, the Cart-Table model \cite{Cart-Table}, and the centroidal dynamics model \cite{orin2013centroidal}.
The most well-known approximation of humanoid dynamics is the inverted pendulum model \cite{kajita20013d}.
In this model, the support foot (base joint) is connected through a variable length link to the robot center of mass (CoM). Assuming a constant height for the inverted pendulum \cite{kajita20013d}, one can derive the equation of motion of the \textit{Linear Inverted Pendulum Model} (LIPM) by:
\begin{equation}
\ddot{\bm{x}} = \frac{g}{z_0}({\bm{x} - \bm{x}_{b}}),
\label{eq:lipm}
\end{equation}
where $g$ is the gravitational acceleration constant, $z_0$ is the constant height, $\bm{x} \in \mathbb{R}^2$ is the CoM coordinate vector, and $\bm{x}_{b} \in \mathbb{R}^2$ is the base of the LIPM coordinate vector.
The base of the LIPM is often assumed to be the Zero Moment Point (ZMP) (equivalent to the center of pressure, CoP) of the humanoid robot \cite{vukobratovic1972stability}.
The LIPM dynamics can be divided into stable and unstable modes, where the unstable mode is referred to as the (instantaneous) Capture Point \cite{pratt2006}, Divergent Component of Motion (DCM) \cite{takenaka2009real, englsberger2015}, or Extrapolated Center of Mass \cite{hof2008extrapolated} in the literature. The stable mode is called Convergent Component of Motion (CCM) \cite{englsberger2015}. The DCM dynamics is characterised by a first-order system as:
\begin{equation}
{\bm{\xi}} = \bm{x} + b{\bm{\dot{x}}},
\label{eq:dcmCOM}
\end{equation}
where $\bm{\xi}$ and $b = \sqrt{\frac{z_0}{g}}$ are the DCM variable and the time constant, respectively. This equation shows that the CoM follows the DCM.
Differentiating \eqref{eq:dcmCOM} and replacing into \eqref{eq:lipm} results in:
\begin{equation}
\dot{\bm{\xi}} = \frac{1}{b} (\bm{\xi}- \bm{x}_b).
\label{eq:dcm}
\end{equation}
Equations \eqref{eq:dcmCOM} and \eqref{eq:dcm} together represent the dynamics of LIPM.
In the literature, the notion of DCM is used both for planning the footsteps and for stabilizing the robot motion.
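To make these relations concrete, the following minimal Python sketch (with hypothetical numbers, not tied to any specific robot) integrates the LIPM \eqref{eq:lipm} with a fixed base point and compares the resulting DCM with the closed-form solution of \eqref{eq:dcm}, illustrating its exponential divergence.
\begin{verbatim}
import numpy as np

# Forward simulation of the 1-D LIPM (eq. (lipm)) with a fixed base point x_b,
# compared against the closed-form solution of the DCM dynamics (eq. (dcm)).
# All numerical values are hypothetical and only illustrate the equations.
g, z0 = 9.81, 0.8
b = np.sqrt(z0 / g)                 # DCM time constant
dt, T = 1e-3, 1.0

x, x_dot, x_b = 0.02, 0.0, 0.0      # CoM position/velocity and LIPM base (ZMP)
xi0 = x + b * x_dot                 # initial DCM, eq. (dcmCOM)

for _ in range(int(T / dt)):
    x_ddot = g / z0 * (x - x_b)     # eq. (lipm)
    x_dot += dt * x_ddot
    x += dt * x_dot

xi_sim = x + b * x_dot                           # DCM from the simulated CoM state
xi_analytic = x_b + (xi0 - x_b) * np.exp(T / b)  # solution of eq. (dcm)
print(xi_sim, xi_analytic)  # the DCM grows by roughly exp(T/b): unstable dynamics
\end{verbatim}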
\subsection{Cascade Control Architecture for Humanoid Robots}
\label{sec:controllerArchitecture}
The hierarchical control architecture is composed of \textit{Path and Footstep Planner}, \textit{Stabilizer}, \textit{Whole-Body Controller}, \textit{Low-level Joint Controller}, and \textit{State Estimator} as shown in Figure \ref{fig:controller_architecture}.
In general, as we go from left to right in the control pipeline, the robot model used in each block becomes more detailed, the receding horizon decreases, the frequency of the control loop increases, and lower-level input from the retargeting module is necessary.
The human operator can provide various reference inputs through the retargeting module (Section \ref{sec:Retargeting}) as shown in red in Figure \ref{fig:controller_architecture}.
In the following, each of these blocks is discussed.
\subsubsection{Path and Footstep Planner}
\label{sec:PathandFootstepPlanner}
The goal of this block is to find the footstep locations and timings, and to assign each footstep to the left or right foot, in order to follow the user's commands.
The retargeting module may provide different types of input to the planner, such as the user's desired footstep contacts, the CoM velocity and the base rotational velocity, or the desired goal pose (region) in the workspace.
While with the first two inputs, the user is in charge of the obstacle avoidance, in the last one the planner deals with finding the whole path leading to the desired goal.
We structured this part of the section according to the category of the user's input.
\paragraph{Input: desired path or trajectory}
The role of the footstep planner is to find the sequence of foot locations and timings, given the CoM position or velocity and the floating-base orientation or angular velocity provided by the \textit{Retargeting} module.
One possible approach to address this problem is based on the instantaneous capture point \cite{pratt2006}.
Given the reference CoM position and velocity, one can compute the next
foot contact point using the capture point relation in \eqref{eq:dcmCOM}.
In simple cases, the contact sequences can be preliminarily identified by the user, for example via a finite state machine, and the desired footstep locations are shifted to the left or right side of the capture point according to the nature of the foot contact (left or right).
Another way to find the sequence of footsteps is to formulate an optimization problem, where the cost function is decided based on commonsense heuristics.
\iffalse
\textcolor{blue}{For example in \cite{dafarra2018control} a trade-off between the footstep distance and instance for flat terrain is considered, whereas in \cite{garimort2011humanoid} the cost function includes the footstep length, obstacle avoidance, and the number of steps.}
\fi
The footstep timing can be found from the CoM velocity such that the total gait cycle duration corresponds to the average CoM velocity and gait length.
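A minimal Python sketch of such capture-point-based footstep selection is given below; the CoM height, nominal step width, and input values are hypothetical, and a real planner would additionally handle step timing, kinematic reachability, and terrain constraints.
\begin{verbatim}
import numpy as np

# Capture-point-based footstep selection sketch (eq. (dcmCOM)).
# CoM height, step width, and input values are hypothetical.
g, z0 = 9.81, 0.8
b = np.sqrt(z0 / g)          # DCM / capture-point time constant
HALF_STEP_WIDTH = 0.1        # lateral offset from the capture point

def next_footstep(com_xy, comdot_xy, support_foot):
    # Next contact placed at the instantaneous capture point, shifted to the
    # left (+y) or right (-y) depending on which foot is currently in support.
    capture_point = np.asarray(com_xy) + b * np.asarray(comdot_xy)
    side = 1.0 if support_foot == "right" else -1.0
    return capture_point + np.array([0.0, side * HALF_STEP_WIDTH])

print(next_footstep(com_xy=[0.10, 0.0], comdot_xy=[0.4, 0.05],
                    support_foot="right"))
\end{verbatim}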
\paragraph{Input: desired goal region}
When the user provides the goal region, the humanoid robot should not only plan its footsteps but also find a \textit{feasible} path for reaching the goal, if any exists. Differently from wheeled mobile robots, a feasible path here means an obstacle-free path or one where the humanoid robot can traverse the obstacles by stepping over them.
Primary approaches for solving the integrated path and footstep planning problem are search-based methods and reactive methods.
The first step to find a feasible path is to perceive the workspace by means of the robot perception system, finding the regions where the robot can step. Following the identification of the workspace, a feasible path is planned.
Search-based algorithms try to find a path from the starting point to the goal region by searching a graph. The graph can be made using a grid map of the environment or by a random sampling of the environment.
Some of the methods used in the literature to perform footstep planning are A* \cite{chestnutt2005footstep, griffin2019footstep}, D* Lite \cite{garimort2011humanoid}, RRT variations \cite{perrin2011fast}, and dynamic programming techniques \cite{kuffner2001footstep}.
The \textit{completeness}, the \textit{global optimality}, and the ability to \textit{replan} the path in real time in dynamic environments are important features to consider when selecting among these search-based algorithms.
These algorithms are not very efficient when an exhaustive search is performed; hence, to enhance efficiency, a \textit{heuristic} is chosen in order to prune the search space and perform a greedy search.
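The following self-contained Python sketch illustrates the flavor of these search-based planners with A* on a small occupancy grid; the 8-connected grid, unit step costs, and the map itself are simplifying assumptions, whereas actual footstep planners search over discrete foot poses with richer action sets and terrain-dependent costs.
\begin{verbatim}
import heapq
import itertools

# Minimal grid-based A* sketch. Cells hold 1 where the robot can step
# (or traverse) and 0 otherwise; 8-connectivity and unit costs are
# simplifying assumptions for illustration only.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: max(abs(c[0] - goal[0]), abs(c[1] - goal[1]))
    tie = itertools.count()                     # tie-breaker for the heap
    open_set = [(heuristic(start), next(tie), 0, start, None)]
    g_cost, parent = {start: 0}, {}
    while open_set:
        _, _, g, cell, prev = heapq.heappop(open_set)
        if cell in parent:                      # already expanded
            continue
        parent[cell] = prev
        if cell == goal:                        # reconstruct the path
            path = [cell]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nxt = (cell[0] + dr, cell[1] + dc)
                if (dr, dc) == (0, 0) or nxt in parent:
                    continue
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                    continue
                if grid[nxt[0]][nxt[1]] == 0:   # cell the robot cannot step on
                    continue
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set,
                        (g + 1 + heuristic(nxt), next(tie), g + 1, nxt, cell))
    return None                                 # no feasible path

grid = [[1, 1, 1, 1],
        [1, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 1, 1, 1]]
print(a_star(grid, start=(0, 0), goal=(3, 3)))
\end{verbatim}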
The footstep planning problem can also be addressed by reactive methods, formulated either as an optimization problem or as a dynamical system.
In \cite{herdt2010online} the problem has been devised as a Model Predictive Control (MPC) approach. It allows finding the foot poses as a continuous decision making problem by formulating it as a Quadratic Programming (QP) optimization problem.
However, when footstep rotation or obstacle avoidance is added, the optimization problem becomes non-convex; therefore, there is no guarantee of completeness and global optimality \cite{deits2014footstep}. In order to relax the non-convex constraints, \cite{deits2014footstep} approximated the footstep rotations with a set of functions and transformed the obstacle-avoidance constraints into a set of convex obstacle-free regions. As a result, they formulated the optimization problem as a mixed-integer quadratic program.
However, the main limitation of this approach is its high computational complexity.
In this context, the path planning and obstacle avoidance problem for a humanoid robot can also be treated with a simplified dynamical-system approach, for example by using potential fields \cite{fakoor2015revision}.
\subsubsection{Stabilizer}
\label{sec:Stabilizer}
The main goal of the stabilization layer is to implement a control law that enhances the stability of the centroidal dynamics of the robot. Because of the complexity of the robot dynamics, classical approaches are of limited use for examining the stability of the closed-loop control system.
Therefore, other criteria, such as the ZMP condition and the DCM dynamics, are tailored to evaluate how close the robot is to falling \cite{koolen2012}.
The \textit{stabilizer} gets as input the reference poses for the end-effectors such as head, arms, and feet, together with the desired CoM trajectory. These inputs are provided by the retargeted human motion and/or by the \textit{Path and Footstep Planner}.
However, these reference trajectories may destabilize the robot behavior, therefore the \textit{stabilizer} adapts the references according to the stability criteria.
Depending on the lower level whole-body controller, the output of the \textit{stabilizer} are modified references for the CoM pose, the end-effector poses, the contact points, the contact wrenches and/or the rate of change of the momenta.
Next, we provide an overview of controllers according to different stability criteria adopted so far.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Figures/section4/StabilizerControllers.pdf}
\caption{Different stability criteria controllers. a) ZMP controller; b) DCM-ZMP controller; c) contact wrench controller.}
\label{fig:StabilizerControllers}
\end{figure}
\paragraph{ZMP controller}
These controllers are based on the idea that the robot's ZMP should remain inside the support polygon of the base \cite{vukobratovic1972stability, koolen2012}, as shown in Figure \ref{fig:StabilizerControllers}.
Given the footstep trajectories provided by the planner or the human, this module computes the desired ZMP.
While walking, in single support the desired ZMP remains in the middle of the support polygon (e.g., left foot), during double support the desired ZMP moves smoothly from the previous support leg to the middle of the new support leg polygon (e.g., from the left foot to the right foot).
Following that, a ZMP controller makes sure the real ZMP tracks the desired ZMP trajectory.
In order to drive the ZMP error to zero, the desired CoM motion (using \eqref{eq:lipm}) or the desired foot force distribution is computed \cite{choi2007, kajita2010}.
The ZMP controller can be based on PID, pole placement, optimal controllers, or other control approaches.
The real ZMP is estimated using the Force/Torque sensors located at the robot heel.
To avoid the phase delay of the ZMP tracking due to the mechanical compliance and control \cite{kajita2010}, one can model the delay and consider it in the ZMP controller.
Nevertheless, the ZMP controller has several limitations.
If the robot is subject to large disturbances, it does not adapt the footstep locations online to prevent the robot from falling.
Moreover, this approach hardly extends to non-flat terrain or to situations where the support foot rotates or slides.
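As a concrete illustration of the underlying criterion, the short Python sketch below computes the ZMP implied by a candidate CoM state and acceleration by rearranging \eqref{eq:lipm} and checks it against a rectangular single-support polygon; the foot dimensions and numerical values are hypothetical.
\begin{verbatim}
import numpy as np

# Minimal sketch of the ZMP criterion: the LIPM base point implied by the CoM
# acceleration (eq. (lipm) rearranged) must stay inside the support polygon.
# Foot dimensions and input values are hypothetical.
g, z0 = 9.81, 0.8

def zmp_from_com(com, com_ddot):
    # x_b = x - (z0 / g) * x_ddot  (planar ZMP implied by the LIPM)
    return np.asarray(com) - (z0 / g) * np.asarray(com_ddot)

def inside_support_polygon(zmp, center, half_length=0.11, half_width=0.05):
    # Axis-aligned rectangular single-support foot (simplifying assumption).
    d = np.abs(np.asarray(zmp) - np.asarray(center))
    return bool(d[0] <= half_length and d[1] <= half_width)

zmp = zmp_from_com(com=[0.05, 0.0], com_ddot=[0.4, 0.1])
print(zmp, inside_support_polygon(zmp, center=[0.0, 0.0]))
\end{verbatim}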
\paragraph{DCM-ZMP controller}
Another approach that is often employed with the simplified model control is the DCM \cite{takenaka2009real}, which can be viewed as an extension of the capture point concept to the three-dimensional case for uneven terrain \cite{englsberger2015}.
Equation \eqref{eq:dcm} shows that the DCM dynamics is unstable, i.e., it has a strictly positive eigenvalue, while \eqref{eq:dcmCOM} shows that the CoM dynamics is stable and converges to the DCM.
Therefore, the main goal of this controller is to implement a control law that stabilizes the unstable DCM dynamics \eqref{eq:dcm} as well as the ZMP as mentioned previously.
To remain consistent with the formulation presented in Sec.~\ref{sec:Modeling}, the DCM-ZMP controller is explained here under a flat-terrain assumption; for the extension of the controller to 3D, one can refer to \cite{englsberger2015}.
The block diagram of the DCM-ZMP stabilizer is shown in Figure \ref{fig:StabilizerControllers}.
Based on the desired footsteps, the desired DCM and its derivative are extracted. Then, we close the loop on the DCM in order to stabilize its dynamics by:
\begin{equation}
\dot{\bm{\xi}} -\dot{\bm{\xi}}_{ref} = - k_{\xi} ({\bm{\xi}} -{\bm{\xi}}_{ref}),
\label{eq:dcmController}
\end{equation}
where $k_{\xi}>0$.
\iffalse
\textcolor{blue}{so that the error on DCM error converges to zero asymptotically.}
\fi
By substituting \eqref{eq:dcmController} into \eqref{eq:dcm}, the desired ZMP is found as:
\begin{equation}
\bm{zmp}_{ref}= \bm{\xi} - b \dot{\bm{\xi}}_{ref} + b k_{\xi} ({\bm{\xi}} -{\bm{\xi}}_{ref}),
\label{eq:dcmZMPController}
\end{equation}
and the external forces acting on the robot can be found by \eqref{eq:lipm}. Finally, the desired values are provided to the ZMP Controller.
One can formulate both the DCM and ZMP controllers together in a single optimal control problem as well \cite{cisneros2020inverse}.
A DCM-ZMP controller has been deployed for whole-body retargeting in \cite{ishiguro2018}.
In order to enhance the stability of the system, the authors introduced a predicted support region, inside which the time-delayed DCM of the robot is kept.
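The following short Python sketch is a direct transcription of \eqref{eq:dcmCOM}, \eqref{eq:dcmController}, and \eqref{eq:dcmZMPController}: given the measured CoM state and the DCM reference, it returns the ZMP reference that is handed to the ZMP controller. The gain and numerical values are hypothetical.
\begin{verbatim}
import numpy as np

# Proportional DCM feedback (eq. (dcmController)/(dcmZMPController)) that
# outputs the ZMP reference passed to the ZMP controller.
# Gains and states below are hypothetical numbers for illustration only.
g, z0 = 9.81, 0.8
b = np.sqrt(z0 / g)
k_xi = 3.0                     # DCM feedback gain, k_xi > 0

def dcm(com, com_dot):
    # xi = x + b * x_dot  (eq. (dcmCOM))
    return np.asarray(com) + b * np.asarray(com_dot)

def zmp_reference(xi, xi_ref, xi_dot_ref):
    # zmp_ref = xi - b * xi_dot_ref + b * k_xi * (xi - xi_ref)
    return xi - b * np.asarray(xi_dot_ref) + b * k_xi * (xi - np.asarray(xi_ref))

xi = dcm(com=[0.03, 0.0], com_dot=[0.25, 0.02])
print(zmp_reference(xi, xi_ref=[0.10, 0.0], xi_dot_ref=[0.3, 0.0]))
\end{verbatim}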
\paragraph{Contact wrench controller}
This controller finds the desired contact wrenches at each contact point such that it enhances the robot stability. A schematic block diagram of this controller is shown in Figure \ref{fig:StabilizerControllers}.
This controller has been introduced in \cite{ott2011} and was also considered in \cite{hirukawa2006universal} as a stability augmentation criterion when introducing the contact constraints. This approach is similar to the momentum-based controller and will be explained in more detail later; however, for completeness, we mention it here as well.
\iffalse
Given the reference CoM position $\bm{x}_\mathcal{COM}^{d}$ and velocity $\dot{\bm{x}}_\mathcal{COM}^{d}$, and the reference base $\mathcal{B}$ orientation and angular velocity, \cite{ott2011},
a PD compliance control finds the net desired external wrench vector $\bm{w}_{ext}^{d}$ by:
\begin{equation}
\begin{array}{c}
\bm{w}_{ext}^{d}= \begin{bmatrix}
m\bm{g}\\
\bm{0}
\end{bmatrix}
- \bm{K}_D
\begin{bmatrix}
\dot{\bm{x}}_\mathcal{COM} -\dot{\bm{x}}_\mathcal{COM}^{d} \\
\tau_D( \bm{\omega}_\mathcal{B}^{d} ,\bm{\omega}_\mathcal{B} )
\end{bmatrix}
\\
- \bm{K}_P
\begin{bmatrix}
{\bm{x}}_\mathcal{COM} -{\bm{x}}_\mathcal{COM}^{d} \\
{\tau_P({ {\bm{R}}_\mathcal{B}^{d},{\bm{R}}_\mathcal{B})}}
\end{bmatrix},
\label{eq:ExternalWrenchDesired}
\end{array}
\end{equation}
where $\bm{K}_P$ and $\bm{K}_D$ are the symmetric positive semi-definite proportional and damping gain matrices; $\tau_P$ and $\tau_D$ functions compute the error on the rotational term of the base orientation error.
In \eqref{eq:ExternalWrenchDesired}, the first term compensates gravity.
Given the contact points, the desired net wrenches at the robot CoM are employed to compute the wrench at each contact point.
An approach to compute the desired contact wrenches based on the frictional grasping method is proposed by \cite{ott2011} where an optimization problem is defined.
Here without the loss of generality, only the forces at the contact points are considered.
A \textit{contact map} matrix $\bm{G}_{c} \in \mathbb{R}^{6 \times 3c}$ is identified to map $c$ contact force vectors $\bm{f}_c$ to the net wrench at CoM by:
\begin{equation}
\bm{w}_{ext} = \bm{G}_c \bm{f}_c.
\label{eq:contactMap}
\end{equation}
Using this equation and given the $\bm{w}^d_{ext}$ from \eqref{eq:ExternalWrenchDesired}, an optimization problem is defined to compute the contact forces:
\begin{equation}
\begin{array}{c}
\text{arg~}\underset{\bm{f}_c}{\text{min}}
~( \bm{Q}_\mathcal{COM} \| \bm{F}^d_{\mathcal{COM},ext}-\bm{G}_{c, f} \bm{f}_c \|^{2}+\\
\bm{Q}_\mathcal{B} \| \bm{\tau}^d_{\mathcal{B},ext}-\bm{G}_{c,\tau} \bm{f}_c \|^{2}+
\bm{R} \| \bm{f}_c \|^{2}), \\
s.t. ~~ {f}_{c,z} \ge 0,\\
({f}_{c,x})^2 + ({f}_{c,y})^2 \le \mu({f}_{c,z})^2,
\end{array}
\label{eq:contactOptimization}
\end{equation}
The first term in this equation tracks the desired CoM force with the highest priority, the second term tracks the base rotational motion, and the last term, with the lowest priority, is a regularization term.
$\bm{G}_{c, f}$ and $\bm{G}_{c,\tau}$ in \eqref{eq:contactOptimization} are related to the linear and rotational terms of $\bm{G}_{c}$.
The optimization is subject to unilateral friction and the friction cone constraints.
These constraints have been considered as a stability criterion in \cite{ hirukawa2006universal}.
The desired contact forces are later passed to the whole-body controller in order to compute the joint torques.
\fi
\subsubsection{Whole-Body control layer (WBC)}
This block provides the reference joint angles, velocities, accelerations and/or torques to the low-level joint controller.
It gets as input the retargeted human references together with the planner references, both corrected by the stabilizer.
In the following, different approaches of the whole-body control are presented.
\paragraph{Whole-Body inverse instantaneous velocity kinematics control}
The problem of inverse instantaneous (velocity) kinematics is to find the configuration-space velocity vector $\bm{\nu}(t)$ for a given set of task-space velocities, using the Jacobian relation in \eqref{eq:jacobian}.
One common approach is to formalize the controller as a constrained QP problem with inequality and equality constraints.
The problem can be formulated as:
\begin{equation}
\begin{array}{c}
\text{arg~}\underset{\bm{\nu}}{\text{min}} \underset{k}{\sum} w_k( \| \bm{A}_{k}\bm{\nu}-\bm{b}_{k}\|^{2})+ \epsilon\| \bm{\nu}\|^{2}, \\
s.t.
\quad \bm{d}_{min} \le \bm{D}\bm{\nu} \le \bm{d}_{max}, \\
\quad\quad\quad \bm{\nu}_{min} \le \bm{\nu} \le \bm{\nu}_{max}, \\
\end{array}
\label{eq:ivkqp}
\end{equation}
where $\bm{\nu}$ is the control input vector of joint velocities; $\bm{A}_k$ is the equivalent Jacobian matrix of task $k$ governed by \eqref{eq:jacobian}; $\bm{b}_k$ is the reference velocity of the task; and $\epsilon$ is a regularization factor used to handle singularities.
$\bm{\nu}_{min}$ and $\bm{\nu}_{max}$ are the control input limits, and $~\bm{d}_{min} \le \bm{D}\bm{\nu} \le \bm{d}_{max}~$ encodes other equality and inequality constraints, e.g., collision-avoidance or contact-related constraints.
Conventional solutions of redundant inverse kinematics are founded on the pseudo-inverse of the Jacobian matrix \cite{kanoun2011}.
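When the bounds and constraints in \eqref{eq:ivkqp} are inactive, the problem reduces to a damped least-squares problem with a closed-form solution. The Python sketch below implements this unconstrained case with random matrices standing in for the task Jacobians $\bm{A}_k$ and references $\bm{b}_k$ (illustrative assumptions); a generic QP solver would be needed once the constraints of \eqref{eq:ivkqp} are added.
\begin{verbatim}
import numpy as np

def solve_ivk_unconstrained(tasks, eps=1e-6):
    # Minimize sum_k w_k ||A_k nu - b_k||^2 + eps ||nu||^2 (no constraints):
    # nu* = (sum_k w_k A_k^T A_k + eps I)^{-1} sum_k w_k A_k^T b_k
    n = tasks[0][1].shape[1]
    H, g = eps * np.eye(n), np.zeros(n)
    for w, A, b in tasks:
        H += w * A.T @ A
        g += w * A.T @ b
    return np.linalg.solve(H, g)

rng = np.random.default_rng(0)
tasks = [(1.0, rng.standard_normal((6, 7)), rng.standard_normal(6)),
         (0.1, rng.standard_normal((3, 7)), rng.standard_normal(3))]
print(solve_ivk_unconstrained(tasks))
\end{verbatim}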
\paragraph{Whole-Body inverse kinematics control}
The problem of Inverse Kinematics (IK) is to find the configuration-space vector $\bm{q}(t)$ given the reference task-space poses.
This problem can be solved analytically; however, such a solution is not scalable to different robots and is not feasible for a humanoid robot with a high number of degrees of freedom.
Differently from inverse instantaneous velocity kinematics, the IK problem is solved by defining a nonlinear constrained optimization problem:
\begin{equation}
\begin{array}{c}
\text{arg~}\underset{\bm{q}}{\text{min}} \underset{k}{\sum} w_k~ dist(\bm{\mathcal{H}}_{k}(q),\bm{x}^d_{k})^{2}+ \epsilon\| \bm{q}\|^{2}, \\
s.t.
\quad \bm{d}_{min} \le \bm{D}\bm{q} \le \bm{d}_{max}, \\
\quad\quad\quad \bm{q}_{min} \le \bm{q} \le \bm{q}_{max}, \\
\end{array}
\label{eq:ikqp}
\end{equation}
which uses the geometrical forward kinematics relation $\bm{\mathcal{H}}_{k}(q)$ of task $k$ to minimize the distance with respect to the task goal pose $\bm{x}^d_k$.
The residual function $dist(\cdot)$ in this equation is defined as the Euclidean distance for the position task and as the inverse skew-symmetric vector of $\bm{\mathcal{H}}_{k}(q)^{T} {}^{\mathcal{I}}\bm{R}_{k}$ for the rotational task.
Solving this problem may be time-consuming, and the resulting solutions can be discontinuous.
To address this, a common approach in the literature is to transform the whole-body IK into a whole-body inverse velocity kinematics problem.
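As a minimal sketch of this transformation, assuming a planar two-link chain purely for illustration, the pose error is fed back as the task velocity reference and the configuration is obtained by integrating the resulting joint velocities:
\begin{verbatim}
import numpy as np

l1, l2 = 1.0, 0.8                          # assumed link lengths

def fk(q):                                 # forward kinematics H_k(q)
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0]+q[1])])

def jac(q):                                # task Jacobian of fk
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

x_goal = np.array([0.5, 1.2])
q, dt, lam, eps = np.array([0.3, 0.3]), 0.01, 5.0, 1e-6
for _ in range(2000):                      # integrate nu from damped least squares
    e = x_goal - fk(q)
    J = jac(q)
    nu = np.linalg.solve(J.T @ J + eps*np.eye(2), J.T @ (lam * e))
    q = q + dt * nu
print(q, fk(q))                            # fk(q) converges to x_goal
\end{verbatim}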
\iffalse
\textcolor{blue}{In order to compute the reference value $\bm{b}_k$ in \eqref{eq:ivkqp}, a dynamical system is defined as $\bm{b}_k={\bm{v}}^d_k+\lambda\bm{e}_k$ value for the $k$-th task, with ${\bm{v}}^d_k$ being the feed-forward reference velocity term, $\lambda$ the convergence error gain, and $\bm{e}_k$ the residual vector between current and goal poses \cite{rapetti2019model}.}
\fi
\paragraph{Whole-Body inverse dynamics control}
The Inverse Dynamics (ID) refers to the problem of finding the joint torques of the robot to achieve the desired motion given the robot's constraints such as joint torque limits and feasible contacts.
There are different techniques to solve the ID problem in the literature \cite{righetti2012quadratic}.
A common approach is to minimize the torques and joint accelerations of the robot while achieving the desired references,
without violating the robot constraints.
To reach the desired end-effector poses and contact wrenches, one can formulate the following optimization problem:
\begin{subequations} \label{eq:idqp}
\begin{align}
& \text{arg~}\underset{\bm{\tau}}{\text{min}}~ \underset{k}{\sum} w_k (\|\ddot{\bm{x}}_{k}-\ddot{\bm{x}}^d_{k}\|^{2})+ \epsilon\| \bm{\tau}\|^{2}\nonumber\\
& ~~~~~~ + \underset{i}{\sum} w'_i (\|{\bm{f}_{C,i}}-{\bm{f}}^d_{C,i}\|^{2}),\label{eq:idqp_min}\\
& s.t.~~~\bm{\tau}_{min} \le \bm{\tau} \le \bm{\tau}_{max}, \label{eq:idqp_TConst}\\
& \bm{d'}_{min} \le \bm{D'}\dot{\bm{\nu}} \le \bm{d'}_{max}, \label{eq:idqp_AccConst}\\
& \bm{J}_C\dot{\bm{\nu}}+\dot{\bm{J}}_C\bm{\nu}=0, \label{eq:idqp_fixedFoot}\\
& \bm{f}_{C} \ge 0, \label{eq:idqp_contact}
\end{align}
\end{subequations}
where $\ddot{\bm{x}}^{d}_{k}$ is the reference acceleration of motion task $k$ and ${\bm{f}^{d}_{C,i}}$ is the desired contact wrench, given the equation of motion of the robot in \eqref{eq:dyn}. Constraint \eqref{eq:idqp_TConst} encodes the torque limits; \eqref{eq:idqp_AccConst} encodes general constraints such as velocity and joint-angle limits and collision-avoidance limits expressed in terms of the joint acceleration vector; \eqref{eq:idqp_fixedFoot} is the non-sliding contact constraint; and \eqref{eq:idqp_contact} is the unilateral constraint on the foot contacts.
The details of the constraints can be found in~\cite{difava2016, righetti2012quadratic}.
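The structure of \eqref{eq:idqp} can be illustrated with the cvxpy modeling library on a toy fixed-base model with random dynamics matrices (an assumption made only for brevity; contact wrenches and the constraints \eqref{eq:idqp_fixedFoot}--\eqref{eq:idqp_contact} are omitted here):
\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 7                                     # toy fixed-base chain
Mm = rng.standard_normal((n, n))
Mm = Mm @ Mm.T + n*np.eye(n)              # SPD mass matrix stand-in
h  = rng.standard_normal(n)               # bias forces (Coriolis + gravity)
J  = rng.standard_normal((3, n))          # task Jacobian stand-in
dJnu = rng.standard_normal(3)             # dJ/dt * nu stand-in
xdd_des = np.array([0.1, 0.0, -0.2])      # desired task acceleration
tau_max = 50.0

nu_dot, tau = cp.Variable(n), cp.Variable(n)
cost = cp.sum_squares(J @ nu_dot + dJnu - xdd_des) + 1e-3*cp.sum_squares(tau)
constraints = [Mm @ nu_dot + h == tau,    # equation of motion (fixed base)
               cp.abs(tau) <= tau_max]    # torque limits
cp.Problem(cp.Minimize(cost), constraints).solve()
print(nu_dot.value, tau.value)
\end{verbatim}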
\paragraph{Momentum-based control} This controller finds the configuration space acceleration and ground reaction forces such that the robot follows the given desired rate of change of whole-body momentum \cite{orin2013centroidal, cisneros2020inverse}.
Using \eqref{eq:eulermomentum}, one can formulate the following QP optimization problem:
\begin{equation}
\begin{array}{c}
\text{arg}~\underset{\dot{\bm{\nu}}_d,\bm{\rho}}{\text{min}}~(\bm{A}\dot{\bm{\nu}}_d-\bm{b})^T\bm{C}_h(\bm{A}\dot{\bm{\nu}}_d-\bm{b}) \\
+ {\bm{f}^{gr}}^T\bm{C}_{gr}\bm{f}^{gr} + \dot{\bm{\nu}}_d^T\bm{C}_{\dot{\nu}}\dot{\bm{\nu}}_d\\
s.t.
\quad\quad\quad \bm{f}^{gr}_{min} \le \bm{f}^{gr} \le \bm{f}^{gr}_{max}
\end{array}
\label{eq:QPmomentum}
\end{equation}
where $\bm{C}_{h}$, $\bm{C}_{gr}$ and $\bm{C}_{\dot{\nu}}$ are weighting matrices and $\bm{b}=\dot{\bm{h}}_d-\dot{\bm{A}}\bm{\nu}$, with $\dot{\bm{h}}_d$ being the desired rate of change of the centroidal momentum. Finally, the \textit{pre-specified} external wrenches $\bm{f}^{ext}_{i}$, the \textit{computed} ground reaction wrenches $\bm{f}^{gr}_{i}$, and the \textit{computed} joint accelerations $\dot{\bm{\nu}}_d$ are used with an inverse dynamics algorithm to compute the robot joint torques.
\paragraph{Other control formalisms}
The whole-body control problem can be formalized with different cost functions, and solving a QP problem at each control cycle to compute the optimal control inputs is not the only viable solution. Indeed, other approaches such as the Linear Quadratic Regulator (LQR) \cite{mason2014, kuindersma2016optimization} and MPC \cite{ kelly2017introduction, posa2014direct}
have been utilized as well.
\iffalse
\textcolor{blue}{
\textit{Linear Quadratic Regulator (LQR)}. An alternative can be to rely on a full-state LQR, using the robot dynamics \cite{mason2016}. LQR is essentially an automated way of finding an appropriate state-feedback controller. The robot model in (\ref{eq:dyn}) is linearized by determining a linear time-invariant (LTI) or linear time-varying system.
LQR leads to an optimization problem in the state space in the form of:
\begin{equation}
\begin{array}{c}
\underset{\bm{u}}{\text{arg~min}}~\mathlarger{\int_{0}^{\infty}}(\bm{x}^T\bm{Q}\bm{x}+ \bm{u}^T\bm{R}\bm{u})dt, \\
\end{array}
\label{eq:LQR}
\end{equation}
where $\bm{x}$ and $\bm{u}$ are the system state and input vector, and $\bm{Q}$ and $\bm{R}$ are the regularization gains for the states and inputs.
The solution to LQR is:
\begin{equation}
\begin{array}{c}
\bm{\tau}=\bm{\tau}_0-\bm{K}\bm{x} ,
\end{array}
\label{eq:LQRsol}
\end{equation}
where $\bm{\tau}_0$ is the vector of joint torques that compensates the dynamics of the robot around the linearized state and $K$ is found by solving the continuous-time Riccati differential equation combined with the system dynamics. The second term in \eqref{eq:LQRsol} is associated with the input vector $\bm{u}$ which minimizes the feedback control law.
Some examples of the LQR method to control humanoid robots can be found in \cite{mason2014}, \cite{kuindersma2016optimization}.}
\textcolor{blue}{
\textit{Model Predictive Control (MPC)}.
It is a suite of optimization-based control techniques applied to dynamical systems. MPC controllers optimize an objective function based on a predicted evolution of the system states in the future \cite{borrelli2017}.
In a linear case, the prediction model is:
\begin{equation}
\bm{x}_{k+1}=\bm{A}\bm{x}_k+\bm{B}\bm{u}_k,
\end{equation}
where $\bm{x}_k$ and $\bm{u}_k$ denote the predicted state and input at prediction step $k$.}
\fi
\begin{eqnarray}
\nonumber x &\triangleq& 1/\sum_1^L \alpha_i \gamma_i \\
x_i &\triangleq& \cases{
\gamma_i x &\quad $i=1,\dots,L$ \cr
0 &\quad $i=L+1$ \cr
}.
\end{eqnarray}
\\
\textbf{Cache Placement Strategy:} First, split each file $W_n$ into $L$ sub-files
\begin{eqnarray}
\nonumber W_n=\left(W_n^i : i=1,\dots,L\right),
\end{eqnarray}
where $W_n^i$ is of size $\alpha_i x_i F$. Then, split each sub-file $W_n^i$ into $\alpha_i$ equal-sized mini-files:
\begin{eqnarray}
\nonumber W_n^i=\left(W_{n,\tau_i}^i: \tau_i \subseteq [K], |\tau_i|=p_i-1 \right).
\end{eqnarray}
Finally, split each mini-file $W_{n,\tau_i}^i$ into $\gamma_i$ equal-sized pico-files of size $xF$ bits:
\begin{eqnarray}
\nonumber W_{n,\tau_i}^i=\left( W_{n,\tau_i}^{i,j}: j=1,\dots,\gamma_i \right),
\end{eqnarray}
where $\gamma_i$ is an integer number. For each user $k$, we cache pico-file $W_{n,\tau_i}^{i,j}$ if $k \in \tau_i$, for all possible $i,j,n$. Then, the required memory size for each user is:
\begin{eqnarray}
\nonumber M&=&\frac{1}{F} N \left( \sum_{i=1}^L{{K-1 \choose p_i-2} \gamma_i x F} \right) \\ \nonumber
&=& N \frac{\sum_{i=1}^L{ {K-1 \choose p_i-2} \gamma_i }}{\sum_{i=1}^L{ {K \choose p_i-1} \gamma_i }} \\
&=& \frac{N}{K} \frac{ \sum_{i=1}^L {\frac{p_i(p_i-1)}{K-p_i+1}} } { \sum_{i=1}^L {\frac{p_i}{K-p_i+1}} },
\end{eqnarray}
which is consistent with the assumptions of Theorem \ref{Thm_flexible}.
\\
\\
\textbf{Content Delivery Strategy:} Define $P_1^i,\dots,P_{{K \choose p_i}}^i$ to be the collection of all $p_i$-subsets of $[K]$ for all $i=1,\dots L+1$. The delivery phase consists of $\frac{K!}{p_1!\dots p_{L+1}!}$ transmit slots. Each transmit slot is in one-to-one correspondence with one $(p_1,\dots,p_{L+1})$-partition of $[K]$. Consider the transmit slot associated with the partition
\begin{eqnarray}
\nonumber \left\{ P^1_{\theta_1},\dots,P^{L+1}_{\theta_{L+1}} \right\},
\end{eqnarray}
where $\theta_i \in \left\{1,\dots,{K \choose p_i}\right\}$. Then, the server $i$ sends
\begin{eqnarray}
\nonumber {+}_{r \in P^i_{\theta_i}}W_{d_r,P^i_{\theta_i} \backslash \{r\}}^{i,N(P^i_{\theta_i})}
\end{eqnarray}
to the subset of users $P^i_{\theta_i}$, interference-free from other servers, where the sum is in $\mathbb{F}_q$ and is over all $r \in P^i_{\theta_i}$. Since we have assumed a flexible network, simultaneous transmissions by all servers are feasible. Also, the index $N(P^i_{\theta_i})$ is chosen such that each new transmission consists of a fresh (not previously transmitted) pico-file. Obviously, the virtual server $L+1$ does not transmit any packet.
Since each pico-file consists of $x\frac{F}{m}$ symbols, at each transmit slot the servers send a block of size $L$-by-$x\frac{F}{m}$. Also, since this action should be performed for all $\frac{K!}{p_1!\dots p_{L+1}!}$ slots, the delay of this scheme will be:
\begin{eqnarray}
\nonumber T_c&=&\frac{K!}{p_1!\dots p_{L+1}!} \times x\frac{F}{m} \\
&=&\frac{1}{\sum_1^L{\frac{p_i}{K-p_i+1}}} \frac{F}{m},
\end{eqnarray}
as stated in Theorem \ref{Thm_flexible}. Consequently, if we show that through the aforementioned number of transmit slots all users will be able to recover their requested files, the proof is complete.
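The placement and delay expressions above can be checked numerically. The Python sketch below evaluates $\alpha_i$, $\gamma_i$, $x$, the cache size $M$, and the delay $T_C$ for illustrative choices of $K$, $N$, and the partition $(p_1,\dots,p_{L+1})$, and also verifies the identity $(1-M/N)\frac{F}{m}=(1-Q/K)T_C$ used later in Corollary \ref{Cor_Converse}:
\begin{verbatim}
from math import comb, factorial, prod

def flexible_scheme(K, N, p, F=1.0, m=1.0):
    # p = (p_1,...,p_{L+1}), sum(p) = K, p_i >= 2 for the L real servers
    L = len(p) - 1
    denom = prod(factorial(pi) for pi in p)
    gamma = [factorial(K - pi) * factorial(pi) // denom for pi in p]
    alpha = [comb(K, pi - 1) for pi in p[:L]]
    x = 1.0 / sum(a * g for a, g in zip(alpha, gamma[:L]))
    M = N * x * sum(comb(K - 1, pi - 2) * g
                    for pi, g in zip(p[:L], gamma[:L]))
    T_C = factorial(K) / denom * x * F / m
    s1 = sum(pi * (pi - 1) / (K - pi + 1) for pi in p[:L])
    s2 = sum(pi / (K - pi + 1) for pi in p[:L])
    assert abs(M - (N / K) * s1 / s2) < 1e-9      # matches the M formula above
    assert abs(T_C - F / m / s2) < 1e-9           # matches T_C in the theorem
    Q = p[-1]
    assert abs((1 - M / N) * F / m - (1 - Q / K) * T_C) < 1e-9
    return M, T_C

print(flexible_scheme(K=6, N=6, p=(3, 3, 0)))     # Q=0 partition
print(flexible_scheme(K=6, N=6, p=(2, 2, 2)))     # Q=2 partition
\end{verbatim}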
\\
\textbf{Correctness Proof:} The main theme of this scheme is to divide each file into $L$ sub-files, and to assign each sub-file to a single server. Then, each server's task is to deliver the assigned sub-files to the desired users (see Fig. \ref{Fig_Flexible_Proof}).
Consider server $i$. This server handles the sub-files $W_n^i, n \in [N]$, through the following delivery tasks:
\begin{eqnarray}
\nonumber & & W_{d_1}^i \hspace{2mm}
\stackrel{\textrm{server i} }{\Longrightarrow} \hspace{2mm} \textrm{User 1} \\ \nonumber
& & W_{d_2}^i \hspace{2mm} \stackrel{\textrm{server i} }{\Longrightarrow} \hspace{2mm} \textrm{User 2} \\ \nonumber
& & \vdots \\ \nonumber
& & W_{d_K}^i \hspace{2mm} \stackrel{\textrm{server i} }{\Longrightarrow} \hspace{2mm} \textrm{User K}
\end{eqnarray}
The above formulation leads to a single-server problem \cite{Maddah-Ali_Fundamental_2014} with files of size $F_i= \alpha_i x_i F$ bits. It can be easily verified that the proposed cache placement strategy for each sub-file mimics that of \cite{Maddah-Ali_Fundamental_2014} for single-server problems. Therefore, if we demonstrate that this server is able to send a common message of size $x_i\frac{F}{m}$ symbols to all $p_i$-subsets of users, then this server can handle this single-server problem successfully. Indeed, in the above scheduling scheme, the server serves each $p_i$-subset of the users with a common message of size $x\frac{F}{m}$ symbols (one pico-file), $\gamma_i$ times. Consequently, the total volume of common messages that this server is able to send to each $p_i$-subset is $\gamma_i \cdot x\frac{F}{m}=x_i\frac{F}{m}$ symbols.
Since, by a proper scheduling scheme in flexible networks, all servers can perform the same task simultaneously, all requested portions of the files will be delivered. It should be noted that the portion of each file assigned to the virtual server is $x_{L+1}=0$. Algorithm 1 presents the pseudo-code of the procedure described above.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{Felexible_Proof.eps}
\end{center}
\caption{Flexible network file distribution for proof of Theorem \ref{Thm_flexible}. \label{Fig_Flexible_Proof}}
\end{figure}
\begin{algorithm}\label{Alg_Main_Flexible}
\caption{Multi-Server Coded Caching - Flexible Networks}
\begin{algorithmic}[1]
\Procedure{PLACEMENT}{$W_1,\dots,W_N,p_1,\dots,p_{L+1}$}
\State $\alpha_i \gets {K \choose p_i-1}$, $i=1,\dots,L$
\State $\gamma_i \gets ((K-p_i)!p_i!)/(p_1!\dots p_{L+1}!)$, $i=1,\dots,L+1$
\State $x \gets 1/(\sum_1^L{\alpha_i \gamma_i})$
\State $x_i \gets \gamma_i x$, $i=1,\dots,L$
\State $x_{L+1} \gets 0$
\ForAll{$n \in [N]$}
\State split $W_n$ into $(W_n^i: i=1,\dots,L)$, where $|W_n^i|=\alpha_i x_i$
\ForAll{$i=1,\dots,L$}
\State split $W_n^i$ into $(W_{n,\tau_i}^i: \tau_i \subset [K], |\tau_i|=p_i-1)$ of equal size
\ForAll{$\tau_i \subset [K], |\tau_i|=p_i-1$}
\State split $W_{n,\tau_i}^i$ into $(W_{n,\tau_i}^{i,j}: j=1,\dots,\gamma_i)$ of equal size
\EndFor
\EndFor
\EndFor
\ForAll{$k \in [K]$}
\ForAll{$i=1,\dots,L$}
\State $Z_k \gets (W_{n,\tau_i}^{i,j}: \tau_i \subset [K], |\tau_i|=p_i-1, k \in \tau_i, j=1,\dots,\gamma_i, n \in [N])$
\EndFor
\EndFor
\EndProcedure
\\
\Procedure{DELIVERY}{$W_1,\dots,W_N$, $d_1,\dots,d_K$, $p_1,\dots,p_{L+1}$}
\ForAll{$i=1,\dots,L$}
\ForAll{$j=1,\dots,{K \choose p_i}$}
\State $N({P}^i_j) \gets 1 $
\EndFor
\EndFor
\ForAll{partitions of $[K]$ with sizes $p_1,\dots,p_{L+1}$, ($p_i \geq 2, i=1,\dots,L$)}
\State $\{P^1_{\theta_1},\dots,P^{L+1}_{\theta_{L+1}}\} \gets $ selected partition
\State \textbf{transmit} $ \mathbf{X}(\{P^1_{\theta_1},\dots,P^{L+1}_{\theta_{L+1}}\}) = \left[ {\begin{array}{c}
{+}_{r \in P^1_{\theta_1}}W_{d_r,P^1_{\theta_1} \backslash \{r\}}^{1,N(P^1_{\theta_1})} \Rightarrow P^1_{\theta_1} \\
\vdots \\
{+}_{r \in P^L_{\theta_L}} W_{d_r,P^L_{\theta_L} \backslash \{r\}}^{L,N(P^L_{\theta_L})} \Rightarrow P^L_{\theta_L} \\
\end{array} } \right] $
\ForAll{$i=1,\dots,L$}
\State $N(P^i_{\theta_i}) \leftarrow N(P^i_{\theta_i}) +1$
\EndFor
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
To prove Theorem \ref{Thm_Order_Optimal}, we first state the following lemma:
\begin{lem}\label{Lem_Converse}
The coding delay for a general network with $L$ servers is lower bounded by
\begin{equation}\label{Eq_Converse_Theorem}
D^*(M) \geq \max_{s \in \{1,\dots,K\}}\frac{1}{\min(s,L)}\left(s-\frac{s}{\lfloor \frac{N}{s} \rfloor}M\right) \frac{F}{m}.
\end{equation}
\end{lem}
\begin{proof}
See Appendix A for the proof.
\end{proof}
The above lemma can be used to prove optimality of the proposed scheme in some range of parameters. The following corollary states the result.
\begin{cor}\label{Cor_Converse}
All $(M-T_C)$ pairs in Theorem \ref{Thm_flexible} corresponding to $Q=0$ are optimal. Thus, the converse line $\left(1-\frac{M}{N}\right)\frac{F}{m}$ is achieved for $M^*\leq M \leq N$, where
\begin{equation}
M^*=\min_{p_1+\dots+p_L=K}\frac{N}{K}\frac{\sum_1^L{\frac{p_i(p_i-1)}{K-p_i+1}}}{\sum_1^L{\frac{p_i}{K-p_i+1}}}.
\end{equation}
\end{cor}
\begin{proof}
Theorem \ref{Thm_flexible} states that all the $(M-T_C)$ pairs in (\ref{Eq_Main_Resuls_Flexile_Th}) are achievable. By some simple calculations one can show that for these achievable pairs we have:
\begin{eqnarray}\label{Eq_Flexible_Converse_Cor_1}
\left(1-\frac{M}{N}\right)\frac{F}{m}=\left(1-\frac{Q}{K}\right) T_C.
\end{eqnarray}
Therefore, if we put $Q=0$ in Theorem \ref{Thm_flexible}, all the corresponding $(M-T_C)$ pairs satisfy
\begin{eqnarray}
\nonumber T_C=\left(1-\frac{M}{N}\right)\frac{F}{m}.
\end{eqnarray}
On the other hand, by considering the case of $s=1$ in Lemma \ref{Lem_Converse} we know that the optimal coding delay satisfies:
\begin{eqnarray}
\nonumber D^*(M) \geq \left(1-\frac{M}{N}\right)\frac{F}{m},
\end{eqnarray}
which matches our achievable coding delay. Therefore, by setting $Q=0$ in Theorem \ref{Thm_flexible}, for all
\begin{eqnarray}
\nonumber M=\frac{N}{K}\frac{\sum_1^L{\frac{p_i(p_i-1)}{K-p_i+1}}}{\sum_1^L{\frac{p_i}{K-p_i+1}}}, \quad p_1+\dots+p_L=K, \ p_i \geq 2,
\end{eqnarray}
the achievable coding delay is optimal. By minimizing the cache size over all partitionings satisfying $p_1+\dots+p_L=K, p_i \geq 2$, the proof is complete.
\end{proof}
There is an interesting intuition behind Eq. (\ref{Eq_Flexible_Converse_Cor_1}). In the proposed scheme for flexible networks, we assigned a subset of $Q$ users to the virtual server, and all the other $K-Q$ users benefited from the other servers. Thus, in each transmission, a fraction $\frac{K-Q}{K}$ of the served users are real users. This is exactly the coefficient that shows how close the achieved delay is to the optimal curve $(1-M/N)F/m$.
Finally, we are ready to prove Theorem \ref{Thm_Order_Optimal}. We consider two regimes for the cache size. First, we let
\begin{eqnarray}
\nonumber M^*=\frac{N}{K}\left(\frac{K}{L}-1\right).
\end{eqnarray}
In the first regime where $M \geq M^*$, using Theorem \ref{Thm_flexible} with $Q=0$ and $p_1,\dots,p_L=\frac{K}{L}$, we obtain:
\begin{eqnarray}
\nonumber T_C = \left(1-\frac{M}{N}\right)\frac{F}{m}.
\end{eqnarray}
As Corollary \ref{Cor_Converse} states, for this case the optimal curve is achieved.
For the second regime where $M<M^*$ (such that $KM/N \in \mathbb{N}$), set
\begin{eqnarray}
\nonumber & &Q=K-\left(\frac{KM}{N}+1\right)L \\ \nonumber
& & p_1,\dots,p_L=\frac{K-Q}{L} = \left(\frac{KM}{N}+1\right).
\end{eqnarray}
Then, we obtain:
\begin{eqnarray}
\nonumber T_C = \frac{1}{L}\frac{K(1-M/N)}{1+KM/N}\frac{F}{m}.
\end{eqnarray}
On the other hand, from Lemma \ref{Lem_Converse} we have:
\begin{eqnarray}
\nonumber D^* &\geq& \max_{s \in \{1,\dots,K\}}\frac{1}{\min(s,L)}\left(s-\frac{s}{\lfloor \frac{N}{s} \rfloor}M\right) \frac{F}{m} \\ \nonumber
&\geq& \max_{s \in \{1,\dots,K\}}\frac{1}{L}\left(s-\frac{s}{\lfloor \frac{N}{s} \rfloor}M\right) \frac{F}{m} \\ \nonumber
&\geq& \frac{1}{L}\frac{1}{12}\frac{K(1-M/N)}{1+KM/N}\frac{F}{m} \\
&\geq& \frac{1}{12} T_C,
\end{eqnarray}
where the last inequality follows from \cite{Maddah-Ali_Fundamental_2014}. This concludes the proof of Theorem \ref{Thm_Order_Optimal}.
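As a numerical sanity check, the sketch below evaluates the lower bound of Lemma \ref{Lem_Converse} and compares it with the delay achieved by the flexible scheme for one illustrative parameter set ($K=N=6$, $L=2$, the $Q=0$ partition $(3,3)$, for which $M=2$ and $T_C=2/3$ as computed in the earlier sketch):
\begin{verbatim}
from math import floor

def lower_bound(K, L, N, M, F=1.0, m=1.0):
    # Cut-set-type bound of Lemma Lem_Converse
    return max((s - s * M / floor(N / s)) / min(s, L)
               for s in range(1, K + 1)) * F / m

K, L, N, M, T_C = 6, 2, 6, 2.0, 2.0 / 3.0
print(lower_bound(K, L, N, M), T_C)   # both equal 2/3: the bound is tight here
\end{verbatim}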
\section{Linear Networks: Details}\label{Sec_Linear}
In order to explain the main concepts behind the coding strategy proposed for linear networks, we will first present a simple example:
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Multi_Server_3_3_2_Example.eps}
\end{center}
\caption{Example \ref{Examp_Linear_1} ($L=2, K=3, N=3$): Lower and upper bounds on the coding delay.\label{Fig_Multi_Server_3_3_2_Example}}
\end{figure}
\begin{examp}[$L=2, K=3, N=3$]\label{Examp_Linear_1}
In this example, we consider a network consisting of $L=2$ servers, $K=3$ users, and a library of $N=3$ files, namely $W_1=A$, $W_2=B$, and $W_3=C$. By definition of linear networks the input-output relation of this network is characterized by a $3$-by-$2$ random matrix $\mathbf{H}$. Lower and upper bounds for the coding delay of this setting are shown in Fig. \ref{Fig_Multi_Server_3_3_2_Example}. The lower bound is due to Lemma \ref{Lem_Converse} as follows:
\begin{equation}
D^* \geq \max\left(1-\frac{M}{3},\frac{3-3M}{2} \right)\frac{F}{m}.
\end{equation}
The upper bound is due to Theorem \ref{Thm_linear}, except for the achievable pair $(M,T_C)=(\frac{1}{3},1)$, which will be discussed later. We have also exploited the fact that the straight line connecting every two achievable points on the $M-T_C$ curve is also achievable, as shown in \cite{Maddah-Ali_Fundamental_2014}. In order to get a glimpse of the ideas behind the coding strategy of Theorem \ref{Thm_linear}, next we discuss the achievable $(M,T_C)$ pair $(1,\frac{2}{3})$. In this case, as we will show, we can benefit both from the local/global caching gain (provided by the cache of each user) and the multiplexing gain (provided by multiple servers in the network). The question is how to design a scheme so that we can exploit both gains simultaneously. In what follows we provide the solution:
Suppose that (without loss of generality) in the second phase, the first, second, and third users request files $A$, $B$, and $C$ respectively. Assume that the cache content placement is similar to that of \cite{Maddah-Ali_Fundamental_2014}: First, divide each file into three equal-sized non-overlapping sub-files:
\begin{eqnarray}
\nonumber A&=&[A_1,A_2,A_3] \\ \nonumber
B&=&[B_1,B_2,B_3] \\ \nonumber
C&=&[C_1,C_2,C_3].
\end{eqnarray}
Then, put the following contents in the cache of users:
\begin{eqnarray}
\nonumber Z_1&=&[A_1,B_1,C_1] \\ \nonumber
Z_2&=&[A_2,B_2,C_2] \\ \nonumber
Z_3&=&[A_3,B_3,C_3].
\end{eqnarray}
Let $L(x_1,\dots,x_m)$ be a random linear combination of $x_1,\dots,x_m$ as defined earlier. Consequently, in this strategy, the two servers send the following transmit block:
\begin{equation}\label{Eq_L2_K3_N3_M1_transmit}
\mathbf{X}=[\mathbf{h}_1^{\perp}L_1^1(C_2,B_3)+\mathbf{h}_2^{\perp}L_2^1(A_3,C_1)+\mathbf{h}_3^{\perp}L_3^1(A_2,B_1), \mathbf{h}_1^{\perp}L_1^2(C_2,B_3)+\mathbf{h}_2^{\perp}L_2^2(A_3,C_1)+\mathbf{h}_3^{\perp}L_3^2(A_2,B_1) ].
\end{equation}
where the random linear combination operator $L(\cdot,\cdot)$ operates on sub-files, in an element-wise manner, and $\mathbf{h}_i^{\perp}$ is an orthogonal vector to $\mathbf{h}_i$ (i.e. $\mathbf{h}_i.\mathbf{h}_i^{\perp}=0$). Let us focus on the first user who will receive:
\begin{eqnarray}
\nonumber \mathbf{h}_1.\mathbf{X}&=&[(\mathbf{h}_2^{\perp} . \mathbf{h}_1) L_2^1(A_3,C_1)+(\mathbf{h}_3^{\perp} . \mathbf{h}_1)L_3^1(A_2,B_1), (\mathbf{h}_2^{\perp} . \mathbf{h}_1)L_2^2(A_3,C_1)+(\mathbf{h}_3^{\perp} . \mathbf{h}_1)L_3^2(A_2,B_1) ] \\
&=& [L^1(A_2,A_3,C_1,B_1),L^2(A_2,A_3,B_1,C_1)].
\end{eqnarray}
As the first user has already cached $B_1$ and $C_1$ in the first phase, by subtracting the effect of interference terms, the first user can recover:
\begin{eqnarray}
\nonumber [L(A_2,A_3),L'(A_2,A_3)],
\end{eqnarray}
which consists of two independent (with high probability for large field size $q$) linear combinations of $A_2$ and $A_3$. By
solving these independent linear equations, this user can decode $A_2$ and $A_3$, and with the help of $A_1$ cached in the first phase, he can recover the whole requested file $A$. It can easily be verified that the other users can also decode their requested files in a similar fashion. The transmit block indicated in (\ref{Eq_L2_K3_N3_M1_transmit}) has size $2$-by-$\frac{2F}{3m}$, resulting in $T_C=\frac{2F}{3m}$ time slots.
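The decoding step of this example can also be mimicked numerically. The sketch below works over the reals as a stand-in for a sufficiently large $\mathbb{F}_q$ (an assumption made purely for illustration), treats each sub-file as a single scalar symbol, keeps track of the random combination coefficients as a receiver would, and verifies that user 1 recovers $A_2$ and $A_3$ after removing the cached interference:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
perp = lambda v: np.array([-v[1], v[0]])      # vector orthogonal to v in R^2

h = rng.standard_normal((3, 2))               # channel rows h_1, h_2, h_3
A, B, C = (rng.standard_normal(3) for _ in range(3))

eqs, rhs = [], []
for _ in range(2):                            # two columns of X in the example
    c1, c2, c3 = rng.standard_normal((3, 2))  # coefficients of L_1, L_2, L_3
    x = (perp(h[0]) * (c1 @ [C[1], B[2]])     # L_1(C_2, B_3)
       + perp(h[1]) * (c2 @ [A[2], C[0]])     # L_2(A_3, C_1)
       + perp(h[2]) * (c3 @ [A[1], B[0]]))    # L_3(A_2, B_1)
    y = h[0] @ x                              # what user 1 receives
    y -= (h[0] @ perp(h[1])) * c2[1] * C[0]   # remove cached C_1
    y -= (h[0] @ perp(h[2])) * c3[1] * B[0]   # remove cached B_1
    eqs.append([(h[0] @ perp(h[2])) * c3[0],  # remaining coefficient of A_2
                (h[0] @ perp(h[1])) * c2[0]]) # remaining coefficient of A_3
    rhs.append(y)

A2, A3 = np.linalg.solve(np.array(eqs), np.array(rhs))
print(np.allclose([A2, A3], [A[1], A[2]]))    # True: user 1 decodes A_2, A_3
\end{verbatim}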
Let us forget about one of the servers for a moment and assume we have just one server. Then, the scheme proposed in \cite{Maddah-Ali_Fundamental_2014} only benefits two users per transmission through the pure global caching gain. Also, in the case of two servers and no cache memory (the aforementioned case of $M=0$), we could design a scheme which benefited only two users through the pure multiplexing gain. However, through the proposed strategy, we have designed a scheme which exploits both the global caching and multiplexing gains such that all three users can benefit from each transmission.
Finally, let us discuss the achievable pair $(M,T_C)=(\frac{1}{3},1)$, where we need to adopt a different strategy. Assume we divide each of files $A$, $B$ and $C$ into three equal parts and fill the caches as follows:
\begin{eqnarray}
\nonumber Z_1&=&[A_1+B_1+C_1] \\ \nonumber
Z_2&=&[A_2+B_2+C_2] \\ \nonumber
Z_3&=&[A_3+B_3+C_3].
\end{eqnarray}
Consequently, the servers transmit the following vectors:
\begin{eqnarray}
\nonumber \mathbf{X}_1&=&\frac{\mathbf{h}_3^{\perp}}{\mathbf{h}_1.\mathbf{h}_3^{\perp}}B_1 + \frac{\mathbf{h}_2^{\perp}}{\mathbf{h}_1.\mathbf{h}_2^{\perp}}C_1 \\ \nonumber
\mathbf{X}_2&=&\frac{\mathbf{h}_3^{\perp}}{\mathbf{h}_2.\mathbf{h}_3^{\perp}}A_2 + \frac{\mathbf{h}_1^{\perp}}{\mathbf{h}_2.\mathbf{h}_1^{\perp}}C_2 \\
\mathbf{X}_3&=&\frac{\mathbf{h}_2^{\perp}}{\mathbf{h}_3.\mathbf{h}_2^{\perp}}A_3 + \frac{\mathbf{h}_1^{\perp}}{\mathbf{h}_3.\mathbf{h}_1^{\perp}}B_3.
\end{eqnarray}
It can be easily verified that the first user receives $A_2$, $A_3$, and $B_1+C_1$. So, with the help of its cache content, it can decode the whole file $A$. Similarly, the other users can decode their requested files. As each block $\mathbf{X}_i$ is a $2$-by-$\frac{F}{3m}$ matrix of symbols, the total delay required to fulfill the users' demands is $T_C=\frac{F}{m}$ time slots.
\end{examp}
Example \ref{Examp_Linear_2} also discusses the coding delay for a linear network with four users. The details of the coding strategy of Example \ref{Examp_Linear_2}, which are provided in Appendices B and C, further clarify the basic ideas behind the proposed scheme. However, in the rest of this section, we provide the formal proof of Theorem \ref{Thm_linear}.
\\ \\
\textbf{Cache Placement Strategy:} The cache content placement phase is identical to \cite{Maddah-Ali_Fundamental_2014}: Define $t\triangleq MK/N$, and divide each file into ${K \choose t}$ non-overlapping sub-files as\footnote{It should be noted that the definition of sub-files and mini-files here differs from that of flexible networks.}:
\begin{eqnarray}
\nonumber W_n=\left(W_{n,\tau}: \tau \subset [K], |\tau|=t\right), n=1,\dots,N,
\end{eqnarray}
where each sub-file consists of $F/{K \choose t}$ bits. In the first phase, we store the sub-file $W_{n,\tau}$ in the cache of user $k$ if $k \in \tau$. Therefore, the total amount of cache each user needs for this placement is:
\begin{eqnarray}
\nonumber N\frac{F}{{K \choose t}}{K-1 \choose t-1} = MF
\end{eqnarray}
bits.
We further divide each sub-file into ${K-t-1 \choose L-1}$ non-overlapping equal-sized mini-files as follows:
\begin{eqnarray}
\nonumber W_{n,\tau}=\left(W_{n,\tau}^j: j=1,\dots,{K-t-1 \choose L-1}\right).
\end{eqnarray}
Thus, each mini-file consists of $F/\left({K \choose t} {K-t-1 \choose L-1}\right)$ bits.
\\
\\
\textbf{Content Delivery Strategy:} Consider an arbitrary $(t+L)$-subset of users denoted by $S$ (i.e. $S \subseteq [K], |S|=t+L$). For this specific subset $S$, denote all $(t+1)$-subsets of $S$ by $T_i, i=1,\dots,{t+L \choose t+1}$ (i.e. $T_i \subseteq S, |T_i|=t+1$). First, we assign an $L$-by-$1$ vector $\mathbf{u}_{S}^{T_i}$ to each $T_i$ such that
\begin{eqnarray}\label{Eq_Vector_Constraints}
\nonumber \mathbf{u}_{S}^{T_i} &\perp& \mathbf{h}_j \hspace{5mm} \mathrm{for \hspace{2mm} all} \hspace{5mm} j\in S \backslash T_i \\
\mathbf{u}_{S}^{T_i} &\not \perp& \mathbf{h}_j \hspace{5mm} \mathrm{for \hspace{2mm} all} \hspace{5mm} j\in T_i.
\end{eqnarray}
The following lemma specifies the required field size such that the aforementioned condition is met with high probability:
\begin{lem}
If the elements of the network transfer matrix $\mathbf{H}$ are uniformly and independently chosen from $\mathbb{F}_q$, then we can find vectors which satisfy (\ref{Eq_Vector_Constraints}) with high probability if:
\begin{equation}
q \gg (t+1) {K \choose t+L}{t+L \choose t+1}.
\end{equation}
\end{lem}
\begin{proof}
First, since the set $S \backslash T$ has $L-1$ elements, we require $\mathbf{u}_{S}^{T_i}$ to be orthogonal to $L-1$ arbitrary vectors, which is feasible in an $L$ dimensional space of any field size.
Second, the total number of non-orthogonality constraints in (\ref{Eq_Vector_Constraints}) for all possible subsets $S$ is $(t+1) {K \choose t+L}{t+L \choose t+1}$. On the other hand, it can be easily verified that the probability that two uniformly chosen random vectors in $\mathbb{F}_q$ are orthogonal is $1/q$. Thus, by using the union bound, the probability that at least one non-orthogonality constraint in (\ref{Eq_Vector_Constraints}) is violated is upper bounded by
\begin{eqnarray}
\nonumber \frac{(t+1) {K \choose t+L}{t+L \choose t+1}}{q} \ll 1,
\end{eqnarray}
which concludes the proof.
\end{proof}
For each $T_i$ define:
\begin{equation}\label{Eq_Linear_Proof_G_T}
G(T_i)=L_{r \in T_i}\left(W_{d_r,T_i \backslash \{r\}}^j\right),
\end{equation}
where $W_{d_r,T_i \backslash \{r\}}^j$ is a mini-file which is available in the cache of all users in $T_i$ except $r$, and is required by user $r$. Also, $L_{r \in T_i}$ represents a random linear combination of the corresponding mini-files for all $r \in T_i$. Note that the index $j$ is chosen such that these mini-files have not appeared in the previous $(t+L)$-subsets. Thus, if we define $N(r,T \backslash \{r\})$ as the index of the next fresh mini-file required by user $r$ and present in the cache of the users $T \backslash \{r\}$, then we can rewrite:
\begin{equation}\label{Eq_Linear_General_GTi_1}
G(T_i)=L_{r \in T_i}\left(W_{d_r,T_i \backslash \{r\}}^{N(r,T_i \backslash \{r\})}\right),
\end{equation}
Subsequently, we make the following definition for such $(t+L)$-subset $S$:
\begin{equation}\label{Eq_Linear_Proof_X_S}
\mathbf{X}(S)=\sum_{T \subseteq S, |T|=t+1}{\mathbf{u}_{S}^{T}G(T)}.
\end{equation}
We repeat the above procedure ${t+L-1 \choose t}$ times for the given $(t+L)$-subset $S$ in order to derive different independent versions of $\mathbf{X}_\omega(S), \omega=1,\dots,{t+L-1 \choose t}$. In other words, $\mathbf{X}_\omega(S)$'s only differ in the random coefficients chosen for calculating the linear combinations in (\ref{Eq_Linear_General_GTi_1}), which makes them independent linear combinations of the corresponding mini-files, with high probability. Thus, to distinguish between these different versions notationally we define:
\begin{equation}\label{Eq_Linear_General_GTi_2}
G_\omega(T_i)=L_{r \in T_i}^\omega\left(W_{d_r,T_i \backslash \{r\}}^{N(r,T_i \backslash \{r\})}\right), \mathbf{X}_\omega(S)=\sum_{T \subseteq S, |T|=t+1}{\mathbf{u}_{S}^{T}G_\omega(T)}.
\end{equation}
Subsequently, for this $(t+L)$-subset $S$, the servers transmit the block
\begin{equation}
\left[\mathbf{X}_1(S),\dots,\mathbf{X}_{{t+L-1 \choose t}}(S)\right],
\end{equation}
and we update $N(r, T \backslash \{r\})$ for those mini-files which have appeared in the linear combinations in (\ref{Eq_Linear_General_GTi_1}). When the above procedure for this specific subset $S$ is completed, we consider another $(t+L)$-subset of users and do the above procedure for that subset, and repeat this process until all $(t+L)$-subsets of $[K]$ have been taken into account.
Next, let us calculate the coding delay of this scheme, after which we prove the correctness of the content delivery strategy. For a fixed $(t+L)$-subset $S$, each $\mathbf{X}_\omega(S)$ is an $L$-by-$\frac{F/m}{{K \choose t} {K-t-1 \choose L-1}}$ block of symbols. Thus, the transmit block for $S$, i.e. $\left[\mathbf{X}_1(S),\dots,\mathbf{X}_{{t+L-1 \choose t}}(S)\right]$, is an $L$-by-$\frac{F/m}{{K \choose t} {K-t-1 \choose L-1}} {t+L-1 \choose t}$ block. Since this transmission should be repeated for all ${K \choose t+L}$ $(t+L)$-subsets of users, the whole transmit block size will be
\begin{eqnarray}
\nonumber L \mathrm{-by-} \frac{ {t+L-1 \choose t}}{{K \choose t} {K-t-1 \choose L-1}} {K \choose t+L}\frac{F}{m}= L \mathrm{-by-} \frac{K(1-M/N)}{L+MK/N} \frac{F}{m},
\end{eqnarray}
which will result in the coding delay of
\begin{equation}
T_C=\frac{K(1-M/N)}{L+MK/N}\frac{F}{m}
\end{equation}
time slots. Algorithm 2 shows the pseudo-code of the aforementioned procedure for linear networks. \\
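As a short numerical check of this delay expression (the example parameters are illustrative), the counting argument above and the closed form agree:
\begin{verbatim}
from math import comb

def T_linear(K, L, M, N, F=1.0, m=1.0):
    t = M * K // N                              # assumes MK/N is an integer
    total = (comb(K, t + L) * comb(t + L - 1, t) * F / m
             / (comb(K, t) * comb(K - t - 1, L - 1)))
    assert abs(total - K * (1 - M/N) / (L + M*K/N) * F/m) < 1e-9
    return total

# K=3 users, L=2 servers, N=3 files, M=1: 2/3 time slots, as in Example 1
print(T_linear(K=3, L=2, M=1, N=3))
\end{verbatim}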
\textbf{Correctness Proof}: Consider user $k$, who is interested in acquiring the file $W_{d_k}$. This file is partitioned into two parts: 1) the part already cached by this user in the first phase, which consists of the sub-files:
\begin{eqnarray}
\left(W_{d_k,\tau}: \tau \subseteq [K], |\tau|=t, k \in \tau\right).
\end{eqnarray}
2) the parts which should be delivered to this user through the content delivery strategy, which consist of the sub-files:
\begin{eqnarray}
\left(W_{d_k,\tau}: \tau \subseteq [K], |\tau|=t, k \not \in \tau\right).
\end{eqnarray}
Thus, since, by Lemma \ref{Lem_Linear_Proof_2} below, the sub-files in the second category are successfully delivered to this user through the content delivery strategy, this user can decode the requested file. Moreover, since this user was arbitrarily chosen, all users will similarly decode their requested files.
Before proving Lemma \ref{Lem_Linear_Proof_2} we need another lemma which is proved first:
\begin{lem}\label{Lem_Linear_Proof_1}
Suppose an arbitrary subset $T \subseteq [K]$ such that $|T|=t+1$, and $k \in T$. Then, through the above content placement and delivery strategy, user $k$ will be able to decode the sub-file $W_{d_k, T \backslash \{k\}}$.
\end{lem}
\begin{proof}
Consider those transmissions which are assigned to the $(t+L)$-subsets containing $T$. There exist ${K-t-1 \choose L-1}$ such subsets. Let us focus on one of them, namely $S$. Corresponding to $S$, the following transmit block is sent by the servers:
\begin{eqnarray}
\left[\mathbf{X}_1(S),\dots,\mathbf{X}_{{t+L-1 \choose t}}(S)\right],
\end{eqnarray}
and subsequently, user $k$ receives:
\begin{eqnarray}\label{Eq_Linear_Proof_Recieve_k_1}
\mathbf{h}_k .\left[\mathbf{X}_1(S),\dots,\mathbf{X}_{{t+L-1 \choose t}}(S)\right].
\end{eqnarray}
Let's focus on $\mathbf{h}_k .\mathbf{X}_1(S)$:
\begin{eqnarray}\label{Eq_Linear_Proof_Recieve_k_2}
\nonumber \mathbf{h}_k .\mathbf{X}_1(S)&\stackrel{(a)}=&\mathbf{h}_k . \sum_{T \subseteq S, |T|=t+1}{\mathbf{u}_{S}^{T}G_1(T)} \\ \nonumber
&\stackrel{(b)}=&\sum_{T \subseteq S, |T|=t+1, k \in T}{\left(\mathbf{h}_k .\mathbf{u}_{S}^{T}\right)G_1(T)} \\
&\stackrel{(c)}=&\sum_{T \subseteq S, |T|=t+1, k \in T}{\left(\mathbf{h}_k .\mathbf{u}_{S}^{T}\right)L_{r \in T}^1(W_{d_r,T \backslash \{r\}}^j)},
\end{eqnarray}
where (a) follows from (\ref{Eq_Linear_Proof_X_S}), (b) follows from the fact that
\begin{eqnarray}
\mathbf{u}_{S}^{T} &\perp& \mathbf{h}_k \hspace{5mm} \mathrm{for \hspace{2mm} all} \hspace{5mm} k\in S \backslash T,
\end{eqnarray}
and (c) is due to (\ref{Eq_Linear_Proof_G_T}). In (\ref{Eq_Linear_Proof_Recieve_k_2}), user $k$ can extract $W_{d_k,T \backslash \{k\}}^j$ from the linear combination $L_{r \in T}^1(W_{d_r,T \backslash \{r\}}^j)$, since all the other interference terms are present in his cache. Thus, by removing the interference terms, user $k$ can recover the following linear combination from (\ref{Eq_Linear_Proof_Recieve_k_2}):
\begin{eqnarray}
\nonumber L_{T \subseteq S, |T|=t+1, k \in T}^1\left(W_{d_k,T \backslash \{k\}}^j\right),
\end{eqnarray}
which is a random linear combination of ${t+L-1 \choose t}$ mini-files desired by user $k$. Moreover, since in (\ref{Eq_Linear_Proof_Recieve_k_1}) user $k$ receives ${t+L-1 \choose t}$ independent random linear combinations of these mini-files, he can recover the whole set of mini-files:
\begin{eqnarray}
\nonumber \left(W_{d_k,T \backslash \{k\}}^j: T \subseteq S, |T|=t+1, k \in T\right).
\end{eqnarray}
Thus, for the $T$ specified in this lemma, he can recover the mini-file $W_{d_k,T \backslash \{k\}}^j$. Now, since there exists a total of ${K-t-1 \choose L-1}$ $(t+L)$-subsets containing this specific $T$, by considering the transmissions corresponding to each, this user will recover ${K-t-1 \choose L-1}$ \emph{distinct} mini-files of the form $W_{d_k,T \backslash \{k\}}^j$. The distinctness is guaranteed by the appropriate updating of the index $N(\cdot,\cdot)$. These mini-files recover the sub-file $W_{d_k,T \backslash \{k\}}$, and the proof is concluded.
\end{proof}
\begin{lem}\label{Lem_Linear_Proof_2}
Through the above content delivery strategy an arbitrary user $k$ will be able to decode all the sub-files:
\begin{eqnarray}
\left(W_{d_k,\tau}: \tau \subseteq [K], |\tau|=t, k \not \in \tau\right).
\end{eqnarray}
\end{lem}
\begin{proof}
Consider an arbitrary $\tau \subseteq [K]$ such that $|\tau|=t, k \not \in \tau$. Define $T=\tau \cup \{k\}$. Then, due to Lemma \ref{Lem_Linear_Proof_1}, user $k$ is able to decode $W_{d_k,\tau}$. Since $\tau$ was chosen arbitrarily, the proof is complete.
\end{proof}
\begin{algorithm}\label{Alg_Main}
\caption{Multi-Server Coded Caching - Linear Networks}
\begin{algorithmic}[1]
\Procedure{PLACEMENT}{$W_1,\dots,W_N$}
\State $t \gets MK/N$
\ForAll{$n \in [N]$}
\State split $W_n$ into $(W_{n,\tau}: \tau \subset [K], |\tau|=t)$ of equal size
\ForAll{$\tau \subset [K], |\tau|=t$}
\State split $W_{n,\tau}$ into $(W_{n,\tau}^j: j=1,\dots,{K-t-1 \choose L-1})$ of equal size
\EndFor
\EndFor
\ForAll{$k \in [K]$}
\State $Z_k \gets (W_{n,\tau}^j: \tau \subset [K], |\tau|=t, k \in \tau, j=1,\dots,{K-t-1 \choose L-1}, n \in [N])$
\EndFor
\EndProcedure
\\
\Procedure{DELIVERY}{$W_1,\dots,W_N$, $d_1,\dots,d_K$}
\State $t \gets MK/N$
\ForAll{$T \subseteq [K], |T|=t+1$}
\ForAll{$r \in T$}
\State $N(r,T\backslash\{r\}) \gets 1 $
\EndFor
\EndFor
\ForAll{$S \subseteq [K], |S|=t+L$}
\ForAll{$T \subseteq S, |T|=t+1$}
\State Design $\mathbf{u}_{S}^{T}$ such that: for all $j \in S$, $\mathbf{h}_j \perp \mathbf{u}_{S}^{T}$ if $j \not\in T$ and $\mathbf{h}_j \not \perp \mathbf{u}_{S}^{T}$ if $j \in T$
\EndFor
\ForAll {$\omega=1,\dots,{t+L-1 \choose t}$}
\ForAll{$T \subseteq S, |T|=t+1$}
\State $G_\omega(T) \gets L_{r \in T}^\omega\left( W_{{d_r},T\backslash\{r\}}^{N(r,T\backslash\{r\})}\right)$
\EndFor
\State $\mathbf{X}_\omega(S) \gets \sum_{T \subseteq S, |T|=t+1} {\mathbf{u}_{S}^{T} G_\omega(T)}$
\EndFor
\State \textbf{transmit} $\mathbf{X}(S)=\left[\mathbf{X}_1(S),\dots,\mathbf{X}_{{t+L-1 \choose t}}(S)\right]$
\ForAll{$T \subseteq S, |T|=t+1$}
\ForAll{$r \in T$}
\State $N(r,T\backslash\{r\}) \gets N(r,T\backslash\{r\}) + 1$
\EndFor
\EndFor
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\section{Conclusions}\label{Sec_Conclusions}
In this paper, we investigated coded caching in a multi-server network where servers are connected to multiple cache-enabled clients. Based on the topology of the network, we defined three types of networks, namely, dedicated, flexible, and linear networks. In dedicated and flexible networks, we assume that the internal nodes are aware of the network topology and accordingly route the data. In linear networks, we assume no topology knowledge at the internal nodes, and thus, the internal nodes perform random linear network coding. We have shown that knowledge of the type of network topology plays a key role in the design of proper caching mechanisms in such networks. Our results show that all network types can benefit from both caching and multiplexing gains. In fact, in dedicated and linear networks the global caching and multiplexing gains appear in an additive form, whereas in flexible networks they appear in a multiplicative form, leading to an order-optimal solution in terms of coding delay.
\newpage
\bibliographystyle{ieeetr}
\section{Introduction}
Extensive neutron-scattering experiments have clarified a close relation between the spin correlations and the superconductivity in high-{\it T}$_c$ superconductors\cite{Kastner}.
In the prototypical high-{\it T}$_c$ superconductor La$_{2-x}$Sr$_{x}$CuO$_{4}$ (LSCO), in which the doping dependence of physical properties can be easily investigated by changing the Sr concentration, the following results were obtained:
(i) the incommensurate (IC) spin correlations were observed in the whole superconducting (SC) phase with 0.055$\leqslant$$x$$\leqslant$0.30\cite{Birgeneau};
(ii) the direction of the IC wavevector in the SC phase differs by 45$^{\circ}$ in angle from that observed in the spin-glass phase for $x$$<$0.055\cite{Matsuda};
(iii) the incommensurability ($\delta$) is proportional to {\it T}$_c$ in the underdoped region\cite{Yamada}.
These experimental facts naturally suggest that clarifying the origin of the IC spin correlations is important for understanding the role of magnetism in the mechanism of high-{\it T}$_c$ superconductivity.
On the other hand, it is known that the distortion of CuO$_2$ planes can affect the superconductivity and the spin correlations.
Systematic neutron-scattering studies of La$_{1.875}$Ba$_{0.125-x}$Sr$_{x}$CuO$_{4}$, in which the crystal structure varies from the low-temperature-tetragonal (LTT) phase to the low-temperature-orthorhombic (LTO) phase upon increasing $x$, clarified the enhancement of the low-energy component of spin fluctuations and the stability of charge stripe order in the LTT phase, where $T_c$ is strongly suppressed\cite{Fujita}.
Furthermore, a resistivity measurement performed under high pressures showed a strong correlation between the corrugation of the CuO$_2$ planes and $T_c$ in the LTO phase and suggested a higher $T_c$ in structures with flat CuO$_2$ planes \cite{Nakamura}.
Therefore, it is interesting to study the relationship between the spin correlation and the corrugation of CuO$_2$ planes in order to extract an intrinsic interplay between the superconductivity and the spin correlation.
Motivated by the above, we have performed a neutron-scattering experiment on La$_{1.94-x}$Sr$_{x}$Ce$_{0.06}$CuO$_{4}$ (LSCCO), which is a system co-doped with Sr$^{2+}$ and Ce$^{4+}$ ions into La$_2$CuO$_4$.
The corrugation of the CuO$_2$ planes is relaxed with increasing Sr concentration\cite{Fleming}, and Ce doping is considered to reduce the total hole concentration without a remarkable change in the corrugation of the CuO$_2$ planes.
Thereby, we can prepare less-corrugated CuO$_2$ planes in LSCCO compared with the LSCO system at a similar hole concentration ($p$).
\section{Experimental details}
\begin{figure}[t]
\begin{center}
\includegraphics[width=6.4cm]{fig1.eps}
\end{center}
\caption{The magnetic excitation at $\omega$=4 meV and $\omega$=8 meV observed in LSCCO (a) $x$=0.14, (b) $x$=0.18, and (c) $x$=0.24 (TOPAN). The vertical scale is normalized by the neutron monitor counts. The total counting time corresponds to $\sim$5.9 minutes ($\omega$=4 meV) and $\sim$5.4 minutes ($\omega$=8 meV). No clear magnetic signal is observed for $x$=0.18 and $x$=0.24; in contrast, well-defined incommensurate peaks are observed for $x$=0.14 at $\omega$=4 meV.
}
\label{f1}
\end{figure}
For the experiment, we prepared single crystals of LSCCO with $x$=0.14 ($p$$\sim$0.10), $x$=0.18 ($p$$\sim$0.14), and $x$=0.24 ($p$$\sim$0.20), corresponding to underdoped, optimally-doped, and slightly-overdoped samples, respectively.
The crystals, grown by the traveling-solvent floating-zone method with a typical length of $\sim$100 mm and diameter of $\sim$8 mm, were annealed in O$_{2}$ gas flow and cut into $\sim$35 mm-long pieces.
Two of the cut samples were assembled so that the CuO$_2$ planes lie in the horizontal scattering plane, and the rest of the samples were further sliced for the magnetic susceptibility measurements.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6.4cm]{fig2.eps}
\end{center}
\caption{
The magnetic spectra (a) elastic, (b) $\omega$=0.5meV, and (c) $\omega$=1.0meV observed in LSCCO $x$=0.14 (HER). All magnetic spectra show well-defined incommensurate signals.}
\label{f2}
\end{figure}
Inelastic neutron-scattering measurements for 2 meV$\leqslant$$\omega$$\leqslant$12 meV were performed on the thermal neutron triple-axis spectrometer TOPAN installed in the reactor of JRR-3 at Japan Atomic Energy Agency (JAEA).
We fixed the final neutron energy ($E_f$) at 13.5meV and used a typical horizontal collimation sequence of 50$^{\prime}$-100$^{\prime}$-Sample-60$^{\prime}$-180$^{\prime}$.
To reduce the flux of high-energy neutrons and to remove higher-order wavelength neutrons, a sapphire crystal and pyrolytic graphite were placed before and after the sample, respectively.
Elastic and low-energy ($\omega$$\leqslant$2 meV) inelastic neutron-scattering measurements with high experimental resolution were done on LSCCO $x$=0.14 by using the cold neutron triple-axis spectrometer HER installed in the guide-hall of JRR-3.
In the measurement at HER, $E_f$ of 5meV and the horizontal collimation sequence of 32$^{\prime}$-40$^{\prime}$-S-80$^{\prime}$(180$^{\prime}$)-80$^{\prime}$(180$^{\prime}$) for elastic (inelastic) measurement were selected.
In the elastic measurement, a Be filter was placed before the sample.
In this paper, crystallographic indexes are denoted as ($h$ $k$ 0) in the tetragonal I4/mmm notation.
In order to determine the superconducting transition temperature {\it T}$_c$, we measured the magnetic susceptibility using a superconducting quantum interference device magnetometer.
The evaluated {\it T}$_c$'s were $\sim$30 K ($x$=0.14), $\sim$35 K ($x$=0.18), and $\sim$25 K ($x$=0.24), respectively.
We furthermore determined $T_{\rm c}$ for several crystals in the $x$ range from 0.10 to 0.30 to construct a phase diagram for the LSCCO system.
From a comparison of the phase diagrams of the present LSCCO and LSCO systems, we concluded that 6\% Ce doping reduces the hole concentration by $\sim$4\%.
In addition, it was confirmed by our neutron-scattering measurements that the Ce doping affects neither the degree of in-plane orthorhombic distortion at low temperature nor the structural transition temperature $T_{\rm d1}$ from the high-temperature-tetragonal (HTT) phase to the LTO phase.
Therefore, the superconductivity occurs on the less-corrugated CuO$_2$ planes in LSCCO compared with LSCO with similar hole concentration.
Detailed results will be presented elsewhere\cite{enk}.
\section{Results and Discussion}
Figure 1 shows the constant-$\omega$ spectra measured at TOPAN.
Well-defined IC peaks were observed in all samples at $\omega$=8 meV (closed circles).
In contrast, no clear magnetic signal was observed in the spectra measured at $\omega$=4 meV (open circles) for the $x$=0.18 and $x$=0.24,
while the $x$=0.14 sample shows sharp IC peaks at this energy.
The incommensurability ($\delta$), corresponding to the half distance between the two peaks, is $\sim$ 0.12 r.l.u. in all samples. (1 r.l.u. corresponds to 1.67 ${\rm \AA}$$^{-1}$. )
Thus, $\delta$ is independent of $x$ in the measured concentration range.
The absence of a well-defined magnetic signal at low energy in the $x$=0.18 and 0.24 samples is consistent with the opening of a spin gap, which is reported for optimally-doped LSCO.
On the other hand, the persistence of the magnetic signal down to $\omega$=2 meV in the $x$=0.14 sample is rather analogous to the results for underdoped LSCO exhibiting a short-range magnetic order at low temperature.
We next turn our attention to the low-energy spin fluctuations below 2 meV and the possible static magnetic order.
An elastic magnetic signal was searched for in all samples using the thermal-neutron spectrometer, and the signal was observed only in the $x$=0.14 sample.
We then investigated the low-energy and static spin correlations in the $x$=0.14 sample with higher experimental resolution ($\Delta$$\omega$$\sim$0.2meV and $\Delta$$Q$$\sim$0.005${\rm \AA}^{-1}$) at the cold triple-axis spectrometer.
Figure 2 shows the elastic and the representative inelastic spectra.
Existence of IC magnetic peaks at $\omega$=0.5meV suggests a gap-less spin excitation spectrum in the $x$=0.14 sample (Fig.2 (b)).
Furthermore, as seen in Fig. 2(a), the elastic magnetic intensity was reconfirmed to appear at low temperatures.
From the temperature-dependence of peak-intensity, the onset temperature for the appearance of intensity ($T_m$) was determined to be $\sim$20 K.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7cm]{fig4.eps}
\end{center}
\caption{ The excitation-energy dependence of $\chi$$^{\prime\prime}$($\omega$) for (a) $x$=0.14, (b) $x$=0.18, and (c) $x$=0.24. Filled and open circles represent the data taken at TOPAN and HER, respectively. A clear gap structure exists in the spin excitation for $x$=0.18 and $x$=0.24, whereas $x$=0.14 shows a gap-like structure in the magnetic excitation.}
\label{f4}
\end{figure}
For the evaluation of the local spin susceptibility $\chi$$^{\prime\prime}$($\omega$), which corresponds to the $Q$-integrated intensity after correcting for the thermal factor, we take into account four isotropic peaks at (0.5$\pm$$\delta$ 0.5 0) and (0.5 0.5$\pm$$\delta$ 0) in the analysis.
Then the following Gaussian function, convolved with the experimental resolution, was fitted to the observed spectra,
\begin{displaymath}
\frac{{\chi}^{\prime\prime}(\omega)} {1-exp(-\hbar\omega/{\it k}_{B}{\it T})}
exp(-ln(2)(\mbox{\boldmath $Q$}-(\mbox{\boldmath $Q$}_{AF}\pm\mbox{\boldmath $q$}_{\delta}))^2/\kappa^2),
\end{displaymath}
where {\boldmath $Q$}$_{AF}$ and {\boldmath $q$}$_{\delta}$ represent AF wavevectors of (0.5 0.5 0) and IC wavevectors of ($\delta$ 0 0)/(0 $\delta$ 0), respectively, and
$\kappa$ is the peak-width (half-width at half-maximum).
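For reference, a minimal Python sketch of this fit model (before convolution with the instrumental resolution) is given below; the values of $\delta$, $\kappa$, the temperature, and the energy transfer are illustrative assumptions, and only the in-plane components ($h$, $k$) of the momentum transfer are considered:
\begin{verbatim}
import numpy as np

def model_intensity(Q, chi2, omega_meV, T_K, delta=0.12, kappa=0.02):
    # Four isotropic Gaussian peaks at (0.5+-delta, 0.5) and (0.5, 0.5+-delta),
    # HWHM kappa (r.l.u.), weighted by the thermal (detailed-balance) factor.
    kB = 0.08617                                   # meV / K
    bose = 1.0 / (1.0 - np.exp(-omega_meV / (kB * T_K)))
    peaks = np.array([[0.5 + delta, 0.5], [0.5 - delta, 0.5],
                      [0.5, 0.5 + delta], [0.5, 0.5 - delta]])
    d2 = np.sum((np.asarray(Q)[None, :2] - peaks) ** 2, axis=1)
    return chi2 * bose * np.sum(np.exp(-np.log(2) * d2 / kappa ** 2))

# Constant-energy scan along (h, 0.5, 0) at omega = 4 meV, T = 5 K (assumed)
for hh in np.linspace(0.3, 0.7, 9):
    print(round(hh, 2), model_intensity([hh, 0.5], 1.0, 4.0, 5.0))
\end{verbatim}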
$\omega$-dependence of $\chi$$^{\prime\prime}$($\omega$) for three samples is plotted in Fig. 3.
The results obtained for the $x$=0.14 sample at thermal and cold spectrometers are normalized with the values at 2 meV.
The magnetic excitation spectra in the $x$=0.18 and $x$=0.24 samples show a clear gap structure, while $\chi$$^{\prime\prime}$($\omega$) in the $x$=0.14 sample has finite values over the whole measured $\omega$-range below 12 meV.
In the latter, although the decrease of $\chi$$^{\prime\prime}$($\omega$) below $\sim$7 meV resembles the gap structure observed in the $x$=0.18 and 0.24 samples, the intensity starts to develop again below 2 meV with decreasing $\omega$.
Here, we discuss the structural effects on the spin correlations from the comparison between the results for LSCCO and LSCO with comparable hole concentrations.
First, we consider the gap-structure in the LSCCO with $x$=0.18 and 0.24 samples.
The energy spectra for both samples are nearly the same as that for optimally-doped LSCO, although the in-plane lattice distortion is smaller in the LSCCO system\cite{Lee03}.
Therefore, the gap-structure near the optimally doped region is not affected by a small lattice distortion.
We note that the appearance of the energy gap upon Ce doping into the Sr-overdoped sample is strong evidence of the reduction of the hole concentration by the formation of the Ce$^{4+}$ state. We hence emphasize that the absence of a spin gap in overdoped LSCO ($x$$\sim$0.25)\cite{Lee00,Wakimoto} is not an extrinsic chemical-disorder effect of the dopants but an inherent property of the overdoped region.
Second, the gap-like structure observed in the underdoped LSCCO with $x$=0.14 contrasts with the gapless structure in LSCO with $x$=0.10, although the hole concentrations of these two samples are comparable.
Difference in the $\chi$$^{\prime\prime}$($\omega$) between the two samples is prominent in the low-energy part below around 7 meV.
Therefore the absence of spin-gap in the underdoped LSCO sample can be ascribed by the slowing down of spin fluctuations.
We speculate carrier localization on the corrugated CuO$_2$ planes is one of the main reasons for the slowing down of spin fluctuations. The stronger carrier localization occurs when the carrier concentration is smaller.
When spin and charge stripes are formed, the carrier localization corresponds to the pinning of stripes by the lattice potential on the corrugated CuO$_2$ planes.
Pinning of stripes could take place even in the LTO phase, although the pinning effect in the LTO phase is weaker than that in the LTT phase.
In this context, the observed gap-like structure in $\chi$$^{\prime\prime}$($\omega$) for the LSCCO $x$=0.14 sample suggests that spin-gap states intrinsically exist even in the underdoped La214 system, although they are easily obscured by structural effects.
The low-energy spin fluctuations below 2 meV and the static short-range order observed in the present LSCCO $x$=0.14 sample would be induced by residual corrugation of the CuO$_{2}$ planes and/or disorder from the large amount of doped Sr and Ce ions\cite{Goko}.
Evidence for the existence of low-energy spin fluctuations below the gap energy in the LSCO system has recently been reported from inelastic neutron scattering measurements under magnetic fields\cite{Chang}.
To clarify the intrinsic spin correlations in the high-$T_c$ superconductors, further systematic neutron-scattering experiments are needed.
\section{Introduction}
As the title of this school indicates, a consistent quantum theory
of gravity is eventually needed to solve the fundamental cosmological
questions. These concern in particular the role of initial
conditions and a deeper understanding of processes such as inflation.
The presence of the singularity theorems in general relativity
prevents the formulation of viable initial conditions in the classical
theory. Moreover, the inflationary scenario can be successfully
implemented only if the cosmological no-hair conjecture is
imposed -- a conjecture which heavily relies on assumptions
about the physics at sub-Planckian scales.
It is generally assumed that a quantum theory of gravity can
cure these problems. This is not a logical necessity, though,
since there might exist classical theories which could achieve
the same. As will be discussed in my contribution, however,
one can put forward many arguments in favour of the quantisation
of gravity, which is why classical alternatives will not be considered
here.
Although a final quantum theory of gravity is still elusive,
there exist concrete approaches which are mature enough
to discuss their impact on cosmology. Here I shall focus on
conceptual, rather than technical, issues that one might expect
to play a role in any quantum theory of the gravitational field.
In fact, most of the existing approaches leave the basic structures
of quantum theory, such as its linearity, untouched.
Two aspects of quantum cosmology must be distinguished.
The first is concerned with the application of quantum theory
to the Universe as a whole and is independent of any particular
interaction. This raises such issues as the interpretation
of quantum theory for closed systems, where no external
measuring agency can be assumed to exist. In particular,
it must be clarified how and to what extent classical properties
emerge. The second aspect deals with the peculiarities
that enter through quantum aspects of the gravitational interaction.
Since gravity is the dominant interaction on the largest scales,
this is an important issue in cosmology. Both aspects will be
discussed in my contribution.
Since many features in quantum cosmology arise from
the application of standard quantum theory to the Universe
as a whole, I shall start in the next section with a
discussion of the lessons that can be learnt from
ordinary quantum theory. In particular, the
central issue of the quantum-to-classical transition
will be discussed at some length. Section~3 is then devoted
to full quantum cosmology: I start with giving precise
arguments why one must expect that the gravitational field
is of a quantum nature at the most fundamental level. I then
discuss the problem of time and related issues such as the
Hilbert-space problem. I also
devote some space to the central question of how to
impose boundary conditions properly in quantum cosmology.
The last section will then be concerned with the emergence
of a classical Universe from quantum cosmology.
I demonstrate how an approximate notion of
a time parameter can be recovered
from ``timeless'' quantum cosmology through some semiclassical
approximation. I then discuss at length the emergence of
a classical spacetime by decoherence. This is important for both
the consistency of the inflationary scenario as well
as for the classicality of primordial fluctuations which
can serve as seeds for galaxy formation and which can be
observed in the anisotropy spectrum of the cosmic
microwave background.
\section{Lessons from quantum theory}
\subsection{Superposition principle and ``measurements''}
The superposition principle lies at the heart of quantum theory.
{}From a conceptual point of view, it is appropriate to separate
it into a kinematical and a dynamical version (Giulini et al.~1996):
\begin{itemize}
\item {\em Kinematical version}: If $\Psi_1$ and $\Psi_2$ are physical
states, then $\alpha\Psi_1+\beta\Psi_2$, where $\alpha$ and
$\beta$ are complex numbers, is again a physical state.
\item {\em Dynamical version}: If $\Psi_1(t)$ and $\Psi_2(t)$
are solutions of the Schr\"odinger equation, then
$\alpha\Psi_1(t)+\beta\Psi_2(t)$ is again a solution of the
Schr\"odinger equation.
\end{itemize}
These features give rise to the {\em nonseparability} of quantum
theory. If interactions between systems are present, the
emergence of entangled states is unavoidable. As
Schr\"odinger (1935) put it:
\begin{quote}
I would not call that {\em one} but rather {\em the}
characteristic trait of quantum mechanics, the one that enforces
its entire departure from classical lines of thought. By the
interaction the two representatives (or $\psi$-functions)
have become entangled. \ldots Another way of expressing the
peculiar situation is: the best possible knowledge of a
{\em whole} does not necessarily include the best possible
knowledge of all its {\em parts}, even though they may be
entirely separated \ldots
\end{quote}
Because of the superposition principle, quantum states
which mimic classical states (for example, by being localised),
form only a tiny subset of all possible states. Up to now,
no violation of the superposition principle has been observed
in quantum-mechanical experiments, and the only question is
why we observe classical states at all. After all, one would
expect the superposition principle to have unrestricted
validity, since also macroscopic objects are composed
of atoms.
The power of the superposition principle was already
noted by von Neumann in 1932 when he tried to describe
the measurement process consistently in quantum terms.
He considers an interaction between a system and a (macroscopic)
apparatus (cf. Giulini et al.~1996).
Let the states of the measured system which are
discriminated by the apparatus be denoted by $|n\rangle$, then an
appropriate interaction Hamiltonian has the form
\begin{equation} H_{int} =\sum_n|n\rangle\langle n| \otimes\hat{A}_n\ . \end{equation}
The operators $\hat{A}_n$, acting on the states of the apparatus, are
rather
arbitrary, but must of course depend on the ``quantum number'' $n$.
Note that the measured ``observable'' is dynamically defined by
the system-apparatus interaction and there is no reason to introduce
it
axiomatically (or as an additional concept).
If the
measured system is initially in the state $|n\rangle$ and the device
in some initial state $|\Phi_0\rangle$,
the evolution according to the Schr\"odinger equation
with Hamiltonian (1) reads
\begin{eqnarray} |n\rangle|\Phi_0\rangle \stackrel{t}{\longrightarrow}
\exp\left(-\mbox{i} H_{int}t\right)|n\rangle|\Phi_0\rangle
&=& |n\rangle\exp\left(-\mbox{i} \hat{A}_nt\right)|\Phi_0\rangle\nonumber
\\
&=:& |n\rangle|\Phi_n(t)\rangle\ . \end{eqnarray}
The resulting apparatus states $|\Phi_n(t)\rangle$ are usually called
``pointer positions''.
An analogue of (2) can also be
written down in classical physics. The essential new quantum features
come into play when we consider a {\em superposition} of different
eigenstates (of the measured ``observable'') as initial state. The
linearity of time evolution immediately leads to
\begin{equation} \left(\sum_n c_n|n\rangle\right)|\Phi_0\rangle
\stackrel{t}\longrightarrow\sum_n c_n|n\rangle
|\Phi_n(t)\rangle\ . \label{sup} \end{equation}
This state does not, however, correspond to a definite
measurement result -- it contains a ``weird'' superposition
of macroscopic pointer positions! This motivated von Neumann
to introduce a ``collapse'' of the wave function, because he
saw no other possibility to adapt the formalism to experience.
Only rather recently have there been attempts to
give a concrete dynamical formulation of this collapse
(see, e.g., Chap.~8 in Giulini et al. (1996)). However, none
of these collapse models has yet been experimentally confirmed.
In the following I shall review a concept that enables one
to reconcile quantum theory with experience
without introducing an explicit collapse; strangely enough, it is
the superposition principle itself that leads to
classical properties.
\subsection{Decoherence: Concepts, examples, experiments}
The crucial observation is that macroscopic objects
cannot be considered as being isolated -- they are
unavoidably coupled to ubiquitous degrees of freedom of
their environment, leading to quantum entanglement.
As will be briefly discussed in the
course of this subsection, this gives rise to classical
properties for such objects -- a process known
as {\em decoherence}. This was first discussed by Zeh in the
seventies and later elaborated by many authors;
a comprehensive treatment is given by Giulini et al. (1996),
other reviews include Zurek (1991), Kiefer and Joos (1999),
see also the contributions to the volume Blanchard et al. (1999).
Denoting the environmental states with $\vert{\cal E}_n\rangle$,
the interaction with system and apparatus yields instead of
(\ref{sup}) a superposition of the type
\begin{equation}
\left(\sum_n c_n|n\rangle\right)|\Phi_0\rangle|{\cal E}_0\rangle
\stackrel{t}\longrightarrow\sum_n c_n|n\rangle
|\Phi_n\rangle|{\cal E}_n\rangle\ . \label{supe} \end{equation}
This is again a macroscopic superposition, involving
a tremendous number of degrees of freedom. The crucial point now is,
however, that most of the environmental degrees of freedom
are not amenable to observation. If we ask what can be seen
when observing only system and apparatus, we need --
according to the quantum rules -- to calculate the reduced density
matrix $\rho$ that is obtained from (\ref{supe}) upon tracing
out the environmental degrees of freedom.
If the environmental states are approximately orthogonal
(which is the generic case),
\begin{equation} \langle{\cal E}_m|{\cal E}_n\rangle \approx \delta_{mn}\ , \end{equation}
the density
matrix becomes approximately diagonal in the ``pointer basis'',
\begin{equation} \rho_S \approx \sum_n|c_n|^2|n\rangle\langle n|
\otimes |\Phi_n\rangle\langle\Phi_n|
\ . \end{equation}
Thus, the result of this interaction is a density matrix which seems
to describe an ensemble of different outcomes $n$ with the
respective probabilities. One must be careful in analysing its
interpretation,
however: This density matrix only corresponds to an {\it apparent}
ensemble,
not a genuine ensemble of quantum states.
What can safely be stated is the fact that interference terms
(non-diagonal elements) are absent locally, although
they are still present in the total system, see (\ref{supe}).
The coherence present in the initial system state in (3) can no longer
be observed;
it is {\it delocalised} into the larger system. As is well known, any
interpretation of a superposition as an ensemble
of components can be disproved experimentally
by creating interference effects. The same is true for
the situation described in (3). For
example, the evolution could {\it in principle} be reversed. Needless
to say, such a reversal is experimentally extremely difficult, but
the interpretation and consistency of a physical theory must not depend
on our present technical abilities. Nevertheless, one often finds
explicit
or implicit statements to the effect that the above processes are
equivalent to the collapse of the wave function (or even solve
the measurement problem). Such statements are certainly unfounded. What
can safely be said is that coherence between the subspaces of the
Hilbert space spanned by $|n\rangle$ can no longer be observed
in the system considered, {\it if} the
process described by (3) is practically irreversible.
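To make the role of the partial trace explicit, the following minimal numerical sketch (an illustration added here, with arbitrary amplitudes and an arbitrary environment overlap) builds a two-state version of (\ref{supe}), omitting the apparatus, and traces out the environment; the off-diagonal elements of the local density matrix are suppressed by the overlap $\langle{\cal E}_0|{\cal E}_1\rangle$, in accordance with (5) and (6).
\begin{verbatim}
import numpy as np

c  = np.array([1.0, 1.0]) / np.sqrt(2.0)   # system amplitudes c_n
E0 = np.array([1.0, 0.0])                  # environment state E_0
E1 = np.array([0.05, 1.0])                 # nearly orthogonal state E_1
E1 = E1 / np.linalg.norm(E1)

# Total state  sum_n c_n |n>|E_n>  (the apparatus is omitted for brevity)
basis = np.eye(2)
psi = c[0] * np.kron(basis[0], E0) + c[1] * np.kron(basis[1], E1)

# Reduced density matrix of the system: trace over the environment
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho, axis1=1, axis2=3)
print(np.round(rho_sys, 3))   # off-diagonals ~ c_0 c_1 <E_0|E_1> ~ 0.025
\end{verbatim}
Making the two environment states exactly orthogonal reproduces the fully diagonal density matrix of (6), while choosing them parallel leaves the local coherence intact.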
The essential implications are twofold: First, processes of the kind
(3) do happen
frequently and unavoidably for all macroscopic objects. Second, these
processes are irreversible in practically all
realistic situations. In a normal measurement process, the
interaction and the state of the apparatus are controllable to some
extent (for example, the initial state of the apparatus is known to the
experimenter). In the case of decoherence, typically the initial state
is not known in detail (a standard example is interaction with
thermal radiation), but the consequences for the local density
matrix are the same: If the environment is described by an ensemble,
each member of this ensemble can act in the way described above.
A complete treatment of realistic cases has to include the Hamiltonian
governing the evolution of the system itself (as well as that of the
environment). The exact dynamics of a subsystem is hardly manageable
(formally it is given by a complicated integro-differential
equation, see Chap.~7 of Giulini et al. 1996).
Nevertheless, we can find important approximate solutions
in some simplifying cases. One example is concerned with
localisation through scattering processes
and will be briefly discussed in the following.
My treatment will closely follow Kiefer and Joos (1999).
Why do macroscopic objects always appear localised in space? Coherence
between macroscopically different positions is destroyed {\it very}
rapidly because
of the strong influence of scattering processes. The formal description
may proceed as follows. Let $|x\rangle$ be the position eigenstate
of a macroscopic object, and $|\chi\rangle$ the state of the
incoming particle.
Following the von Neumann scheme, the scattering of
such particles off an object located at position $x$ may be written as
\begin{equation} |x\rangle|\chi\rangle \stackrel{t}{\longrightarrow}
|x\rangle|\chi_x\rangle=|x\rangle S_x|\chi\rangle\ , \end{equation}
where the scattered state may conveniently be calculated by means of
an appropriate S-matrix. For the more general initial state of a wave
packet we have then
\begin{equation} \int\mbox{d}^3x\ \varphi(x)|x\rangle|\chi\rangle
\stackrel{t}{\longrightarrow}\int\mbox{d}^3x\ \varphi(x)|x\rangle
S_x|\chi\rangle\ , \end{equation}
and the reduced density matrix describing our object changes
into
\begin{equation} \rho(x,x')=\varphi(x)\varphi^*(x')
\left\langle\chi|S_{x'}^{\dagger}S_x|\chi\right\rangle\ .
\label{trace} \end{equation}
These steps correspond to the general steps discussed above.
Of course, a single scattering process will usually not resolve a small
distance, so
in most cases the matrix element on the right-hand side
of (\ref{trace}) will be close to unity.
But if we add the contributions of many scattering processes, an
exponential damping of spatial coherence results:
\begin{equation} \rho(x,x',t)= \rho(x,x',0)\exp\left\{-\Lambda t(x-x')^2\right\}\ .
\label{dm} \end{equation}
The strength of this effect is described by a single parameter $\Lambda$
which may be called the ``localisation rate'' and is given by
\begin{equation} \Lambda= \frac{k^2Nv\sigma_{eff}}{V}\ .\end{equation}
Here, $k$ is the wave number of the incoming particles, $Nv/V$
the flux, and $\sigma_{eff}$ is of the order of the total cross section
(for details see Joos and Zeh 1985 or
Sect. 3.2.1 and Appendix 1 in Giulini et al. 1996).
Some values of $\Lambda$ are given in the Table.
\begin{table}[htb]
\caption[ ]{Localisation rate $\Lambda$ in $\mbox{cm}^{-2}
\mbox{s}^{-1}$ for three sizes of ``dust particles'' and various
types of scattering processes (from Joos and Zeh~1985).
This quantity measures how fast interference between different
positions disappears as a function of distance in the course of
time, see (\ref{dm}).}
\begin{flushleft}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|lll}
\hline
{} & \ $a=10^{-3}\mbox{cm}$ &\ $a=10^{-5}\mbox{cm}$ &\
$a=10^{-6}\mbox{cm}$\\
{} & \ dust particle &\ dust particle &\ large molecule
\\ \hline
Cosmic background radiation &\ $10^{6}$& $10^{-6}$ &$10^{-12}$ \\
300 K photons &\ $10^{19}$ & $10^{12}$ & $10^6$ \\
Sunlight (on earth) &\ $10^{21}$ & $10^{17}$ & $10^{13}$ \\
Air molecules & \ $10^{36}$ & $10^{32}$ & $10^{30}$ \\
Laboratory vacuum & \ $10^{23}$ & $10^{19}$ & $10^{17}$\\
($10^3$ particles/$\mbox{cm}^3$) & {}&{}&\\
\hline
\end{tabular}
\renewcommand{\arraystretch}{1}
\end{flushleft}\end{table}
Most of the numbers in the table are quite large, showing the extremely
strong coupling of macroscopic objects, such as dust particles, to their
natural environment. Even in intergalactic space, the 3K background
radiation cannot be neglected.
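As a rough orientation for how (\ref{dm}) and the expression for $\Lambda$ combine, the following sketch evaluates the damping factor for freely chosen input values; the numbers below are placeholders for illustration and are not the parameters underlying the Table.
\begin{verbatim}
import numpy as np

k         = 1.0e5    # wave number of the scattered particles [1/cm]
flux      = 1.0e20   # N*v/V, incoming particle flux [1/(cm^2 s)]
sigma_eff = 1.0e-10  # effective cross section [cm^2]

Lam = k**2 * flux * sigma_eff   # localisation rate Lambda
dx  = 1.0e-4                    # separation |x - x'| of 1 micron [cm]

for t in (1e-12, 1e-9, 1e-6):   # damping factor exp(-Lambda t (x-x')^2)
    print(t, np.exp(-Lam * t * dx**2))
\end{verbatim}
For these (deliberately arbitrary) inputs one finds $\Lambda=10^{20}\,\mbox{cm}^{-2}\mbox{s}^{-1}$, and coherence over one micron is damped away within a few picoseconds.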
In a general treatment one must combine the decohering influence
of scattering processes with the internal dynamics of the system.
This leads to master equations for the reduced density matrix,
which can be solved explicitly in simple cases. Let me mention
the example where the internal dynamics is given by the
free Hamiltonian and consider the coherence length, i.e.
the non-diagonal part of the density matrix.
According to the Schr\"odinger equation, a free wave packet would
spread,
thereby increasing its size and extending its coherence properties over
a larger region of space. Decoherence is expected to counteract this
behaviour and reduce the coherence length. This can be seen in the
solution shown in Fig. 1, where the time dependence of the coherence
length (the width of the density matrix in the off-diagonal direction)
is
plotted for a truly free particle
(obeying a Schr\"odinger equation) and also for increasing strength of
decoherence. For large times the spreading of the
wave packet no longer occurs and the coherence length always decreases
proportional to $1/\sqrt{\Lambda t}$. More details and
more complicated examples can be found in Giulini et al. (1996).
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{fig35.eps}
\end{center}
\caption[]{Time dependence of coherence length. It is a measure of
the spatial extension over which the object can show interference
effects. Except for zero coupling ($\Lambda=0$), the coherence
length always decreases for large times. From Giulini et al. (1996).}
\end{figure}
It is not only the centre-of-mass position of dust particles that
becomes ``classical'' via
decoherence. The spatial structure of molecules represents another most
important example. Consider a simple model of a chiral molecule (Fig.
2).
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{fig315.eps}
\end{center}
\caption[]{Typical structure of an optically active, chiral molecule.
Both versions are mirror-images of each other and are not connected
by a proper rotation, if the four elements are different.}
\end{figure}
Right- and left-handed versions both have a rather well-defined spatial
structure, whereas the ground state is - for symmetry reasons -
a superposition of both chiral states. These chiral configurations are
usually separated by a tunneling barrier (compare Fig. 3) which is so
high that under
normal circumstances tunneling is very improbable, as was already
shown by Hund in 1929. But this alone does not explain why chiral
molecules are never found in energy eigenstates! Only the
interaction with the environment can lead to the localisation
and the emergence of a spatial structure. We shall encounter
a similar case of ``symmetry breaking'' in the case of
quantum cosmology, see Sect.~4.2 below.
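A minimal two-level caricature of this situation (the tunneling matrix element below is an arbitrary illustrative number, not a molecular calculation) already shows the point: in the chirality basis $\{|L\rangle,|R\rangle\}$ the energy eigenstates are the symmetric and antisymmetric combinations, so the localised chiral configurations are superpositions of the two lowest eigenstates rather than eigenstates themselves.
\begin{verbatim}
import numpy as np

Delta = 1.0e-3                   # tunneling matrix element (arbitrary units)
H = np.array([[0.0, -Delta],     # Hamiltonian in the chirality basis {|L>, |R>}
              [-Delta, 0.0]])

vals, vecs = np.linalg.eigh(H)
print(vals)      # ground state -Delta, first excited state +Delta
print(vecs.T)    # eigenstates ~ (|L> +- |R>)/sqrt(2), delocalised over both wells
\end{verbatim}
Only the coupling to the environment singles out $|L\rangle$ and $|R\rangle$ as the robust, ``classical'' configurations.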
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{fig316.eps}
\end{center}
\caption[]{Effective potential for the inversion coordinate in a model
for a chiral molecule and the two lowest-lying eigenstates. The
ground state is symmetrically distributed over the two wells. Only
linear combinations of the two lowest-lying states are localised and
correspond to a classical configuration.}
\end{figure}
I want to emphasise that decoherence should not be confused
with thermalisation, although they sometimes occur together.
In general, decoherence and relaxation have drastically different
timescales -- for a typical macroscopic situation decoherence
is faster by forty orders of magnitude.
This short decoherence timescale leads to the impression
of discontinuities, e.g. ``quantum jumps'',
although the underlying dynamics, the Schr\"odinger equation,
is continuous. Therefore, to come up with a precise
experimental test of decoherence, one must spend
considerable effort to bring the decoherence timescale into
a regime where it is comparable with other timescales of
the system. This was achieved by a quantum-optical experiment
that was performed in Paris in 1996, see Haroche (1998) for a review.
What is done in this experiment? The role of the system is
played by a rubidium atom and its states $|n\rangle$ are two
Rydberg states $|+\rangle$ and $|-\rangle$. This atom is sent into a
high-Q cavity and brought into interaction with an
electromagnetic field. This field plays the role of the
``apparatus'' and its pointer states $|\Phi_n\rangle$ are coherent
states $|\alpha_+\rangle$ and $|\alpha_-\rangle$ which are
correlated with the system states $|+\rangle$ and
$|-\rangle$, respectively. The atom is brought into a superposition
of $|+\rangle$ and $|-\rangle$ which it imparts on the
coherent states of the electromagnetic field; the latter is
then in a superposition of $|\alpha_+\rangle$
and $|\alpha_-\rangle$, which
resembles a Schr\"odinger-cat state. The role of the
environment is played by mirror defects and the corresponding
environmental states are correlated with the respective
components of the field superposition. One would thus expect
that decoherence turns this superposition locally into a
mixture. The decoherence time is calculated to be
$t_D\approx t_R/\bar{n}$, where $t_R$ is the relaxation time
(the field-energy decay time) and $\bar{n}$ is the average photon number
in the cavity. In the experiment $t_R$ is about 160 microseconds,
and $\bar{n}\approx 3.3$. These values enable one to
monitor the process of decoherence as a process in time.
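Inserting the quoted values gives the rough estimate
\begin{displaymath}
t_D\approx\frac{t_R}{\bar{n}}\approx\frac{160\,\mu\mbox{s}}{3.3}\approx 48\,\mu\mbox{s}\ ,
\end{displaymath}
i.e. a decoherence time of the same order as, but distinctly shorter than, the relaxation time.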
The decay of field coherence is measured
by sending a second atom with different
delay times into the cavity, playing the
role of a ``quantum mouse''; interference fringes are observed
through two-atom correlation signals. The experimental results
are found to be in complete agreement with the theoretical
prediction. If a value of $\bar{n}\approx 10$ is chosen,
decoherence is already so rapid that no coherence can be seen.
This makes it obvious why decoherence for
macroscopic objects happens ``instantaneously'' for all practical purposes.
\subsection{On the interpretation of quantum theory\protect
\footnote{This is adapted from Sect.~4 of
Kiefer and Joos (1999).}}
It would have been possible to study
the emergence of classical properties by decoherence
already in the early
days of quantum mechanics and, in fact, the contributions of
Landau, Mott, and Heisenberg at the end of the twenties can be
interpreted
as a first step in this direction. Why did one not go further
at that time? One major reason was certainly the advent of the
``Copenhagen doctrine'' that was sufficient to apply the formalism
of quantum theory on a pragmatic level. In addition, the idea
that objects can be isolated from their environment had been so deeply
rooted since the time of Galileo that the {\em quantitative} aspect
of decoherence was largely underestimated. This quantitative
aspect was only borne out by detailed calculations, some of which
I have reviewed above. Moreover, direct experimental verification
was only possible quite recently.
What are the achievements of the decoherence mechanism?
Decoherence can certainly explain why and how {\em within}
quantum theory certain objects (including fields) {\em appear}
classical to ``local'' observers. It can, of course, not explain
why there are such local observers at all. The classical properties
are defined by the {\em pointer basis} for the object,
which is distinguished by the interaction with the environment
and which is sufficiently stable in time. It is important to
emphasise that classical properties are {\em not} an a priori
attribute of objects, but only come into being through the
interaction with the environment.
Because decoherence acts, for macroscopic systems,
on an extremely short time scale, it appears to act discontinuously,
although in reality decoherence is a smooth process.
This is why ``events'', ``particles'', or ``quantum jumps'' are
observed. Only in the special arrangement of experiments,
where systems are used that lie at the border between microscopic
and macroscopic, can this smooth nature of decoherence be observed.
Since decoherence studies only employ the standard formalism of
quantum theory, all components characterising macroscopically
different situations are still present in the total quantum state
which includes system {\em and} environment, although they cannot be
observed locally. Whether there is a
real dynamical ``collapse'' of the total
state into one definite component or not (which would lead to an
Everett interpretation)
is at present an undecided question.
Since this may not experimentally be decided in the near future,
it has been declared a ``matter of taste'' (Zeh~1997).
The most important feature of decoherence besides its ubiquity
is its {\em irreversible} nature. Due to the interaction with the
environment, the quantum mechanical entanglement {\em increases}
with time. Therefore, the local entropy for subsystems increases, too,
since information residing in correlations is locally unobservable.
A natural prerequisite for any such irreversible behaviour,
most pronounced in the Second Law of thermodynamics, is a special
initial condition of very low entropy. Penrose has
demonstrated convincingly
that this is due to the extremely special nature of the
big bang. Can this peculiarity be explained in any satisfactory way?
Convincing arguments have been put forward that this can only be
achieved within a quantum theory of gravity (Zeh~1999).
This leads directly into the realm of quantum cosmology
which is the topic of the following sections.
\section{Quantum cosmology}
\subsection{Why spacetime cannot be classical}
Quantum cosmology is the application of quantum theory
to the Universe as a whole. Is such a theory possible or
even -- as I want to argue here -- needed for consistency?
In the first section I have stressed the importance of the
superposition principle and the ensuing quantum entanglement
with environmental degrees of freedom. Since the environment is
in general also coupled to another environment, this leads
ultimately to the whole Universe as the only closed quantum system
in the strict sense. Therefore one must take quantum cosmology
seriously. Since gravity is the dominant interaction
on the largest scales, one faces the problem of quantising
the gravitational field. In the following I shall list
some arguments that can be put forward in support of
such a quantisation, cf. Kiefer (1999):
\begin{itemize}
\item {\em Singularity theorems of general relativity}:
Under very general conditions, the occurrence of a singularity,
and therefore the breakdown of the theory,
is unavoidable.
A more fundamental theory is therefore needed to
overcome these shortcomings, and the general expectation is that this
fundamental theory is a quantum theory of gravity.
\item {\em Initial conditions in cosmology}: This is related to the
singularity theorems,
since they predict the existence of a
``big bang'' where the known laws of physics break down.
To fully understand the evolution of our Universe, its initial
state must be amenable to a physical description.
\item {\em Unification}: Apart from general relativity, all known
fundamental theories are {\em quantum} theories. It would thus seem
awkward if gravity, which couples to all other fields,
should remain the only classical entity in a fundamental
description. Moreover, it seems that classical fields cannot
be coupled to quantum fields without leading to inconsistencies
(Bohr-Rosenfeld type of analysis).
\item {\em Gravity as a regulator}: Many models indicate that the consistent
inclusion of gravity in a quantum framework automatically
eliminates the divergences that plague ordinary quantum
field theory.
\item {\em Problem of time}: In ordinary quantum theory, the presence
of an external time parameter $t$ is crucial for the interpretation
of the theory: ``Measurements'' take place at a certain time,
matrix elements are evaluated at fixed times, and the norm
of the wave function is conserved {\em in} time.
In general relativity,
on the other hand, time
as part of spacetime is a dynamical quantity. Both concepts of time
must therefore be modified at a fundamental level.
This will be discussed in some detail in the next subsection.
\end{itemize}
The task of quantising gravity has not yet been accomplished,
but approaches exist within which sensible questions can be asked.
Two approaches are at the centre of current research:
Superstring theory (or M-theory) and canonical quantum gravity.
Superstring theory is much more ambitious and aims at a unification
of all interactions within a single quantum framework
(a recent overview is Sen~1998).
Canonical quantum gravity, on the other hand, attempts to
construct a consistent, non-perturbative, quantum theory of the
gravitational field on its own. This is done through the application
of standard quantisation rules to the general theory of relativity.
The fundamental length scales that are connected with these theories
are the Planck length, $l_p=\sqrt{G\hbar/c^3}$, or the
string length, $l_s$. It is generally assumed that the string length
is somewhat larger than the Planck length. Although not fully
established in quantitative detail, canonical quantum gravity
should follow from superstring theory for scales $l\gg l_s>l_p$.
One argument for this derives directly from the kinematical nonlocality
of quantum theory: Quantum effects are not a priori restricted
to certain scales. For example, the rather large mass of a dust grain
cannot by itself be used as an argument for classicality. Rather,
the process of decoherence through the environment can explain
{\em why} quantum effects are negligible for this object, see the
discussion in Sect.~2.2, in particular the quantitative
aspects as they manifest themselves in the Table.
Analogously, the smallness of $l_p$ or $l_s$ cannot by itself be
used to argue that quantum-gravitational effects are small. Rather, this
should be an emergent fact to be justified by decoherence
(see Sect.~4). Since for scales larger than $l_p$ or $l_s$
general relativity is an excellent approximation, it must be
clear that the canonical quantum theory must be an excellent
approximation, too. The canonical theory might or might not
exist on a full, non-perturbative level, but it should definitely
exist as an effective theory on large scales.
It seems therefore sufficient to base the following discussion
on canonical quantum gravity, although I want to emphasise that
the same conceptual issues arise in superstring theory.
Depending on the choice of the canonical variables,
the canonical theory can be subdivided into the following
approaches:
\begin{itemize}
\item {\em Quantum geometrodynamics}: This is the traditional
approach that uses the three-dimensional metric as
its configuration variable.
\item {\em Quantum connection dynamics}: The configuration variable
is a non-abelian connection that has many similarities
to gauge theories.
\item {\em Quantum loop dynamics}: The configuration variable is
the trace of a holonomy with respect to a loop, analogous
to a Wilson loop.
\end{itemize}
There exists a connection between the last two approaches,
whereas their connection to the first approach is less clear.
For the above reason one should, however,
expect that a relation between all approaches exists
at least on a semiclassical level. Here, I shall restrict myself to quantum
geometrodynamics, since this seems to be the most
appropriate language for a discussion of the conceptual issues.
However, most of this discussion should find its counterpart
in the other approaches, too. A thorough discussion
of these other approaches can be found in many
contributions to this volume, see also Ashtekar (1999).
\subsection{Problem of time}
``Quantisation'' is a set of heuristic recipes which allows one
to guess the structure of the quantum theory from the
underlying classical theory.
In the canonical approach, the first step is to identify the
canonical variables, the configuration and momentum variables
of the classical theory. Their Poisson brackets are then
translated into quantum operators. As a well-known theorem
by Groenewold and van~Hove states, such a translation is not
possible for most of the other variables.
Details of the canonical formalism for general relativity
can be found in Isham (1992), Kucha\v{r} (1992), and the
references therein, and I shall give here only a brief
introduction. For the definition of the canonical momenta,
a time coordinate has to be distinguished. This spoils the
explicit four-dimensional covariance of general relativity --
the theory is reformulated to give a formulation for the
dynamics of {\em three}-dimensional hypersurfaces. It is then not
surprising that the configuration variable is the {\em three}-dimensional
metric, $h_{ab}({\vec x})$, on such hypersurfaces. The three-metric
has six independent degrees of freedom. The remaining four
components of the spacetime metric play the role of non-dynamical
Lagrange multipliers called lapse function, $N^{\perp}({\vec x})$,
and shift vector, $N^a({\vec x})$ -- they parametrise, respectively, the
way in which consecutive hypersurfaces are chosen and
how the coordinates are selected {\em on} a hypersurface.
The momenta canonically conjugated to the three-metric,
$p^{ab}({\vec x})$,
form a tensor which is linearly related to the second fundamental form
associated with a hypersurface -- specifying the way in which
the hypersurface is embedded into the fourth dimension.
In the quantum theory, the canonical variables are formally
turned into operators obeying the commutation relations
\begin{equation}
[\hat{h}_{ab}({\vec x}),\hat{p}^{cd}({\vec y})]
=\mbox{i}\hbar\delta^c_{(a}\delta^d_{b)}\delta({\vec x},{\vec y})\ .
\end{equation}
In a (formal) functional Schr\"odinger representation, the
canonical operators act on wave functionals $\Psi$ depending
on the three-metric,
\begin{eqnarray}
\hat{h}_{ab}({\vec x}) \Psi[h_{ab}({\vec x})]&=&
h_{ab}({\vec x}) \Psi[h_{ab}({\vec x})]\\
\hat{p}^{cd}({\vec x}) \Psi[h_{ab}({\vec x})] &=&
\frac{\hbar}{\mbox{i}}\frac{\delta}{\delta h_{cd}({\vec x})}
\Psi[h_{ab}({\vec x})]\ .
\end{eqnarray}
A central feature of canonical gravity is the existence of
constraints. Because of the four-dimensional diffeomorphism
invariance of general relativity, these are four constraints
per space point, one Hamiltonian constraint,
\begin{equation}
\hat{\cal H}_{\perp}\Psi=0\ , \label{ham}
\end{equation}
and three diffeomorphism constraints,
\begin{equation}
\hat{\cal H}_a\Psi=0\ . \label{diffeo}
\end{equation}
The total Hamiltonian is obtained by integration\footnote{In
the following I shall restrict myself to closed compact spaces;
otherwise, the Hamiltonian has to be augmented by surface terms
such as the ADM energy.},
\begin{equation}
\hat{H}=\int\mbox{d}^3x\ (N^{\perp}\hat{\cal H}_{\perp}
+N^a\hat{\cal H}_a), \ \label{totham}
\end{equation}
where $N^{\perp}$ and $N^a$ denote again lapse function and
shift vector, respectively. The constraints then enforce that
the wave functional be annihilated by the total Hamiltonian,
\begin{equation}
\hat{H}\Psi=0\ . \label{wdw}
\end{equation}
The {\em Wheeler-DeWitt} equation (\ref{wdw}) is the central equation
of canonical quantum gravity. This also holds for quantum connection
dynamics and quantum loop dynamics, although the configuration
variables are different.
The Wheeler-DeWitt equation (\ref{wdw}) possesses the remarkable
property that it does not depend on any external time parameter --
the $t$ of the time-dependent Schr\"odinger equation has totally
disappeared, and (\ref{wdw}) looks like a stationary zero-energy
Schr\"odinger equation. How can this be understood?
In classical canonical gravity, a spacetime can be represented
as a ``trajectory'' in configuration space -- the space of all
three-metrics. Although time coordinates have no intrinsic meaning
in classical general relativity either, they can nevertheless
be used to parametrise this trajectory in an essentially arbitrary
way. Since no trajectories exist anymore in quantum theory,
no spacetime exists at the most fundamental level, and therefore also
no time coordinates are available to parametrise any trajectory.
A simple analogy is provided by the relativistic particle:
In the classical theory there is a trajectory which can be
parametrised by some essentially arbitrary parameter, e.g.
the proper time. Reparametrisation invariance
leads to one constraint, $p^2+m^2=0$. In the quantum theory,
no trajectory exists anymore, the wave function obeys
the Klein-Gordon equation as an analogue of (\ref{wdw}),
and any trace of a classical time parameter is lost
(although, of course, for the relativistic particle the
background Minkowski spacetime is present, which is
not the case for gravity).
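Explicitly, replacing $p_\mu\to-\mbox{i}\hbar\partial_\mu$ in the constraint $p^2+m^2=0$ yields the Klein-Gordon equation,
\begin{displaymath}
\left(-\hbar^2\Box+m^2\right)\psi(x)=0\ ,
\end{displaymath}
in which, just as in (\ref{wdw}), no external time parameter appears.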
Since the presence of an external time parameter is very
important in quantum mechanics -- giving rise to such important
notions as the unitarity of states --, it is a priori not clear
how to interpret a ``timeless'' equation of the form (\ref{wdw}),
cf. Barbour (1997) and Kiefer (1997). This is called the {\em problem of time}.
A related issue is the {\em Hilbert-space problem}: What is the appropriate
inner product that encodes the probability interpretation
and that is conserved in time?
Before discussing some of the options, it is very useful to first
have a look at the explicit structure of (\ref{ham}) and
(\ref{diffeo}). Introducing the Planck mass $m_p=(16\pi G)^{-1/2}$
and setting $\hbar=1$, the constraint equations read
\begin{eqnarray}
&&\mbox{\vphantom{$\left\{\frac{L^{L^{L^a}}_{L}}
{L^L_L g}\right\}$}}^{\prime\prime}\!
\left\{-\frac 1{2 m_p^2}
G_{ab,cd}\frac{\delta^2}
{\delta h_{ab}\delta h_{cd}}-m_p^2\,\sqrt{h}\,{}^3\!R +
\hat{H}{}_{\perp}^{\rm mat}\right\}^{\prime\prime}
|{\mbox{\boldmath$\Psi$}}[h_{ab}]\big>=0, \label{ham1}
\\
&&\mbox{\vphantom{$\left\{\frac{L^{L^{L^a}}_{L}}
{L^L_L g}\right\}$}}^{\prime\prime}\!
\left\{-\frac{2}{\mbox{i}} h_{ab}\nabla_c
\frac{\delta}{\delta h_{bc}}+\hat{H}{}_a^{\rm mat}
\right\}^{\prime\prime}
|{\mbox{\boldmath$\Psi$}}[h_{ab}]\big>=0. \label{diffeo1}
\end{eqnarray}
The inverted commas indicate that these are formal equations and that
the factor ordering and regularisation problems have not been
addressed. In these equations, $^3\!R$ and $\sqrt{h}$ denote the
three-dimensional Ricci scalar and the square root of the
determinant of the
three-metric, respectively, and a cosmological term has not been
considered here. The quantity $G_{ab,cd}=h^{-1/2}
(h_{ac}h_{bd}+h_{ad}h_{bc}-h_{ab}h_{cd})$ plays the role of
a metric in configuration space (``DeWitt metric''), and $\nabla_c$
denotes the covariant spatial derivative.
The matter parts of the constraints, $\hat{H}_{\perp}^{\rm mat}$ and
$\hat{H}_a^{\rm mat}$, depend on the concrete choice of matter action
which we shall not specify here. Its form can be strongly constrained
from general principles such as ultralocality (Teitelboim~1980).
A hat denotes a quantum operator in the standard Hilbert space
of matter fields, while the bra and ket notation refers to
the corresponding states.
The second equation (\ref{diffeo1}) expresses the fact that
the wave functional is invariant with respect to three-dimensional
diffeomorphisms (``coordinate transformations''). It is
for this reason that one often writes $\Psi[^3\!{\cal G}]$,
where the argument denotes the coordinate-invariant
three-{\em geometry}. Since there is, however, no explicit
operator available which acts directly on $\Psi[^3\!{\cal G}]$,
this is only a formal representation, and in concrete
discussions one has to work with (\ref{ham1}) and
(\ref{diffeo1}). It must also be remarked that this invariance
holds only for diffeomorphisms that are connected with the
identity; for ``large'' diffeomorphisms, a so-called
$\theta$-structure may arise, similar to the $\theta$-angle
in QCD, see e.g. Kiefer (1993).
The kinetic term in (\ref{ham1}) exhibits an interesting
structure: The DeWitt metric $G_{ab,cd}$ has locally
the signature $\mbox{diag}(-,+,+,+,+,+)$, rendering the kinetic
term {\em indefinite}. Moreover, the one {\em minus sign} in
the signature suggests that the corresponding degree
of freedom plays the role of an ``intrinsic time''
(Zeh~1999). In general this does not, however,
render (\ref{ham1}) a hyperbolic equation, since even after
dividing out the diffeomorphisms -- going to
the {\em superspace} of all three-geometries -- there
remains in general an infinite number of minus signs.
In the special, but interesting, case of perturbations around
closed Friedmann cosmologies, however, one global
minus sign remains, and one is left with a truly
hyperbolic equation (Giulini~1995). A Cauchy problem
with respect to intrinsic time may then be posed.
The minus sign in the DeWitt metric can be associated with
the local scale part, $\sqrt{h}$, of the three-metric.
The presence of the minus sign in the DeWitt metric has
an interesting interpretation: It reflects the fact
that gravity is {\em attractive} (Giulini and Kiefer~1994).
This can be investigated by considering the most general
class of ultralocal DeWitt metrics which are characterised
by the occurrence of some additional parameter $\alpha$:
\begin{equation}
G_{ab,cd}^{\alpha}=h^{-1/2}(h_{ac}h_{bd}+
h_{ad}h_{bc}-2\alpha h_{ab}h_{cd})\ ,
\end{equation}
where $\alpha=0.5$ is the value corresponding to general relativity.
One finds that there exists a critical value,
$\alpha_c=1/3$, such that for $\alpha<\alpha_c$ the DeWitt metric
would become positive definite. One also finds that for
$\alpha<\alpha_c$ gravity would become repulsive in the
following sense: First, the second time derivative of the
total volume $V=\int \mbox{d}^3x\sqrt{h}$ (for lapse equal to one)
would become, for positive three-curvature, positive instead of
negative, therefore leading to an acceleration. Second,
in the coupling to matter the sign of the gravitational constant
would change. From the observed amount of helium one can
infer that $\alpha$ must lie between 0.4 and 0.55.
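The role of the trace part can be made explicit by a short check (a straightforward evaluation added here for illustration): inserting a pure trace deformation $u^{ab}=\sigma h^{ab}$ into $G^{\alpha}_{ab,cd}$ gives
\begin{displaymath}
G^{\alpha}_{ab,cd}\,u^{ab}u^{cd}=\sigma^2 h^{-1/2}\left(3+3-2\alpha\cdot 9\right)
=6\sigma^2 h^{-1/2}\left(1-3\alpha\right)\ ,
\end{displaymath}
which is negative for $\alpha>1/3$ (one minus sign, as for the value $\alpha=0.5$ of general relativity) and positive for $\alpha<\alpha_c=1/3$; for traceless deformations the $\alpha$-term drops out and the quadratic form is positive.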
Standard quantum theory employs the mathematical structure
of a Hilbert space which is needed for the probability
interpretation. Does such a structure also exist in
quantum gravity? On a kinematical level, for wave functionals
which are not yet necessarily solutions of the constraint
equations, one can try to start with the standard
Schr\"odinger-type inner product
\begin{equation}
\int{\cal D}h_{ab}\Psi^*[h_{ab}({\vec x})]
\Psi[h_{ab}({\vec x})]\equiv (\Psi,\Psi)_S\ . \label{sp}
\end{equation}
For wave functionals which satisfy the diffeomorphism constraints
(\ref{diffeo1}), this would yield divergences since the
integration runs over all ``gauge orbits''. In the connection
representation, a preferred measure exists with respect to which
the wave functionals are square integrable functions on the
space of connections, see the contributions by Ashtekar,
Lewandowski, and Rovelli to this volume. The construction
is possible because the Hilbert space can be viewed as a limit
of Hilbert spaces with finitely many degrees of freedom.
It leads to interesting results for the spectra of geometric
operators such as the area operator. However, no such
product is known in geometrodynamics.
Since physical wave functionals have to obey (\ref{ham1})
{\em and} (\ref{diffeo1}), it might be sufficient if a Hilbert-space
structure existed on the space of solutions, not necessarily
on the space of all functionals such as in (\ref{sp}).
Since (\ref{ham1}) has locally the form of a Klein-Gordon equation,
one might expect to use the inner product
\begin{equation}
\mbox{i}\int\Pi_{\vec x}\mbox{d}\Sigma^{ab}({\vec x})\Psi^*[h_{ab}]
\left(G_{ab,cd}\stackrel{\rightarrow}{\frac{\delta}
{\delta h_{cd}}}-\stackrel{\leftarrow}{\frac{\delta}
{\delta h_{cd}}}G_{ab,cd}\right)\Psi[h_{ab}]\equiv
(\Psi,\Psi)_{KG}\ . \label{kg}
\end{equation}
The (formal) integration runs over a five-dimensional
hypersurface at each space point, which is spacelike
with respect to the DeWitt metric. The product (\ref{kg})
is invariant with respect to deformations of this
hypersurface and therefore independent of ``intrinsic time''.
Similar to the situation with the relativistic particle,
however, the inner product (\ref{kg}) is {\em not}
positive definite. For the {\em free} relativistic particle
one can perform a consistent restriction to a
``positive-frequency sector'' in which the analogue of (\ref{kg})
is manifestly positive, {\em provided} the spacetime background
and the potential (which must be positive) are stationary, i.e.,
if there exists a time-like Killing vector which also preserves
the potential. Otherwise, ``particle production'' occurs
and the one-particle interpretation of the theory cannot be maintained.
It has been shown that such a restriction to ``positive
frequencies'' is {\em not} possible in quantum
geometrodynamics (Kucha\v{r}~1992), the reason being that
the Hamiltonian is not stationary. As I shall describe in Sect.~4,
one can make, at least for certain states in the ``one-loop level''
of the semiclassical approximation, a consistent
restriction to a positive-definite sector of (\ref{kg}).
For the relativistic particle one leaves the one-particle
sector and proceeds to a field-theoretic setting, if
one has to address situations where the restriction to
positive frequencies is no longer possible. One then arrives
at wave {\em functionals} for which a Schr\"odinger-type of inner
product can be formulated.
Can one apply
a similar procedure for the Wheeler-DeWitt equation?
Since quantum geometrodynamics is already a field theory,
this would mean performing the transition to a ``third-quantised''
theory in which the state in (\ref{wdw}) is itself turned into
an operator. The formalism for such a theory is still in
its infancy and will not be presented here
(see e.g. Kucha\v{r}~1992). In a sense, superstring
theory can be interpreted as providing such a framework.
All these problems could be avoided if it were possible
to ``solve'' the constraints classically and make a
transition to the physical degrees of freedom,
upon which the standard Schr\"odinger inner product
could be imposed. This would
correspond to the choice of a time variable before quantisation.
Formally, one would have to perform the canonical transformation
\begin{equation}
(h_{ab},p^{cd}) \longrightarrow (X^A,P_A;\phi^i,p_i)\ , \label{can}
\end{equation}
where $A$ runs from 1 to 4, and $i$ runs from 1 to 2.
$X^A$ and $P_A$ are the kinematical ``embedding variables'',
while $\phi^i$ and $p_i$ are the dynamical, physical, degrees
of freedom. Unfortunately, such a reduction can only be performed
in special situations, such as weak gravitational waves, but not
in the general case, see Isham (1992) and Kucha\v{r} (1992).
The best one can do is to choose the so-called
``York time'', but the corresponding reduction cannot be
performed explicitly. Again, only on the one-loop level
of the semiclassical approximation (see Sect.~4) can the
equivalence of the Schr\"odinger product for the reduced
variables and the Klein-Gordon inner product for the constrained
variables be shown.
The problems of time and Hilbert space are thus not yet resolved
at the most fundamental level. It is thus not clear, for example,
whether (\ref{wdw}) can sensibly be interpreted only as an eigenvalue
equation for eigenvalue zero.
Thus the options that will be discussed
in the rest of my contribution are
\begin{itemize}
\item to study a semiclassical approximation and to aim at
a consistent treatment of conceptual issues at that level.
This is done in Sect.~4. Or
\item to look for sensible boundary conditions for the
Wheeler-DeWitt equation and to discuss directly solutions
to this equation. This is done in the rest of this section.
\end{itemize}
\subsection{Role of boundary conditions}
Boundary conditions play a different role in quantum mechanics
and quantum cosmology. In quantum mechanics (more generally,
quantum field theory with an external background), boundary
conditions can be imposed with respect to the external
time parameter: Either as a condition on the wave function
at a given time, or as a condition on asymptotic states
in scattering situations. On the other hand, the
Wheeler-DeWitt equation (\ref{wdw}) is a ``timeless'' equation
with a Klein-Gordon type of kinetic term.
What is the role of boundary conditions in quantum cosmology?
Since the time of Newton one is accustomed to distinguish
between dynamical laws and initial conditions. However,
this is not a priori clear in quantum cosmology, and it might
well be that boundary conditions are part of the
dynamics. Sometimes quantum cosmology is even called
a {\em theory of initial
conditions} (Hartle~1997). Certainly, ``initial''
can here have two meanings: On the one hand, it can refer
to initial conditions of the classical Universe. This
presupposes the validity of a semiclassical approximation
(see Sect.~4) and envisages that particular solutions
of (\ref{wdw}) could {\em select} a subclass of classical solutions
in the semiclassical limit. On the other hand, ``initial''
can refer to boundary conditions being imposed directly
on (\ref{wdw}). Since (\ref{wdw}) is fundamentally timeless,
this cannot refer
to any classical time parameter but only to intrinsic variables
such as ``intrinsic time''. In the following I shall briefly
review some boundary conditions that have been suggested
in quantum cosmology; details and additional references
can be found in Halliwell (1991).
Let me start with the no-boundary proposal by Hartle and
Hawking (1983). This does not directly yield boundary conditions
on the Wheeler-DeWitt equation, but specifies the wave function
through an integral expression -- through a path integral
in which only a subclass of all possible ``paths'' is
being considered. This subclass comprises all spacetimes that
have (besides the boundary where the arguments of the wave
function are specified) no other boundary. Since the full
quantum-gravitational path integral cannot be evaluated
(and probably cannot even be rigorously defined), one must resort
to approximations. These can be semiclassical or minisuperspace
approximations or a combination of both. It becomes clear
already in a minisuperspace approximation that integration
has to be performed over {\em complex} metrics to guarantee
convergence. Depending on the nature of the saddle point
in a semiclassical limit, the wave function can then refer to
a classically allowed or forbidden situation.
Consider the example of a Friedmann Universe with a
conformally coupled scalar field. After an appropriate
field redefinition, the Wheeler-DeWitt equation assumes
the form of an indefinite harmonic oscillator,
\begin{equation}
\left(\frac{\partial^2}{\partial a^2}- \frac{\partial^2}
{\partial\phi^2}-a^2+\phi^2\right)
\psi(a,\phi)=0 \ .
\end{equation}
The implementation of the no-boundary condition in this
simple minisuperspace model selects the following solutions
(cf. Kiefer~1991)
\begin{eqnarray}
\psi_1(a,\phi) &=& \frac{1}{2\pi}K_0\left(\frac{|\phi^2-a^2|}{2}
\right)\ , \\
\psi_2(a,\phi) &=& \frac{1}{2\pi}I_0\left(\frac{\phi^2-a^2}{2}
\right)\ ,
\end{eqnarray}
where $K_0$ and $I_0$ denote Bessel functions.
It is interesting to note that these solutions do not
reflect the classical behaviour of the system (the classical
solutions are Lissajous ellipses confined to a rectangle in
configuration space, see Kiefer~1990) -- $I_0$ diverges for
large arguments, while $K_0$ diverges for vanishing argument
(``light cone'' in configuration space). Such features cannot
always be seen in a semiclassical limit.
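That $\psi_1$ and $\psi_2$ indeed solve the above equation can be checked directly (a short verification added here for completeness): writing $\psi(a,\phi)=f(u)$ with $u=(\phi^2-a^2)/2$, one finds
\begin{displaymath}
\left(\frac{\partial^2}{\partial a^2}-\frac{\partial^2}{\partial\phi^2}
-a^2+\phi^2\right)f(u)=-2\left(uf''(u)+f'(u)-uf(u)\right)\ ,
\end{displaymath}
so the Wheeler-DeWitt equation reduces to the modified Bessel equation of order zero, whose solutions are $I_0(u)$ and $K_0(u)$ (the latter taken with argument $|u|$ away from the ``light cone'' $\phi^2=a^2$).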
Another boundary condition is the so-called
tunneling condition (Vilenkin 1998). It is also formulated
in general terms -- superspace should contain ``outgoing
modes'' only. However, as with the no-boundary proposal,
a concrete discussion can only be made within approximations.
Typically, while the no-boundary proposal leads to {\em real}
solutions of the Wheeler-DeWitt equation, the tunneling proposal
predicts {\em complex} solutions. This is most easily seen in
the semiclassical approximation (see Sect.~4), where the former
predicts $\cos S$-type of solutions, while the latter predicts
$\exp\mbox{i} S$-type of solutions. (The name ``tunneling proposal''
comes from the analogy with situations such as $\alpha$-decay
in nuclear physics where an outgoing wave is present
after tunneling from the nucleus.) A certain danger is connected
with the word ``outgoing'' because it has a temporal connotation
although (\ref{wdw}) is timeless. A time parameter emerges only in
a semiclassical approximation, see the next section.
A different type of boundary condition is the SIC proposal
by Conradi and Zeh (1991). It demands that the wave function
be simple for small scale factors, i.e. that it does not
depend on other degrees of freedom. The explicit expressions
exhibit many similarities to the no-boundary wave function,
but since the boundary condition is directly imposed
on the wave function without use of path integrals,
it is much more convenient for a discussion of models
which correspond to a classically recollapsing universe.
What are the physical applications that one could
possibly use to distinguish between the various boundary
conditions? Some issues are the following:
\begin{itemize}
\item {\em Probability for inflation}: It is often assumed that the
Universe underwent a period of exponential expansion at
an early stage (see also Sect.~4.3). The question therefore
arises whether quantum cosmology can predict how ``likely''
the occurrence of inflation is. Concrete calculations
address the question of the probability distribution for
the initial values of certain fields that are responsible
for inflation. Since such calculations necessarily involve
the validity of a semiclassical approximation (otherwise
the notion of inflation would not make sense), I shall
give some more details in the next section.
\item {\em Primordial black-hole production}: The production of primordial
black holes during an inflationary period can in principle
also be used to discriminate between boundary conditions, see
e.g. Bousso and Hawking (1996).
\item {\em Cosmological parameters}: If the wave function is peaked
around definite values of fundamental fields, these values
may appear as ``constants of Nature'' whose values can thereby be
predicted. This was tentatively done for the
cosmological constant (Coleman~1988). Alternatively, the
anthropic principle may be invoked to select amongst the
values allowed by the wave function.
\item {\em Arrow of time}: Definite conclusions about the
arrow of time in the Universe (and the interior of black
holes) can be drawn from solutions to the Wheeler-DeWitt
equation, see Kiefer and Zeh (1995).
\end{itemize}
Quantum cosmology is of course not restricted to quantum
general relativity. It may also be discussed within
effective models of string theory, see e.g. D\c{a}browski
and Kiefer (1997), but I shall not discuss this here.
\section{Emergence of a classical world}
As I have reviewed in Sect.~3, there is no notion of
spacetime at the full level of quantum cosmology. This was
already anticipated by Lema\^{\i}tre (1931), who wrote:
\begin{quote}
If the world has begun with a single quantum, the notions
of space and time would altogether fail to have any meaning
at the beginning \ldots If this suggestion is correct,
the beginning of the world happened a little before
the beginning of space and time.
\end{quote}
It is not clear what ``before'' means in an atemporal
situation, but it is obvious that the emergence of the
usual notion of spacetime within quantum cosmology needs
an explanation. This is done in two steps: Firstly,
a semiclassical approximation to quantum gravity must
be performed (Sect.~4.1). This leads to the recovery
of an {\em approximate} Schr\"odinger equation of non-gravitational
fields with respect to the semiclassical background.
Secondly, the emergence of classical properties
must be explained (Sect.~4.2). This is achieved through the
application of the ideas presented in Sect.~2.2.
A more technical review is Kiefer (1994),
see also Brout and Parentani (1999).
A final subsection is devoted to the emergence of classical
fluctuations which can serve as seeds for the
origin of structure in the Universe.
\subsection{Semiclassical approximation to quantum gravity}
The starting point is the observation that there occur
different scales in the fundamental equations (\ref{ham1})
and (\ref{diffeo1}):
The Planck mass $m_p$ associated with the gravitational part,
and other scales contained implicitly in
$\hat{H}{}^{\rm mat}_{\perp}$. Even for ``grand-unified
theories'' the relevant particle scales are at least
three orders of magnitude smaller than $m_p$. For this reason
one can apply Born-Oppenheimer type of techniques that are
suited to the presence of different scales. In molecular
physics, the large difference between nuclear mass and
electron mass leads to a slow motion for the nuclei
and the applicability of an adiabatic approximation.
A similar method is also applied in the nonrelativistic
approximation to the Klein-Gordon equation,
see Kiefer and Singh (1991).
In the lowest order of the semiclassical approximation,
the wave functional appearing in (\ref{ham1}) and (\ref{diffeo1})
can be written in the form
\begin{eqnarray}
|{\mbox{\boldmath$\Psi$}}[h_{ab}]\big>=
\mbox{e}^{\,{}^{\textstyle{\mbox{i} m_p^2
{\mbox{\boldmath$S$}[h_{ab}]}}}}|\Phi [h_{ab}]\big>\ , \label{semi}
\end{eqnarray}
where ${\mbox{\boldmath$S$}[h_{ab}]}$ is a purely gravitational
Hamilton-Jacobi function. This is a solution of the vacuum
Einstein-Hamilton-Jacobi equations -- the gravitational constraints
with the Hamilton-Jacobi values of momenta (gradients
of ${\mbox{\boldmath$S$}[h_{ab}]}$).
Substitution of (\ref{semi}) into (\ref{ham1}) and (\ref{diffeo1}) leads
to new equations for the state vector of matter fields $|\Phi[h_{ab}]\big>$
depending parametrically on the spatial metric
\begin{eqnarray}
&&\left \{\frac 1\mbox{i} G_{ab,cd}
\frac{\delta{\mbox{\boldmath$S$}}}
{\delta h_{ab}} \frac{\delta}
{\delta h_{cd}}+\hat{H}_{\perp}^{\rm mat}(h_{ab})\right.\nonumber\\
&&\,
\left.+\frac 1{2\mbox{i}}\,
\mbox{\vphantom{$\left\{\frac{L^{L^{L^a}}_{L}}
{L^L_L g}\right\}$}}^{\prime\prime}\!\!
G_{ab,cd}
\frac{\delta^2{\mbox{\boldmath$S$}}}
{\delta h_{ab}\delta h_{cd}}^{\prime\prime}
-\frac 1{2m_p^2}\,\mbox{\vphantom{$\left\{\frac{L^{L^{L^a}}_{L}}
{L^L_L g}\right\}$}}^{\prime\prime}\!\!
G_{ab,cd}\frac{\delta^2}
{\delta h_{ab}\delta h_{cd}}^{\prime\prime}
\right\}
|\Phi[h_{ab}]\big>=0, \label{hsemi}\\
&&\left\{\mbox{\vphantom{$\left\{\frac{L^{L^{L^a}}_{L}}
{L^L_L g}\right\}$}}^{\prime\prime}\!\!
-\frac{2}\mbox{i} h_{ab}\nabla_c
\frac{\delta}{\delta h_{bc}}^{\prime\prime}+
\hat{H}_a^{\rm mat}(h_{ab})\right\}|\Phi[h_{ab}]\big>=0. \label{dsemi}
\end{eqnarray}
It should be emphasised that on a formal level the factor
ordering can be {\em fixed} by demanding the equivalence
of various quantisation schemes, see Al'tshuler and Barvinsky (1996)
and the references therein.
The conventional derivation of the Schr\"odinger equation from the
Wheeler-DeWitt equation consists in the assumption of {\it small
back reaction of quantum matter on the metric background} which
at least heuristically allows one to discard the third and the fourth
terms in (\ref{hsemi}). Then one considers $|\Phi[h_{ab}]\big>$ on the
solution of classical vacuum Einstein equations $h_{ab}({\bf x},t)$
corresponding to the Hamilton-Jacobi function
${\mbox{\boldmath$S$}[h_{ab}]}$,
$|\Phi(t)\big>=|\Phi[h_{ab}({\bf x},t)]\big>$.
After a certain {\em choice} of lapse and shift functions $(N^{\perp},N^a)$,
this solution satisfies the canonical equations with the momentum
$p^{ab}=\delta{\mbox{\boldmath$S$}}/\delta h_{ab}$, so that the quantum
state $|\Phi(t)\big>$ satisfies the evolutionary equation obtained by using
\begin{eqnarray}
\frac{\partial}{\partial t}\,|\Phi(t)\big>=
\int \mbox{d}^3 x \,\dot{h}_{ab}({\bf x})\,
\frac{\delta}{\delta h_{ab}({\bf x})}
|\Phi[h_{ab}]\big>
\end{eqnarray}
together with the truncated version of equations (\ref{hsemi})
-- (\ref{dsemi}).
The result is the Schr\"odinger equation of quantised matter fields in the
external classical gravitational field,
\begin{eqnarray}
&&\mbox{i}\frac{\partial}{\partial t}\,
|\Phi(t)\big>=\hat{H}{}^{\rm mat}|\Phi(t)\big>, \label{sch}\\
&&\hat{H}{}^{\rm mat}=
\int \mbox{d}^3 x \left\{N^{\perp}({\bf x})
\hat{H}{}^{\rm mat}_{\perp}({\bf x})+
N^a({\bf x})\hat{H}{}^{\rm mat}_a({\bf x})\right\}. \label{Hmat}
\end{eqnarray}
Here, $\hat{H}{}^{\rm mat}$ is a matter field Hamiltonian in the Schr\"odinger
picture, parametrically depending on (generally nonstatic) metric
coefficients of the curved spacetime background. In this way,
the Schr\"odinger equation for non-gravi\-ta\-tio\-nal fields
has been recovered from quantum gravity as an approximation.
A derivation similar to the above can already be performed
within ordinary quantum mechanics if one assumes that the
total system is in a ``timeless'' energy eigenstate,
see Briggs and Rost (1999). In fact, Mott (1931) had already
considered a time-independent Schr\"odinger equation
for a total system consisting of an $\alpha$-particle
and an atom. If the state of the $\alpha$-particle can be
described by a plane wave (corresponding in this case to
high velocities), one can make an ansatz similar to (\ref{semi})
and derive a time-dependent Schr\"odinger equation for the
atom alone, in which time is {\em defined} by the $\alpha$-particle.
In the context of quantum gravity, it is most
interesting to continue the semiclassical approximation
to higher orders and to {\em derive} quantum-gravitational
correction terms to (\ref{sch}). This was done in
Kiefer and Singh (1991) and, giving a detailed interpretation
in terms of a Feynman diagrammatic language, in
Barvinsky and Kiefer (1998). I shall give a brief description
of these terms and refer
the reader to Barvinsky and Kiefer (1998) for all details.
At the next order of the semiclassical expansion, one obtains
corrections to (\ref{sch}) which are proportional
to $m_p^{-2}$. These terms can be added to the matter Hamiltonian,
leading to an {\em effective} matter Hamiltonian at this order.
It describes the back-reaction effects of quantum matter on
the dynamical gravitational background as well as proper
quantum effects of the gravitational field itself.
Most of these terms are nonlocal in character:
they contain the gravitational potential generated by the
back reaction of quantum matter as well as the gravitational
potential generated by the one-loop stress tensor of
vacuum gravitons. In cases where the matter energy density
is much bigger than the energy density of graviton vacuum
polarisation, the dominant correction term is given by the
kinetic energy of the gravitational radiation produced
by the back reaction of quantum matter sources.
A possible observational test of these correction terms
could be provided by the anisotropies in the cosmic
microwave background (Rosales~1997). The temperature fluctuations
are of the order of $10^{-5}$, reflecting within inflationary models
the ratio $m_I/m_p\approx 10^{-5}$, where $m_I$ denotes
the mass of the scalar field responsible for inflation
(the ``inflaton'').
The correction terms would then be $(m_I/m_p)^2\approx 10^{-10}$
times a numerical constant, which could in principle be large enough
to be measurable with future satellite experiments such as
MAP or PLANCK.
Returning to the ``one-loop order'' (\ref{semi}) of the
semiclassical approximation, it is possible to address the
issue of probability for inflation that was mentioned in
Sect.~3.3, see Barvinsky and Kamenshchik (1994).
In this approximation, the inner products
(\ref{sp}) and (\ref{kg}) are equivalent and positive
definite, see Al'tshuler and Barvinsky (1996). They can
therefore be used to calculate quantum-mechanical
probabilities in the usual sense.
To discuss this probability, the reduced density matrix for
the inflaton,
$\varphi$, should be investigated. This density matrix
is calculated from the full quantum state upon integrating out
all other degrees of freedom (here called $f$),
\begin{equation} \rho_t(\varphi,\varphi')=\int{\cal D}f\
\psi_t^*(\varphi',f)\psi_t(\varphi,f)\ , \label{rho} \end{equation}
where $\psi_t$ denotes the quantum state (\ref{semi})
after the time parameter $t$ from (\ref{sch}) has been introduced.
To calculate the probability one has to set $\varphi'=\varphi$.
In earlier work, the saddle-point approximation was performed
only at the leading (tree-level) order.
This yields
\begin{equation} \rho(\varphi,\varphi)=\exp[\pm I(\varphi)]\ , \label{tree} \end{equation}
where $I(\varphi)=-3m_p^4/8V(\varphi)$ and
$V(\varphi)$ is the inflationary potential. The lower sign corresponds
to the no-boundary condition, while the upper sign corresponds to
the
tunneling condition. The problem with (\ref{tree}) is that $\rho$ is not
normalisable: mass scales bigger than $m_p$ contribute significantly
and results
based on tree-level approximations can thus not be trusted.
The situation is improved considerably if
loop effects are taken into account (Barvinsky and Kamenshchik~1994).
They are incorporated through the loop effective action $\Gamma_{loop}$,
which is calculated on
de~Sitter space. In the limit of large $\varphi$
(that is relevant for investigating normalisability)
this yields in the one-loop approximation
\begin{equation} \Gamma_{loop}(\varphi)\vert_{H\to\infty}
\approx Z\ln\frac{H}{\mu}\ , \end{equation}
where $\mu$ is a renormalisation mass parameter, and $Z$ is the
anomalous scaling.
Instead of (\ref{tree}) one has now
\begin{eqnarray} \rho(\varphi,\varphi)&\approx& \ H^{-2}(\varphi)
\exp\left(\pm I(\varphi)-\Gamma_{loop}(\varphi)\right)
\nonumber\\
&\approx& \ \exp\left(\pm\frac{3m_p^4}{8V(\varphi)}\right)
\varphi^{-Z-2} \ . \end{eqnarray}
This density matrix is normalisable provided $Z>-1$.
This in turn leads to reasonable constraints on the
particle content of the theory, see Barvinsky and Kamenshchik (1994).
It turns out that the tunneling wave function (with an
appropriate particle content) can predict the occurrence
of a sufficient amount of inflation.
In earlier tree-level calculations the use of an anthropic principle
was needed to get a sensible result from a non-normalisable wave
function through {\em conditional} probabilities, see
e.g. Hawking and Turok (1998). This is no longer the case here.
\subsection{Decoherence in quantum cosmology\protect
\footnote{This
and the next subsection are adapted from Kiefer (1999).}}
As in ordinary quantum mechanics, the semiclassical limit
is not yet sufficient to understand classical behaviour.
Since the superposition principle is also valid in
quantum gravity, quantum entanglement will easily occur,
leading to superpositions of ``different spacetimes''.
It is for this reason that the process of decoherence
must be invoked to justify the emergence of a classical
spacetime.
Joos (1986) gave a heuristic
example within Newtonian (quantum) gravity, in which the superposition
of different metrics is suppressed by the interaction with
ordinary particles. How does decoherence work in quantum cosmology?
In particular, what constitutes system and environment in a case
where nothing is external to the Universe? The question is
how to divide the degrees of freedom in the configuration
space in a sensible way. It was suggested by Zeh (1986)
to treat global degrees of freedom such as the scale factor
(radius) of the Universe or an inflaton field as ``relevant''
variables that are decohered by ``irrelevant'' variables
such as density fluctuations, gravitational waves, or other
fields. Quantitative calculations can be found, e.g., in
Kiefer (1987,1992).
Denoting the ``environmental'' variables collectively again by
$f$, the reduced density matrix for e.g. the scale factor $a$
is found in the usual way by integrating out the $f$-variables,
\begin{equation}
\rho(a,a')=\int{\cal D}f\ \Psi^*(a',f)\Psi(a,f)\ . \label{nondiag}
\end{equation}
In contrast to the discussion following (\ref{rho}),
the {\em non-diagonal} elements of the density matrix must be
calculated.
The resulting terms are ultraviolet-divergent and must therefore
be regularised. This was investigated in detail for the
case of bosons (Barvinsky et al.~1999c) and fermions
(Barvinsky et al.~1999a). A crucial point is that standard
regularisation schemes, such as dimensional regularisation
or $\zeta$-regularisation, do not work -- they lead to
$\mbox{Tr}\rho^2=\infty$, since the sign in the
exponent of the Gaussian density matrix is changed
from minus to plus by regularisation.
These schemes therefore spoil one of the
important properties that a density matrix must obey.
This kind of problem has not been noticed before, since
these regularisation schemes had not been applied
to the calculation of reduced density matrices.
How, then, can (\ref{nondiag}) be regularised? In Barvinsky et al. (1999a,c)
we put forward the principle that there should be no decoherence
if there is no particle creation -- decoherence is an
irreversible process. In particular, there should be no decoherence
for static spacetimes. This has led to the use of a certain
conformal reparametrisation for bosonic fields and
a certain Bogoliubov transformation for fermionic fields.
As a concrete example, we have calculated the reduced density matrix
for a situation where the semiclassical background
is a de~Sitter spacetime, $a(t)=H^{-1}\cosh(Ht)$,
where $H$ denotes the Hubble parameter. This is the
most interesting example for the early Universe, since it is
generally assumed that the Universe underwent such an exponential,
``inflationary'', phase, caused by an effective
cosmological constant. Taking various ``environments'', the following
results are found for the main contribution to
(the absolute value of) the decoherence
factor, $|D|$, that multiplies the reduced density matrix for
the ``isolated'' case:
\begin{itemize}
\item {\em Massless conformally-invariant field}: Here,
\[ |D|=1\ , \]
since no particle creation and therefore no decoherence effect
takes place.
\item {\em Massive scalar field}: Here,
\[ |D|\approx\exp\left(-\frac{\pi m^3a}{128}(a-a')^2\right)\ , \]
and one notices increasing decoherence for increasing $a$.
\item {\em Gravitons}: This is similar to the previous case, but
the mass $m$ is replaced by the Hubble parameter $H$,
\[ |D|\approx\exp\left(-CH^3a(a-a')^2\right)\ , \; C>0\ . \]
\item {\em Fermions}:
\[ |D|\approx\exp\left(-C'm^2a^2H^2(a-a')^2\right)\ , \; C'>0\ . \]
For high-enough mass, the decoherence effect by fermions
is thus smaller than the corresponding influence of bosons.
\end{itemize}
It becomes clear from these examples that the Universe
acquires classical properties after the onset of the
inflationary phase. ``Before'' this phase, the Universe
was in a timeless quantum state which does not possess any
classical properties. Viewed backwards, different semiclassical
branches would meet and interfere to form this timeless
quantum state (Barvinsky et al.~1999b).
For these considerations it is of importance that
there is a discrimination between the various degrees of freedom.
On the fundamental level of full superstring theory, for example,
such a discrimination is not possible and one would therefore
not expect any decoherence effect to occur at that level.
In general one would expect not only one semiclassical
component of the form (\ref{semi}), but also many superpositions of such
terms. Since (\ref{wdw}) is a real equation, one would in particular
expect to have a superposition of (\ref{semi}) with its complex conjugate.
The no-boundary state in quantum cosmology has, for example,
such a form. Decoherence also acts between such semiclassical
branches, although somewhat less effectively than within
one branch (Barvinsky et al.~1999c). For a macroscopic Universe,
this effect is big enough to warrant the consideration of
only one semiclassical component of the form (\ref{semi}).
This constitutes a symmetry-breaking effect similar to the
symmetry breaking for chiral molecules: While in the former case
the symmetry with respect to complex conjugation is broken,
in the latter case one has a breaking of parity invariance
(compare Figures~2 and 3 above).
It is clear that decoherence can only act if there is
a peculiar, low-entropy, state for the very early Universe.
This lies at the heart of the arrow of time in
the Universe.
A simple initial condition like the one in Conradi and Zeh (1991)
can in principle lead to a quantum state describing
the arrow of time, see also Zeh (1999).
\subsection{Classicality of primordial fluctuations}
According to the inflationary scenario of the early Universe,
all structure in the Universe (galaxies, clusters of galaxies)
arises from {\em quantum} fluctuations of scalar fields and
scalar fluctuations of the metric. Because also fluctuations
of the metric are involved, this constitutes an effect
of (linear) quantum gravity.
These early fluctuations manifest themselves as anisotropies
in the cosmic microwave background radiation and have been
observed both by the COBE satellite and earth-based telescopes.
Certainly, these observed fluctuations are classical stochastic
quantities. How do the quantum fluctuations become classical?
It is clear that for the purpose of this discussion
the global gravitational degrees of freedom can already
be considered as classical, i.e. the decoherence process
of Sect.~4.2 has already been effective. The role of the gravitational
field is then twofold: firstly, the expanding Universe
influences the dynamics of the quantum fluctuations.
Secondly, linear fluctuations of the gravitational field
are themselves part of the quantum system.
The physical wavelength of a mode with wavenumber $k$
is given by
\begin{equation} \lambda_{phys}=\frac{2\pi a}{k}\ .
\end{equation}
Since during the inflationary expansion the Hubble parameter $H$
remains constant, the physical wavelength of the modes
leaves the particle horizon, given by $H^{-1}$, at a certain
stage of inflation, provided that inflation does not
end before this happens. Modes that are outside the
horizon thus obey
\begin{equation} \frac{k}{aH}\ll 1\ . \end{equation}
It turns out that the dynamical behaviour of these modes
lies at the heart of structure formation. These modes re-enter
the horizon in the radiation- and matter-dominated phases
which take place after inflation.
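As a simple illustration, consider exactly exponential expansion,
$a(t)=a_0\mbox{e}^{Ht}$ with constant $H$ and $a_0$ denoting the scale factor
at the onset of inflation. A mode with wavenumber $k$ then crosses the
horizon, $k/(aH)=1$, at the time
\[ t_k=\frac{1}{H}\,\ln\frac{k}{a_0 H}\ , \]
and $k/(aH)$ decreases exponentially afterwards, so the mode remains outside
the horizon for the rest of the inflationary phase.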
For a quantitative treatment, the Schr\"odinger equation (\ref{sch})
has to be solved for the fluctuations in the inflationary
Universe. The easiest example, which nevertheless exhibits the
same features as a realistic model, is a massless scalar field.
It is, moreover, most convenient to go to Fourier space and to
multiply the corresponding variable with $a$. The resulting
fluctuation variable is called $y_k$, see Kiefer and Polarski (1998)
for details. Taking as a natural initial state the ``vacuum state'',
the solution of the Schr\"odinger equation (\ref{sch}) for the
(complex) variables $y_k$ reads\footnote{Since there is no
self-interaction of the field,
different modes $y_k$ decouple, which is why I shall suppress
the index $k$ in the following.}
\begin{equation}
\chi(y,t) = \left(\frac{1}{\pi|f|^2}\right)^{1/2}
\exp\left(-\frac{1-2\mbox{i}F}{2|f|^2}|y|^2\right)\; , \label{gauss}
\end{equation}
where
\begin{eqnarray}
|f|^2 &=& (2k)^{-1}(\cosh 2r+\cos 2\varphi\sinh 2r), \\
F &=& \frac{1}{2} \sin 2\varphi\sinh 2r\ ,
\end{eqnarray}
and explicit expressions can be given for the time-dependent
functions $r$ and $\varphi$.
The Gaussian state (\ref{gauss}) is nothing but a {\em squeezed state},
a state that is well known from quantum optics. The parameters
$r$ and $\varphi$ have the usual interpretation as
squeezing parameter and squeezing angle, respectively.
It turns out that during the inflationary expansion
$r\to\infty$, $|F|\gg 1$, and $\varphi\to 0$ (meaning here
a squeezing in momentum). In this limit, the state (\ref{gauss})
also becomes a WKB state par excellence. As a result of this extreme
squeezing, this state cannot be distinguished within
the given observational capabilities from a classical
stochastic process, as thought experiments demonstrate
(Kiefer and Polarski~1998, Kiefer et al.~1998a).
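To get a feeling for how extreme this squeezing is, the expressions for
$|f|^2$ and $F$ can be evaluated numerically. The following short Python
snippet does this for illustrative values of $k$, $r$ and $\varphi$
(the numbers are arbitrary and serve only to exhibit the qualitative
behaviour):
\begin{verbatim}
import numpy as np

# Illustrative values for one mode (arbitrary units).
k   = 1.0     # comoving wavenumber
r   = 50.0    # squeezing parameter; r grows without bound during inflation
phi = 1e-6    # squeezing angle; phi -> 0 means squeezing in momentum

# |f|^2 and F as given in the text.
f2 = (np.cosh(2*r) + np.cos(2*phi)*np.sinh(2*r)) / (2*k)
F  = 0.5 * np.sin(2*phi) * np.sinh(2*r)

# The width of |chi|^2 in the amplitude y grows with |f|, while the
# phase F becomes huge -- the WKB/squeezed-state behaviour described above.
print(f"|f|^2 = {f2:.3e},  F = {F:.3e}")
\end{verbatim}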
In the Heisenberg picture, the special properties of the state (\ref{gauss})
are reflected in the fact that the field operators commute
at different times, i.e.
\begin{equation}
[\hat{y}(t_1),\hat{y}(t_2)]\approx 0\ . \label{qnd}
\end{equation}
(Kiefer et al.~1998b). In the language of quantum optics,
this is the condition for a quantum-nondemolition
measurement: An observable obeying (\ref{qnd}) can repeatedly
be measured with great accuracy. It is important to
note that these properties remain valid after the modes
have reentered the horizon in the radiation-dominated
phase that follows inflation (Kiefer et al.~1998a).
As is well known, squeezed states are very sensitive to
interactions with other degrees of freedom
(Giulini et al.~1996). Since such interactions are
unavoidably present in the early Universe, the question arises
whether they would not spoil the above picture.
However, most interactions involve couplings in
field amplitude space (as opposed to field momentum space)
and therefore,
\begin{equation}
[\hat{y},\hat{H}_{int}] \approx 0\ ,
\end{equation}
where $\hat{H}_{int}$ denotes the interaction Hamiltonian.
The field amplitudes therefore become an excellent pointer basis:
This basis defines the classical property, and due to (\ref{qnd})
this property is conserved in time. The decoherence time
caused by $\hat{H}_{int}$ is very small in most cases.
Employing for the sake of simplicity a linear interaction
with a coupling constant $g$, one finds
for the decoherence time scale (Kiefer and Polarski~1998)
\begin{equation}
t_D\approx \frac{\lambda_{phys}}{g\mbox{e}^r}\ .
\end{equation}
For modes that presently re-enter the horizon, one has
$\lambda_{phys}\approx 10^{28}\mbox{cm}$, $\mbox{e}^r
\approx 10^{50}$ and therefore
\begin{equation}
t_D\approx 10^{-31}g^{-1}\mbox{sec}\ .
\end{equation}
Unless $g$ is very small, decoherence acts on a very short
timescale. This conclusion is reinforced if higher-order
interactions are taken into account.
It must be noted that the interaction of the field modes
with their ``environment'' is an {\em ideal} measurement --
the probabilities are unchanged and the main predictions
of the inflationary scenario remain the same (which manifest
themselves, for example, in the form of the anisotropy spectrum
of the cosmic microwave background). This would not be the case,
for example, if one concluded that {\em particle number}
instead of {\em field amplitude} would define the robust
classical property. Realistic models of the early Universe
must of course take into account complicated nonlinear
interactions, see e.g. Calzetta and Hu (1995) and Matacz (1997).
Although these models will affect the values of the decoherence
timescales, the conceptual conclusions drawn above will
remain unchanged.
The results of the last two subsections give rise to the
{\em hierarchy of classicality} (Kiefer and Joos~1999):
The global gravitational background degrees of freedom
are the first variables that assume classical properties.
They then provide the necessary condition for other
variables to exhibit classical behaviour, such as the
primordial fluctuations discussed here. These then serve as the
seeds for the classical structure of galaxies and clusters
of galaxies that are part of the observed Universe.
\section{Acknowledgements}
I thank the organisers of this school for inviting
me to this interesting and stimulating meeting. I am
grateful to the Institute of Advanced Study Berlin for its
kind hospitality during the writing of this contribution.
I also thank John Briggs, Erich Joos, Alexander Kamenshchik,
and Jorma Louko for their comments on this manuscript.
\section{Introduction}\label{section1}
\PARstart{M}{OTIVATED} by extensive applications in different fields, such as flocking, distributed computation, synchronization, and formation and containment control \cite{c1}-\hspace{-0.5pt}\cite{c10}, distributed cooperative control for dynamical multiagent systems has received a great deal of attention in different engineering communities in recent years. In many practical applications of multiagent systems, all agents are required to reach an agreement on some quantity, which is usually referred to as consensus or synchronization. Generally speaking, according to the autonomous dynamics of each agent, multiagent systems can be categorized into linear ones and nonlinear ones. For linear multiagent systems, the whole dynamics can often be divided into two parts: consensus dynamics and disagreement dynamics, where the disagreement dynamics is independent of the consensus dynamics. However, for nonlinear multiagent systems, the whole dynamics cannot be decomposed in this way; among nonlinear systems, the Lipschitz nonlinearity has a specific structure and has been extensively discussed. By using the structural features of the above two types of multiagent systems, different consensus protocols were proposed and some important and interesting conclusions were shown in \cite{c11}-\hspace{-0.5pt}\cite{c18}, where consensus performance constraints were not considered. \par
For many practical multiagent systems to achieve consensus, some constraints should be imposed, such as network structure constraints \cite{c19}, motion constraints on the maximum speed and acceleration \cite{c190}, and utility constraints \cite{c191}. If the consensus constraints include certain cost functions that are required to be minimized or maximized, then these problems can be modeled as optimal or suboptimal consensus problems. The cost functions can be divided into individual ones and global ones. For individual cost functions, some global goals are realized by optimizing the local objective function of each agent, as shown in \cite{c20} and \cite{c21}. For global cost functions, it is required that the whole consensus performance is minimized or maximized by local interactions among neighboring agents. For first-order multiagent systems, Cao and Ren \cite{c22} proposed a linear quadratic global cost function and gave optimal consensus criteria under the condition that the interaction topology was modeled as a complete graph. For second-order multiagent systems, Guan {\it et al}. \cite{c23} discussed guaranteed-performance consensus by the hybrid impulsive control approach. For high-order multiagent systems, guaranteed-performance consensus analysis and design problems were investigated in \cite{c24,c25,c26,c27}. It should be pointed out that guaranteed-performance consensus is intrinsically suboptimal and that it is difficult to achieve optimal consensus for second-order and high-order multiagent systems with global cost functions. Furthermore, the guaranteed-performance consensus criteria in \cite{c24,c25,c26,c27} are not completely distributed since they depend on the Laplacian matrix of the interaction topology or its nonzero eigenvalues. \par
In \cite{c28,c29,c30}, an interesting scaling-adaptive strategy was proposed to realize completely distributed consensus control for linear and Lipschitz nonlinear multiagent systems without performance constraints, where the impacts of the nonzero eigenvalues of the Laplacian matrix were eliminated by introducing a scaling factor. The scaling factor is inversely proportional to the minimum nonzero eigenvalue of the Laplacian matrix of the interaction topology and cannot be precisely determined, since this eigenvalue depends on the algebraic connectivity of the interaction topology, which is difficult to determine. When performance constraints are not involved, it is not necessary to determine the scaling factor. However, the precise value of the scaling factor is required when there are global cost functions. Therefore, the scaling-adaptive strategy cannot be applied to investigate multiagent systems with performance constraints. To the best of our knowledge, the following interesting and challenging guaranteed-performance consensus problems are still open: (i) How to achieve completely distributed guaranteed-performance consensus; (ii) How to determine the impacts of switching topologies with adaptively adjusting weights and of the Lipschitz nonlinearity; (iii) How to regulate the consensus control gain and to guarantee the consensus performance among all agents.\par
The current paper proposes a completely distributed guaranteed-performance consensus scheme in the sense that consensualization criteria do not depend on any information of interaction topologies and switching motions. Firstly, a new guaranteed-performance consensus protocol with switching topologies and adaptively adjusting weights is constructed, which can regulate consensus performance among all agents instead of only neighboring agents. Then, by using the specific feature of a complete graph that all its eigenvalues are identical, a translation-adaptive strategy is given to realize guaranteed-performance consensus in a completely distributed manner and adaptive guaranteed-performance consensualization criteria for high-order linear multiagent systems with switching topologies are presented. Furthermore, a regulation approach of the consensus control gain and the guaranteed-performance cost is shown by the linear matrix inequality (LMI) tool. Finally, adaptive guaranteed-performance consensus design criteria for high-order Lipschitz nonlinear multiagent systems with switching topologies are proposed. \par
Compared with closely related works on consensus, the current paper has four novel features as follows. Firstly, the current paper proposes a new translation-adaptive strategy to realize completely distributed guaranteed-performance consensus control. The methods for guaranteed-performance consensus in \cite{c23,c24,c25,c26,c27} are not completely distributed and the scaling-adaptive strategy in \cite{c28,c29,c30} cannot guarantee the consensus regulation performance. Secondly, the current paper determines the impacts of switching topologies with time-varying weights and guarantees the consensus regulation performance among all agents. In \cite{c23,c24,c25,c26,c27}, the impacts of switching topologies with time-varying weights on the consensus regulation performance among neighboring agents cannot be dealt with. Thirdly, an explicit expression of the guaranteed-performance cost is determined and an approach to regulate the consensus control gain and the guaranteed-performance cost is presented. The methods in \cite{c23,c24,c25,c26,c27} cannot determine the impacts of time-varying weights on the guaranteed-performance cost and cannot regulate the consensus control gain. Fourthly, the current paper investigates the case in which each agent contains Lipschitz nonlinear dynamics. In \cite{c23,c24,c25,c26,c27}, it was assumed that the dynamics of each agent is linear. \par
The remainder of the current paper is organized as follows. Section II models interaction topologies among agents by switching connected graphs with time-varying weights and describes the adaptive guaranteed-performance consensus problem. In Section III, adaptive guaranteed-performance consensualization criteria for high-order linear multiagent systems are presented. Section IV extends main conclusions for high-order linear multiagent systems to high-order nonlinear ones. Numerical simulations are given to illustrate theoretical results in Section V, followed by some concluding remarks in Section VI. \par
\emph{Notations:} ${\mathbb{R}^d}$ denotes the real column vector space of dimension $d$ and ${\mathbb{R}^{d \times d}}$ stands for the set of $d \times d$ dimensional real matrices. ${I_d}$ denotes the identity matrix of dimension $d$. $0$ and ${\bf{0}}$ represent the scalar zero and the zero column vector with a compatible dimension, respectively. ${{\bf{1}}_N}$ represents the $N$-dimensional column vector whose entries are all equal to $1$. ${Q^T}$ and ${Q^{ - 1}}$ denote the transpose and the inverse matrix of $Q$, respectively. ${R^T} = R > {\rm{0}}$ and ${R^T} = R < {\rm{0}}$ mean that the symmetric matrix $R$ is positive definite and negative definite, respectively. The notation $ \otimes $ stands for the Kronecker product. $\left\| x \right\|$ represents the Euclidean norm ($2$-norm) of the vector $x$. The symmetric terms of a symmetric matrix are denoted by the symbol *.
\section{Problem description}\label{section2}
\subsection{Modeling interaction topology}
A connected undirected graph $G$ with $N$ nodes can be used to depict the interaction topology of a multiagent system with $N$ identical agents, where each node represents an agent, the edge between two nodes denotes the interaction channel between them, and the edge weight stands for the interaction strength. The graph $G$ is said to be connected if there exists an undirected path between any two nodes. The graph $G$ is said to be a complete graph if there exists an undirected edge between any two nodes. It is clear that a complete graph is connected. More basic concepts and results about graph theory can be found in \cite{c300}. \par
Let $\sigma (t):\left[ {\left. {0, + \infty } \right)} \right. \to \kappa $ denote the switching signal, where $\kappa $ is the index set of the switching set consisting of several connected undirected graphs and the switching times $\{ {t_i}:i = 0,1,2, \cdots \} $ satisfy ${t_m} - {t_{m - 1}} \ge {T_{\rm{d}}}~{\rm{ }}(\forall m \ge 1)$ with ${T_{\rm{d}}} > 0$. The index set of all neighbors of node ${v_i}$ is denoted by ${N_{\sigma (t),i}} = \left\{ {k|\left( {{v_k},{v_i}} \right) \in E\left( {{G_{\sigma (t)}}} \right)} \right\}$, where $({v_k},{v_i})$ denotes the edge between node ${v_k}$ and node ${v_i}$ and $E\left( {{G_{\sigma (t)}}} \right)$ is the edge set of the graph ${G_{\sigma (t)}}$. Define the 0-1 Laplacian matrix of ${G_{\sigma (t)}}$ as ${L_{\sigma (t),0}} = \left[ {{l_{\sigma (t),ik}}} \right] \in {\mathbb{R}^{N \times N}}$ with ${l_{\sigma (t),ii}} = - \sum\nolimits_{k = 1,k \ne i}^N {{l_{\sigma (t),ik}}} $, ${l_{\sigma (t),ik}} = - 1$ if $({v_k},{v_i}) \in E\left( {{G_{\sigma (t)}}} \right)$ and ${l_{\sigma (t),ik}} = 0$ otherwise, and define the weighted Laplacian matrix of ${G_{\sigma (t)}}$ as ${L_{\sigma (t),w}} = \left[ {{{\tilde l}_{\sigma (t),ik}}(t)} \right] \in {\mathbb{R}^{N \times N}}$ with ${\tilde l_{\sigma (t),ii}}(t) = - \sum\nolimits_{k = 1,k \ne i}^N {{l_{\sigma (t),ik}}{w_{\sigma (t),ik}}} (t)$ and ${\tilde l_{\sigma (t),ik}}(t) = {l_{\sigma (t),ik}}{w_{\sigma (t),ik}}(t){\rm{ }}(i \ne k)$, where the function ${w_{\sigma (t),ik}}(t) \ge 1$ is designed later. It can be found that ${\tilde l_{\sigma (t),ii}}(t) = \sum\nolimits_{k \in {N_{\sigma (t),i}}} {{w_{\sigma (t),ik}}(t)} $, that ${L_{\sigma (t),0}}{{\bf{1}}_N} = {L_{\sigma (t),w}}{{\bf{1}}_N} = {\bf{0}}$, and that ${L_{\sigma (t),0}}$ is piecewise constant, changing only at switching times, whereas ${L_{\sigma (t),w}}$ may not be constant between switching times. In particular, for both ${L_{\sigma (t),0}}$ and ${L_{\sigma (t),w}}$, the zero eigenvalue is simple and all the other $N - 1$ eigenvalues are positive since all the topologies in the switching set are connected. \par
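To illustrate these definitions, the following Python snippet constructs ${L_{\sigma (t),0}}$ and ${L_{\sigma (t),w}}$ for an assumed four-node path graph with arbitrary admissible edge weights and checks the stated properties; the graph and the weight values are purely illustrative.
\begin{verbatim}
import numpy as np

# Toy connected undirected graph on N = 4 nodes: a path 1-2-3-4.
N = 4
edges = [(0, 1), (1, 2), (2, 3)]

# 0-1 Laplacian L0.
L0 = np.zeros((N, N))
for i, k in edges:
    L0[i, k] = L0[k, i] = -1.0
np.fill_diagonal(L0, -L0.sum(axis=1))

# Weighted Laplacian Lw with edge weights w_ik >= 1 (illustrative values).
w = {(0, 1): 1.0, (1, 2): 2.5, (2, 3): 1.7}
Lw = np.zeros((N, N))
for (i, k), wik in w.items():
    Lw[i, k] = Lw[k, i] = -wik
np.fill_diagonal(Lw, -Lw.sum(axis=1))

ones = np.ones(N)
print(L0 @ ones, Lw @ ones)      # both are the zero vector
print(np.linalg.eigvalsh(L0))    # one zero eigenvalue, the rest positive
\end{verbatim}
\par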
\subsection{Describing guaranteed-performance consensualization}
The dynamics of each agent is described by
\begin{eqnarray}\label{1}
{\dot x_i}(t) = A{x_i}(t) + B{u_i}(t){\rm{ }}\left( {i = 1,2, \cdots ,N} \right),
\end{eqnarray}
where $A \in {\mathbb{R}^{d \times d}},$ $B \in {\mathbb{R}^{d \times p}},$ and ${x_i}(t)$ and ${u_i}(t)$ are the state and the control input of agent $i$, respectively. The following adaptive guaranteed-performance consensus protocol is proposed for agent $i$ to apply the state information of its neighbors
\begin{eqnarray}\label{2}
\left\{ \begin{array}{l}
{u_i}(t) = {K_u}\sum\limits_{k \in {N_{\sigma (t),i}}} {{w_{\sigma (t),ik}}(t)\left( {{x_k}(t) - {x_i}(t)} \right)} , \\
{{\dot w}_{\sigma (t),ik}}(t) = {\left( {{x_k}(t) - {x_i}(t)} \right)^T}{K_w}\left( {{x_k}(t) - {x_i}(t)} \right), \\
{J_x} \! \! = \! \! \frac{1}{N} \! \! \sum\limits_{i = 1}^N \! \! {\sum\limits_{k = 1}^N \! \! {\int_0^{ + \infty } \! \! {{{\left( {{x_k}(t) \! - \! {x_i}(t)} \right)}^T}\! Q \! \left( {{x_k}(t) \! - \! {x_i}(t)} \right)\! {\rm{d}}t} } } , \\
\end{array} \right.
\end{eqnarray}
where ${K_u} \in {\mathbb{R}^{p \times d}}$ and ${K_w} \in {\mathbb{R}^{d \times d}}$ are gain matrices, and ${Q^T} = Q > 0$ is used to guarantee the consensus regulation performance. It is assumed that ${w_{\sigma (t),ik}}(t)$ is a bounded function with an upper bound denoted by ${\gamma _{ik}}$, that ${w_{\sigma (0),ik}}(0) = 1{\rm{ }}(i \ne k)$, and that ${w_{\sigma ({t_m}),ik}}({t_m}) = 1$ if the edge $({v_k},{v_i})$ is newly added at switching time ${t_m}$. In this case, ${w_{\sigma (t),ik}}(t)$ is a practical interaction strength of the channel $({v_k},{v_i})$ if agent $k$ is a neighbor of agent $i$, and can be regarded as a virtual interaction strength of the channel $({v_k},{v_i})$ if agent $k$ is not a neighbor of agent $i$. In particular, the initial value of the interaction strength is set to $1$ once a virtual channel becomes a practical one.\par
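To make the structure of protocol (\ref{2}) concrete, the following sketch evaluates the control inputs ${u_i}(t)$ and the weight derivatives once for given gain matrices; the states, gains and neighbor sets passed to it are placeholders, since ${K_u}$ and ${K_w}$ are designed in Section III.
\begin{verbatim}
import numpy as np

def protocol_step(x, w, neighbors, Ku, Kw):
    """One evaluation of protocol (2): control inputs and weight rates.

    x         : (N, d) array of agent states
    w         : dict {(i, k): w_ik} of current interaction strengths
    neighbors : dict {i: list of neighbor indices k}
    """
    N, d = x.shape
    u = np.zeros((N, Ku.shape[0]))
    wdot = {}
    for i in range(N):
        for k in neighbors[i]:
            e = x[k] - x[i]                   # relative state x_k - x_i
            u[i] += w[(i, k)] * (Ku @ e)      # consensus term of u_i
            wdot[(i, k)] = float(e @ Kw @ e)  # adaptive weight update
    return u, wdot
\end{verbatim}
\par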
The definition of the adaptive guaranteed-performance consensualization of high-order multiagent systems is given as follows.
\begin{df}\label{definition1}
Multiagent system (\ref{1}) is said to be adaptively guaranteed-performance consensualizable by protocol (\ref{2}) if there exist ${K_u}$ and ${K_w}$ such that ${\lim _{t \to + \infty }}\left( {{x_i}(t) - {x_k}(t)} \right) = {\bf{0}}{\rm{ }}\left( {i,k = 1,2, \cdots ,N} \right)$ and ${J_x} \leqslant {J^*}$ for any bounded initial states ${x_i}(0){\rm{ }}\left( {i = 1,2, \cdots ,N} \right)$, where ${J^*}$ is said to be the guaranteed-performance cost.
\end{df}
The current paper mainly investigates the following four problems: (i) How to design ${K_u}$ and ${K_w}$ such that multiagent system (\ref{1}) achieves adaptive guaranteed-performance consensus; (ii) How to determine the impacts of switching topologies on adaptive guaranteed-performance consensus under the condition that interaction strengths are time-varying; (iii) How to regulate the consensus control gain and the guaranteed-performance cost; (iv) How to extend main results for high-order linear multiagent systems to high-order Lipschitz nonlinear ones.\par
\begin{rmk}\label{remark1}
The consensus protocol given in (\ref{2}) can realize completely distributed guaranteed-performance consensus control by adaptively regulating interaction weights, whereas global information of the interaction topology is required if the consensus protocol is not adaptive. Moreover, protocol (\ref{2}) has two critical characteristics. The first one is that the interaction strengths ${w_{\sigma (t),ik}}(t){\rm{ (}}i \ne k{\rm{)}}$ are time-varying, whereas most consensus works with switching topologies assume that the interaction strengths are time-invariant and only the neighbor set is time-varying (see \cite{c10}, \cite{c14}, \cite{c24} and references therein). Moreover, for the adaptive consensus protocols with fixed topologies in \cite{c28,c29,c30}, the interaction strengths may be monotonically increasing. However, for adaptive consensus with switching topologies, the interaction strengths may decrease abruptly at some switching times. Hence, it is more difficult to determine the impacts of switching topologies on the adaptive consensus property and on the upper bound of the guaranteed-performance cost. The second one is that protocol (\ref{2}) can guarantee the consensus regulation performance between any two agents via ${J_x}$ even if they are not neighboring. In contrast, only the consensus regulation performance among neighboring agents can be ensured by the index function in \cite{c23,c24,c25,c26,c27}. Furthermore, the integrand of ${J_x}$ is usually called the performance regulation term, and the desired regulation can be realized by choosing a proper $Q$. The matrix $Q$ can be applied to ensure the regulation performance of relative motions among neighboring agents. For practical multiagent systems, $Q$ is often chosen as a diagonal matrix. In this case, a bigger coupling weight in $Q$ ensures a smaller squared sum of the corresponding element of the state error.
\end{rmk}
\section{Adaptive guaranteed-performance consensualization criteria}\label{section3}
In this section, by the nonsingular transformation, the consensus and disagreement dynamics of multiagent system (\ref{1}) are first determined, respectively. Then, based on the Riccati inequality, adaptive guaranteed-performance consensualization criteria are proposed, and the guaranteed-performance cost ${J^*}$ is meanwhile determined. Finally, an approach to regulate the consensus control gain and the guaranteed-performance cost is presented.\par
Let $x(t) = {\left[ {x_1^T(t),x_2^T(t), \cdots ,x_N^T(t)} \right]^T}{\rm{ ,}}$ then the dynamics of multiagent system (\ref{1}) with protocol (\ref{2}) can be written as
\begin{eqnarray}\label{3}
\dot x(t) = \left( {{I_N} \otimes A - {L_{\sigma (t),w}} \otimes B{K_u}} \right)x(t).
\end{eqnarray}
Let $0 = {\lambda _{\sigma (t),1}} < {\lambda _{\sigma (t),2}} \le \cdots \le {\lambda _{\sigma (t),N}}$ denote the eigenvalues of ${L_{\sigma (t),0}}$, then there exists an orthonormal matrix ${U_{\sigma (t)}} = \left[ {{{{{\bf{1}}_N}} \mathord{\left/ {\vphantom {{{{\bf{1}}_N}} {\sqrt N }}} \right. \kern-\nulldelimiterspace} {\sqrt N }},{{\tilde U}_{\sigma (t)}}} \right]$ with ${\tilde U_{\sigma (t)}} \in {\mathbb{R}^{N \times (N - 1)}}$ such that
\[
U_{\sigma (t)}^T{L_{\sigma (t),0}}{U_{\sigma (t)}} = {\rm{diag}}\left\{ {{\lambda _{\sigma (t),1}},{\lambda _{\sigma (t),2}}, \cdots ,{\lambda _{\sigma (t),N}}} \right\}.
\]
Since all interaction topologies in the switching set are undirected, one has ${L_{\sigma (t),w}}{{\bf{1}}_N} = {\bf{0}}$ and ${\bf{1}}_N^T{L_{\sigma (t),w}} = {\bf{0}}$. Thus, one can show that
\[
U_{\sigma (t)}^T{L_{\sigma (t),w}}{U_{\sigma (t)}} = \left[ {\begin{array}{*{20}{c}}
0 & {{{\bf{0}}^T}} \\
{\bf{0}} & {\tilde U_{\sigma (t)}^T{L_{\sigma (t),w}}{{\tilde U}_{\sigma (t)}}} \\
\end{array}} \right].
\]
Due to ${T_{\rm{d}}} > 0$, the matrix ${U_{\sigma (t)}}$ is piecewise continuous and is constant in the switching interval. Hence, let $\tilde x(t) = \left( {U_{\sigma (t)}^T \otimes {I_d}} \right)x(t) = {\left[ {\tilde x_1^T(t),{\zeta ^T}(t)} \right]^T}$ with $\zeta (t) = {\left[ {\tilde x_2^T(t),\tilde x_3^T(t), \cdots ,\tilde x_N^T(t)} \right]^T}$, then multiagent system (\ref{3}) can be transformed into
\begin{eqnarray}\label{4}
{\dot {\tilde x}_1}(t) = A{\tilde x_1}(t),
\end{eqnarray}
\begin{eqnarray}\label{5}
\dot \zeta (t) = \left( {{I_{N - 1}} \otimes A - \tilde U_{\sigma (t)}^T{L_{\sigma (t),w}}{{\tilde U}_{\sigma (t)}} \otimes B{K_u}} \right)\zeta (t).
\end{eqnarray}
Define
\begin{eqnarray}\label{6}
{x_{\bar c}}(t) \buildrel \Delta \over = \sum\limits_{i = 2}^N {{U_{\sigma (t)}}{e_i} \otimes {{\tilde x}_i}(t)},
\end{eqnarray}
\begin{eqnarray}\label{7}
{x_c}(t) \buildrel \Delta \over = {U_{\sigma (t)}}{e_1} \otimes {\tilde x_1}(t) = \frac{1}{{\sqrt N }}{{\bf{1}}_N} \otimes {\tilde x_1}(t),
\end{eqnarray}
where ${e_i}$ $(i \in \{ 1,2, \cdots ,N\} )$ denotes an $N$-dimensional column vector with the $i$th element $1$ and $0$ elsewhere. Due to
\[
\sum\limits_{i = 2}^N {{e_i} \otimes {{\tilde x}_i}(t)} = {\left[ {{{\bf{0}}^T},{\zeta ^T}(t)} \right]^T},
\]
one can show by (\ref{6}) that
\begin{eqnarray}\label{8}
{x_{\bar c}}(t) = \left( {{U_{\sigma (t)}} \otimes {I_d}} \right){\left[ {{{\bf{0}}^T},{\zeta ^T}(t)} \right]^T}.
\end{eqnarray}
By (\ref{7}), one has
\begin{eqnarray}\label{9}
{x_c}(t) = \left( {{U_{\sigma (t)}} \otimes {I_d}} \right){\left[ {\tilde x_1^T(t),{{\bf{0}}^T}} \right]^T}.
\end{eqnarray}
From (\ref{8}) and (\ref{9}), one can see that ${x_{\bar c}}(t){\rm{ }}$ and ${x_c}(t){\rm{ }}$ are linearly independent since ${U_{\sigma (t)}} \otimes {I_d}$ is nonsingular. Due to $\left( {U_{\sigma (t)}^T \otimes {I_d}} \right)x(t) = {\left[ {\tilde x_1^T(t),{\zeta ^T}(t)} \right]^T}$, one has \[x(t) = {x_{\bar c}}(t) + {x_c}(t){\rm{.}}\]
According to the structure of ${x_c}(t){\rm{ }}$ shown in (\ref{7}), multiagent system (\ref{1}) achieves consensus if and only if ${\lim _{t \to + \infty }}\zeta (t) = {\bf{0}}$; that is, subsystems (\ref{4}) and (\ref{5}) describe the consensus and disagreement dynamics of multiagent system (\ref{1}).\par
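The decomposition above can also be verified numerically. The following sketch, which reuses the illustrative Laplacians from Section II, builds ${U_{\sigma (t)}}$ from the eigenvectors of ${L_{\sigma (t),0}}$ and checks both the block structure of $U_{\sigma (t)}^T{L_{\sigma (t),w}}{U_{\sigma (t)}}$ and the identity $x(t) = {x_{\bar c}}(t) + {x_c}(t)$; all numerical values are illustrative.
\begin{verbatim}
import numpy as np

N, d = 4, 2
L0 = np.array([[ 1., -1.,  0.,  0.],
               [-1.,  2., -1.,  0.],
               [ 0., -1.,  2., -1.],
               [ 0.,  0., -1.,  1.]])
Lw = np.array([[ 1. , -1. ,  0. ,  0. ],
               [-1. ,  3.5, -2.5,  0. ],
               [ 0. , -2.5,  4.2, -1.7],
               [ 0. ,  0. , -1.7,  1.7]])

vals, vecs = np.linalg.eigh(L0)
U = np.hstack([np.ones((N, 1)) / np.sqrt(N), vecs[:, 1:]])  # [1_N/sqrt(N), tilde U]

M = U.T @ Lw @ U
print(np.allclose(M[0, :], 0), np.allclose(M[:, 0], 0))  # first row/column vanish

x = np.random.randn(N * d)
xt = np.kron(U.T, np.eye(d)) @ x
xc    = np.kron(U[:, :1], np.eye(d)) @ xt[:d]   # consensus part x_c
xcbar = np.kron(U[:, 1:], np.eye(d)) @ xt[d:]   # disagreement part x_cbar
print(np.allclose(x, xc + xcbar))               # x = x_c + x_cbar
\end{verbatim}
\par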
Based on the above analysis, the following theorem gives a sufficient condition for adaptive guaranteed-performance consensualization in terms of the Riccati inequality, which can realize completely distributed guaranteed-performance consensus control.\par
\begin{thm}\label{theorem1}
For any given translation factor $\gamma > 0$, multiagent system (\ref{1}) is adaptively guaranteed-performance consensualizable by protocol (\ref{2}) if there exists a matrix ${R^T} = R > 0$ such that
\[
RA + {A^T}R - \gamma RB{B^T}R + 2Q \le 0.
\]
In this case, ${K_u} = {B^T}R$, ${K_w} = RB{B^T}R$ and the guaranteed-performance cost satisfies that ${J^ * } = J_{x(0)}^* + J_{x(t)}^*,$ where
\[
J_{x(0)}^* = {x^T}(0)\left( {\left( {{I_N} - \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T} \right) \otimes R} \right)x(0),
\]
\[
J_{x(t)}^* \! = \! \gamma \! \! \int_0^{ + \infty } \! {{x^T}(t) \! \left( {\left( {{I_N} - \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T} \right) \otimes RB{B^T}R} \right) \! x(t)} {\rm{d}}t.
\]\vspace{0.05em}
\end{thm}
\begin{proof}
First of all, we design ${K_u}$ and ${K_w}$ such that ${\lim _{t \to + \infty }}\zeta (t) = {\bf{0}}$. Construct a new Lyapunov function candidate as follows
\[
\hspace{-7em}V(t) = {\zeta ^T}(t)\left( {{I_{N - 1}} \otimes R} \right)\zeta (t) \vspace{-4pt}
\]
\[
\hspace{3em} + \sum\limits_{i = 1}^N {\sum\limits_{k \in {N_{\sigma (t),i}}} {\frac{{{{\left( {{w_{\sigma (t),ik}}(t) + {l_{\sigma (t),ik}}} \right)}^2}}}{2}} } \vspace{-10pt}
\]
\begin{eqnarray}\label{10}
\hspace{4em} + \frac{\gamma }{N}\sum\limits_{i = 1}^N {\sum\limits_{k = 1,k \ne i}^N {\left( {{\gamma _{ik}} - {w_{\sigma (t),ik}}(t)} \right)} } ,
\end{eqnarray}
where $\gamma > 0$ and $R$ is the solution of $RA + {A^T}R - \gamma RB{B^T}R + 2Q \le 0$. Due to ${R^T} = R > 0$ and ${\gamma _{ik}} \ge {w_{\sigma (t),ik}}(t)$ $\left( {i,k = 1,2, \cdots ,N} \right)$, one can find that $V(t) \ge 0$. Since ${l_{\sigma (t),ik}}$ is piecewise continuous and is constant in the switching interval, the time derivative of $V(t)$ is
\[
\hspace{-1.25em}\dot V(t) = {\zeta ^T}(t)( {{I_{N - 1}} \otimes \left( {RA + {A^T}R} \right) - \tilde U_{\sigma (t)}^T{L_{\sigma (t),w}}{{\tilde U}_{\sigma (t)}} }
\vspace{-4pt}
\]
\[
\hspace{3.25em} \otimes\left. {\left( {RB{K_u}} \right.\left. { + K_u^T{B^T}R} \right)} \right)\zeta (t)
\sum\limits_{i = 1}^N {\sum\limits_{k \in {N_{\sigma (t),i}}} \hspace{-3pt}{\left( {{w_{\sigma (t),ik}}(t)} \right.} }
\vspace{-12pt}
\]
\begin{eqnarray}\label{11}
\hspace{3em}\left. { + {l_{\sigma (t),ik}}} \right){{\dot w}_{\sigma (t),ik}}(t)
\hspace{-3pt}-\hspace{-3pt} \frac{\gamma }{N}\sum\limits_{i = 1}^N {\sum\limits_{k = 1,k \ne i}^N {{{\dot w}_{\sigma (t),ik}}(t)} } .
\end{eqnarray}
From (\ref{2}), one can obtain that
\[
\sum\limits_{i = 1}^N {\sum\limits_{k \in {N_{\sigma (t),i}}}\hspace{-10pt} {\left( {{w_{\sigma (t),ik}}(t) \hspace{-3pt}+\hspace{-3pt} {l_{\sigma (t),ik}}} \right)} {{\dot w}_{\sigma (t),ik}}(t)} \hspace{-1pt}-\hspace{-1pt} \frac{\gamma }{N}\sum\limits_{i = 1}^N \hspace{-3pt}{\sum\limits_{\scriptstyle k = 1 \hfill \atop
\scriptstyle k \ne i \hfill}^N\hspace{-3pt} {{{\dot w}_{\sigma (t),ik}}(t)} } \vspace{-12pt}
\]
\begin{eqnarray}\label{12}
= 2{x^T}(t)\left( {\left( {{L_{\sigma (t),w}} - {L_{\sigma (t),0}} - \gamma {L_N}} \right) \otimes {K_w}} \right)x(t),
\end{eqnarray}
where ${L_N}$ is the Laplacian matrix of a complete graph with the weights of all the edges ${1 \mathord{\left/ {\vphantom {1 N}} \right. \kern-\nulldelimiterspace} N}$. Due to ${U_{\sigma (t)}}U_{\sigma (t)}^T = {I_N}$, one can show that
\[{\tilde U_{\sigma (t)}}\tilde U_{\sigma (t)}^T = {I_N} - \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T = {L_N}.\]
Thus, one can derive that
\[
\hspace{-6em} {x^T}(t)\left( {\left( {{L_{\sigma (t),w}} - {L_{\sigma (t),0}} - \gamma {L_N}} \right) \otimes {K_w}} \right)x(t) \vspace{-4pt}
\]
\[
= {\zeta ^T}(t)((\tilde U_{\sigma (t)}^T{L_{\sigma (t),w}}{{\tilde U}_{\sigma (t)}} - \tilde U_{\sigma (t)}^T{L_{\sigma (t),0}}{{\tilde U}_{\sigma (t)}} \vspace{-10pt}
\]
\begin{eqnarray}\label{13}
\hspace{-7.5em}~~- \gamma {I_{N - 1}}) \otimes {K_w})\zeta (t).
\end{eqnarray}
Let ${K_u} = {B^T}R$ and ${K_w} = RB{B^T}R$, then from (\ref{11}) to (\ref{13}), by $\tilde U_{\sigma (t)}^T{L_{\sigma (t),0}}{\tilde U_{\sigma (t)}} = {\rm{diag}}\left\{ {{\lambda _{\sigma (t),2}},{\lambda _{\sigma (t),3}}, \cdots ,{\lambda _{\sigma (t),N}}} \right\}$, one has
\[\dot V(t) = \! \sum\limits_{i = 2}^N \! {\tilde x_i^T(t) \! \left( {RA \! + \! {A^T}R \! - \! 2\left( {{\lambda _{\sigma (t),i}} \! + \! \gamma } \right)RB{B^T}R} \right) \! {{\tilde x}_i}(t)} .\]
Due to $\gamma > 0$ and ${\lambda _{\sigma (t),i}} > 0{\rm{ }}\left( {i = 2,3, \cdots ,N} \right)$, one has
\[
\hspace{-9em}RA + {A^T}R - 2\left( {{\lambda _{\sigma (t),i}} + \gamma } \right)RB{B^T}R \vspace{-2pt}
\]
\[
\hspace{4em}\le RA + {A^T}R - 2\gamma RB{B^T}R{\rm{ }}(i = 2,3, \cdots ,N).
\]
If $RA + {A^T}R - \gamma RB{B^T}R + 2Q \le 0$, then $RA + {A^T}R - 2\gamma RB{B^T}R < 0$ since ${Q^T} = Q > 0$ and $RB{B^T}R \ge 0$. Thus, one can obtain that $\dot V(t) \le 0$ and $\dot V(t) \equiv 0$ if and only if ${\lim _{t \to + \infty }}\zeta (t) = {\bf{0}}$, which means that multiagent system (\ref{1}) achieves adaptive consensus. \par
In the following, the guaranteed-performance cost is determined. One can show that
\[
\hspace{-6em}\frac{1}{N}\sum\limits_{i = 1}^N {\sum\limits_{k = 1}^N {{{\left( {{x_k}(t) - {x_i}(t)} \right)}^T}Q\left( {{x_k}(t) - {x_i}(t)} \right)} } \vspace{-10pt}
\]
\begin{eqnarray}\label{14}
\hspace{10em}= {x^T}(t)\left( {2{L_N} \otimes Q} \right)x(t).
\end{eqnarray}
Due to $U_{\sigma (t)}^T{L_N}{U_{\sigma (t)}} = {\rm{diag}}\left\{ {0,{I_{N - 1}}} \right\}$, one has
\begin{eqnarray}\label{15}
{x^T}(t)\left( {{L_N} \otimes Q} \right)x(t) = \sum\limits_{i = 2}^N {\tilde x_i^T(t)Q{{\tilde x}_i}(t)} .
\end{eqnarray}
For $h \ge 0$, define
\[{J_{xh}} \buildrel \Delta \over = \frac{1}{N}\sum\limits_{i = 1}^N {\sum\limits_{k = 1}^N {\int_0^h {{{\left( {{x_k}(t) - {x_i}(t)} \right)}^T}Q\left( {{x_k}(t) - {x_i}(t)} \right){\rm{d}}t} } } .\]
By (\ref{14}) and (\ref{15}), one can show that
\[
{J_{xh}} = \sum\limits_{i = 2}^N {\int_0^h {2\tilde x_i^T(t)Q{{\tilde x}_i}(t){\rm{d}}t} } .
\]
If $RA + {A^T}R - \gamma RB{B^T}R + 2Q \le 0$, then one has
\[
\hspace{-3em}{J_{xh}} \hspace{-3pt}=\hspace{-3pt} \sum\limits_{i = 2}^N {\int_0^h {\hspace{-4pt}2\tilde x_i^T(t)Q{{\tilde x}_i}(t){\rm{d}}t} }\hspace{-2pt} +\hspace{-3pt} \int_0^h \hspace{-4pt}{\dot V(t)} {\rm{d}}t\hspace{-2pt} -\hspace{-3pt} V(h) \hspace{-2pt}+ \hspace{-3pt}V(0)\vspace{-10pt}
\]
\begin{eqnarray}\label{16}
\hspace{1em} \le\hspace{-2pt} - \hspace{-0pt}\gamma \sum\limits_{i = 2}^N {\int_0^{ + \infty }\hspace{-4pt} {\tilde x_i^T(t)RB{B^T}R{{\tilde x}_i}(t){\rm{d}}t} } \hspace{-2pt}+ \hspace{-3pt}V(0) \hspace{-3pt}-\hspace{-3pt} V(h).
\end{eqnarray}
Due to $\zeta (t) = \left[ {{{\bf{0}}_{(N - 1)d \times d}},{I_{(N - 1)d}}} \right]\left( {U_{\sigma (t)}^T \otimes {I_d}} \right)x(t)$ and ${\tilde U_{\sigma (t)}}\tilde U_{\sigma (t)}^T = {I_N} - {N^{ - 1}}{{\bf{1}}_N}{\bf{1}}_N^T$, one can show that
\[
\hspace{-16em}{\zeta ^T}(0)\left( {{I_{N - 1}} \otimes R} \right)\zeta (0) \vspace{-10pt}
\]
\begin{eqnarray}\label{17}
\hspace{4em}= {x^T}(0)\left( {\left( {{I_N} - \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T} \right) \otimes R} \right)x(0).
\end{eqnarray}
Due to ${w_{\sigma (0),ik}}(0) = 1$ and ${K_w} = RB{B^T}R \ge 0$, one has
\[
\hspace{-8em}\sum\limits_{i = 1}^N \sum\limits_{k \in {N_{\sigma (0),i}}} \frac{{{{\left( {{w_{\sigma (0),ik}}(0) + {l_{\sigma (0),ik}}} \right)}^2}}}{2} \vspace{-10pt}
\]
\begin{eqnarray}\label{18}
\hspace{3em}- \sum\limits_{i = 1}^N {\sum\limits_{k \in {N_{\sigma (h),i}}} {\frac{{{{\left( {{w_{\sigma (h),ik}}(h) + {l_{\sigma (h),ik}}} \right)}^2}}}{2}} } \le 0.
\end{eqnarray}
Due to ${\lim _{t \to + \infty }}\left( {{\gamma _{ik}} - {w_{\sigma (t),ik}}(t)} \right) = 0$, one can show that
\begin{eqnarray}\label{19}
\mathop {\lim }\limits_{h \to + \infty } \sum\limits_{i = 1}^N {\sum\limits_{k = 1,k \ne i}^N {\left( {{\gamma _{ik}} - {w_{\sigma (h),ik}}(h)} \right)} } = 0,
\end{eqnarray}
\begin{eqnarray}\label{20}
\sum\limits_{i = 1}^N {\sum\limits_{\scriptstyle k = 1 \hfill \atop
\scriptstyle k \ne i \hfill}^N {\hspace{-2pt}\left( {{\gamma _{ik}} \hspace{-2pt}- \hspace{-2pt}{w_{\sigma (0),ik}}(0)} \right)} } \hspace{-2pt}=\hspace{-2pt} \sum\limits_{i = 1}^N {\sum\limits_{\scriptstyle k = 1 \hfill \atop
\scriptstyle k \ne i \hfill}^N {\int_0^{ + \infty } \hspace{-2pt}{{{\dot w}_{\sigma (t),ik}}(t)} {\rm{d}}t} } .
\end{eqnarray}
Since
\[
\sum\limits_{i = 1}^N {\sum\limits_{\scriptstyle k = 1 \hfill \atop
\scriptstyle k \ne i \hfill}^N {\int_0^{ + \infty } \hspace{-4pt} {{{\dot w}_{\sigma (t),ik}}(t)} {\rm{d}}t} } \hspace{-2pt}= \hspace{-2pt}2N \hspace{-2pt}\int_0^{ + \infty } \hspace{-6pt}{{x^T}(t)\left( {{L_N} \otimes {K_w}} \right)x(t)} {\rm{d}}t,
\]
it can be derived by (\ref{20}) that
\[
\hspace{-10em}\frac{\gamma }{N}\sum\limits_{i = 1}^N {\sum\limits_{k = 1,k \ne i}^N {\left( {{\gamma _{ik}} - {w_{\sigma (0),ik}}(0)} \right)} } \vspace{-10pt}
\]
\begin{eqnarray}\label{21}
\hspace{5em} = 2\gamma \sum\limits_{i = 2}^N {\int_0^{ + \infty } {\tilde x_i^T(t)RB{B^T}R{{\tilde x}_i}(t){\rm{d}}t.} }
\end{eqnarray}
Let $h \to + \infty $, then one can obtain from (\ref{16})-(\ref{19}) and (\ref{21}) that
\[
\hspace{-6.5em}{J^ * } = {x^T}(0)\left( {\left( {{I_N} - \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T} \right) \otimes R} \right)x(0) \vspace{-4pt}
\]
\[
\hspace{2.5em}{\rm{ + }}\gamma \int_0^{ + \infty }\hspace{-4pt} {{x^T}(t)\left(\hspace{-2pt} {\left( {{I_N} \hspace{-2pt}-\hspace{-2pt} \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T} \right)\hspace{-2pt} \otimes\hspace{-2pt} RB{B^T}R} \right)\hspace{-2pt}x(t)} {\rm{d}}t.
\]
Thus, the conclusion of Theorem \ref{theorem1} is obtained.
\end{proof}
If $(A,B)$ is stabilizable, then the Riccati equation $RA + {A^T}R - \gamma RB{B^T}R + 2Q = 0$ has a unique positive definite solution $R$ for any given $\gamma > 0$, as shown in \cite{c31}. In this case, the \emph{are} solver in the Matlab toolbox can be used to solve this Riccati equation. It should be pointed out that $\gamma$ represents the rightward translation of the eigenvalues of ${L_{\sigma (t),0}}$, which can be specified in advance. Intuitively speaking, because $ - RB{B^T}R$ is negative semidefinite, a large $\gamma$ can decrease the consensus control gain and the guaranteed-performance cost.\par
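As an alternative to the Matlab \emph{are} solver, the design equation can also be solved with the standard continuous-time algebraic Riccati solver in SciPy by an appropriate choice of the weighting matrices. The following sketch does this for an illustrative double-integrator agent model and an arbitrarily chosen $\gamma$; the system matrices are assumptions made only for this example.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator agent dynamics.
A = np.array([[0., 1.],
              [0., 0.]])
B = np.array([[0.],
              [1.]])
Q = np.eye(2)          # performance weight in J_x
gamma = 1.0            # translation factor, chosen freely (> 0)

# R A + A^T R - gamma R B B^T R + 2 Q = 0 is the standard CARE
# A^T X + X A - X B r^{-1} B^T X + q = 0 with q = 2 Q and r = I / gamma.
R = solve_continuous_are(A, B, 2.0 * Q, np.eye(B.shape[1]) / gamma)

Ku = B.T @ R           # consensus gain K_u = B^T R
Kw = R @ B @ B.T @ R   # adaptive-weight gain K_w = R B B^T R
print(np.linalg.eigvalsh(R))   # R is positive definite
\end{verbatim}
\par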
Furthermore, the consensus control gain and the guaranteed-performance cost can be regulated by introducing a gain factor $\varepsilon > 0$ with $R \le \varepsilon I$, which means that $RB{B^T}R \le {\varepsilon ^2}B{B^T}$ if the maximum eigenvalue of $B{B^T}$ is not larger than $1$; that is, ${\lambda _{\max }}(B{B^T}) \le 1$. In this case, $\varepsilon $ can be regarded as an upper bound on the maximum eigenvalue of $R$. Since both $\gamma$ and $R$ are now variables, it is difficult to determine the solution of $RA + {A^T}R - \gamma RB{B^T}R + 2Q \le 0$ directly. Based on LMI techniques and the Schur complement lemma in \cite{c32}, the following corollary presents an adaptive guaranteed-performance consensualization criterion with a given gain factor, which can be checked by the \emph{feasp} solver in the LMI toolbox.\par
\begin{col}\label{corollary1}
For any given gain factor $\varepsilon > 0$, multiagent system (\ref{1}) is adaptively guaranteed-performance consensualizable by protocol (\ref{2}) if ${\lambda _{\max }}(B{B^T}) \le 1$ and there exist $\gamma > 0$ and ${\tilde R^T} = \tilde R \ge {\varepsilon ^{ - 1}}I$ such that
\[\tilde \Xi = \left[ {\begin{array}{*{20}{c}}
{A\tilde R + \tilde R{A^T} - \gamma B{B^T}} & {2\tilde RQ} \\
* & { - 2Q} \\
\end{array}} \right] < 0.\]
In this case, ${K_u} = {B^T}{\tilde R^{ - 1}}$, ${K_w} = {\tilde R^{ - 1}}B{B^T}{\tilde R^{ - 1}}$ and the guaranteed-performance cost satisfies that
\[
{J^ * } = \sum\limits_{i = 2}^N {\left( {\varepsilon {{\left\| {{{\tilde x}_i}(0)} \right\|}^2} + \gamma {\varepsilon ^2}\int_0^{ + \infty } {{{\left\| {{B^T}{{\tilde x}_i}(t)} \right\|}^2}} {\rm{d}}t} \right)} .
\]
\end{col}
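As an alternative to the \emph{feasp} solver, the LMI of Corollary \ref{corollary1} can also be checked numerically with CVXPY. The sketch below is illustrative only: the function name, the strictness margin \texttt{delta}, and the choice of the SCS solver are assumptions and not part of the original method.
\begin{verbatim}
# Minimal sketch (not the authors' code): LMI of Corollary 1 in CVXPY.
# Find gamma > 0 and symmetric R_tilde >= (1/eps) I such that
#   [[A R~ + R~ A^T - gamma B B^T,  2 R~ Q],
#    [2 Q R~,                       -2 Q ]]  <  0.
import cvxpy as cp
import numpy as np

def check_corollary1(A, B, Q, eps, delta=1e-6):
    n = A.shape[0]
    R = cp.Variable((n, n), symmetric=True)
    gamma = cp.Variable(nonneg=True)
    Xi = cp.bmat([[A @ R + R @ A.T - gamma * (B @ B.T), 2 * R @ Q],
                  [2 * Q @ R, -2 * Q]])
    constraints = [R >> np.eye(n) / eps,
                   # symmetrize for safety; strict "< 0" approximated by a margin
                   (Xi + Xi.T) / 2 << -delta * np.eye(2 * n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status, R.value, gamma.value
\end{verbatim}
A feasible status indicates a candidate $(\tilde R, \gamma)$, from which ${K_u} = {B^T}{\tilde R^{ - 1}}$ and ${K_w} = {\tilde R^{ - 1}}B{B^T}{\tilde R^{ - 1}}$ follow as in the corollary.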
\vspace{6pt}
\begin{rmk}\label{remark2}
In \cite{c24}, guaranteed-performance consensus for multiagent systems with time-varying neighbor sets and time-invariant interaction strengths was investigated, where the minimum and maximum nonzero eigenvalues of the Laplacian matrices of all interaction topologies in the switching set are required to design gain matrices of consensus protocols. It should be pointed out that global structure information of interaction topologies of the whole system is required to determine the precise values of the minimum and maximum nonzero eigenvalues. Moreover, their methods cannot deal with time-varying interaction strength cases. By the translation-adaptive strategy, the impacts of both the minimum and maximum nonzero eigenvalues and time-varying interaction strengths are eliminated in Theorem 1 and Corollary 1, and a completely distributed guaranteed-performance consensus control is realized in the sense that consensualization criteria are independent of the Laplacian matrices of interaction topologies in the switching set and their eigenvalues.
\end{rmk}
\begin{rmk}\label{remark3}
The scaling strategy was applied to realize adaptive consensus control in \cite{c28,c29,c30}, where the impacts of switching topologies were not investigated. The scaling factor in the Lyapunov function is inversely proportional to the minimum nonzero eigenvalue of the Laplacian matrix, so this factor is difficult to determine and may be very large since the minimum nonzero eigenvalue may be very small. Thus, the consensus regulation performance cannot be guaranteed, since the scaling factor in the Lyapunov function cannot be eliminated when dealing with guaranteed-performance constraints. By translating all nonzero eigenvalues of the Laplacian matrices in the switching set rightward instead of scaling them, the guaranteed-performance constraints can be dealt with, and the guaranteed-performance cost consists of two terms: the initial state term $J_{x(0)}^*$ and the state integral term $J_{x(t)}^*$. The guaranteed-performance cost contains only the initial state term in \cite{c23,c24,c25,c26,c27}, where the adaptive consensus strategy was not applied. Actually, the state integral term is introduced because the interaction strengths are adaptively adjusted.
\end{rmk}
\section{Extensions to Lipschitz nonlinear cases }\label{section4}
This section extends the adaptive guaranteed-performance consensualization criteria for linear multiagent systems derived in the previous section to multiagent systems in which each agent contains a Lipschitz nonlinearity.\par
The dynamics of each agent is modeled as
\begin{eqnarray}\label{22}
{\dot x_i}(t) = A{x_i}(t) + f({x_i}(t)) + B{u_i}(t){\rm{ }}(i \in \{ 1,2, \cdots ,N\} ),
\end{eqnarray}
where the nonlinear function $f:{\mathbb{R}^d} \times \left[ {0, + \infty } \right) \to {\mathbb{R}^d}$ is continuous and differentiable and satisfies the Lipschitz condition $\left\| {f({x_i}(t)) - f({x_k}(t))} \right\| \le \mu \left\| {{x_i}(t) - {x_k}(t)} \right\|$ with Lipschitz constant $\mu > 0$, and all the other notations are identical to the ones in (\ref{1}). Let $F(x(t)) = {\left[ {{f^T}\left( {{x_1}(t)} \right),{f^T}\left( {{x_2}(t)} \right), \cdots ,{f^T}\left( {{x_N}(t)} \right)} \right]^T}{\rm{.}}$ By an analysis similar to that in the previous section, the dynamics of multiagent system (\ref{22}) with protocol (\ref{2}) can be transformed into
\begin{eqnarray}\label{23}
{\dot {\tilde x}_1}(t) = A{\tilde x_1}(t) + \left( {\frac{1}{{\sqrt N }}{\bf{1}}_N^T \otimes {I_d}} \right)F(x(t)),
\end{eqnarray}
\[
\hspace{-1em}\dot \zeta (t) = \left( {{I_{N - 1}} \otimes A - \tilde U_{\sigma (t)}^T{L_{\sigma (t),w}}{{\tilde U}_{\sigma (t)}} \otimes B{K_u}} \right)\zeta (t)\vspace{-12pt}
\]
\begin{eqnarray}\label{24}
\hspace{-5em}+ \left( {\tilde U_{\sigma (t)}^T \otimes {I_d}} \right)F(x(t)),
\end{eqnarray}
where subsystem (\ref{24}) describes the disagreement dynamics of multiagent system (\ref{22}). \par
By the Lipschitz condition and the structural feature of the transformation matrix ${\tilde U_{\sigma (t)}} \otimes {I_d}$, the following theorem linearizes the impacts of the nonlinear term $\left( {\tilde U_{\sigma (t)}^T \otimes {I_d}} \right)F(x(t))$ and gives an adaptive guaranteed-performance consensualization criterion in a completely distributed manner; that is, it is not associated with the Laplacian matrices of the interaction topologies in the switching set or their eigenvalues. \par
\begin{thm}\label{theorem2}
For any given $\gamma > 0$, multiagent system (\ref{22}) is adaptively guaranteed-performance consensualizable by protocol (\ref{2}) if there exists a matrix ${P^T} = P > 0$ such that
\[
PA + {A^T}P - P(\gamma B{B^T} - I)P + 2Q + {\mu ^2}I \le 0.
\]
In this case, ${K_u} = {B^T}P$, ${K_w} = PB{B^T}P$ and the guaranteed-performance cost satisfies that ${J^ * } = J_{x(0)}^* + J_{x(t)}^*,$ where
\[
J_{x(0)}^* = {x^T}(0)\left( {\left( {{I_N} - \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T} \right) \otimes P} \right)x(0),
\]
\[
J_{x(t)}^*{\rm{ = }}\gamma \int_0^{ + \infty } \hspace{-2pt}{{x^T}(t)\left( {\left( {{I_N} \hspace{-2pt}- \hspace{-2pt} \frac{1}{N}{{\bf{1}}_N}{\bf{1}}_N^T} \right) \hspace{-2pt} \otimes \hspace{-2pt}PB{B^T}P} \right)x(t)} {\rm{d}}t.
\] \hspace{4pt}
\end{thm}
\begin{proof}
Due to ${\tilde U_{\sigma (t)}}\tilde U_{\sigma (t)}^T = {L_N}$, it can be shown that
\[
\hspace{-9em}{F^T}(x(t))\left( {{{\tilde U}_{\sigma (t)}}\tilde U_{\sigma (t)}^T \otimes {I_d}} \right){\rm{ }}F(x(t)) \vspace{-6pt}
\]
\[
\hspace{6em} = \frac{1}{{2N}}\sum\limits_{i = 1}^N {\sum\limits_{k = 1}^N {{{\left\| {f({x_i}(t)) - f({x_k}(t))} \right\|}^2}} }.
\]
By the Lipschitz condition, one can see that
\[
\frac{1}{{2N}}\hspace{-2pt}\sum\limits_{i = 1}^N \hspace{-3pt}{\sum\limits_{k = 1}^N {{{\left\| \hspace{-1pt}{f({x_i}(t)\hspace{-1pt}) \hspace{-3pt}-\hspace{-3pt} f({x_k}(t)\hspace{-1pt})}\hspace{-1pt} \right\|}^2}} } \hspace{-3pt}\le\hspace{-2pt} \frac{{{\mu ^2}}}{{2N}}\hspace{-2pt}\sum\limits_{i = 1}^N\hspace{-3pt} {\sum\limits_{k = 1}^N {{{\left\| \hspace{-1pt}{{x_i}(t) \hspace{-3pt}- \hspace{-3pt}{x_k}(t)}\hspace{-1pt} \right\|}^2}} }\vspace{0pt}
\]
\[
\hspace{5em} = \hspace{-2pt}{\mu ^2}{x^T}(t)\hspace{-2pt}\left( {{{\tilde U}_{\sigma (t)}}\tilde U_{\sigma (t)}^T\hspace{-2pt} \otimes\hspace{-2pt} {I_d}} \right)\hspace{-2pt}{\rm{ }}x(t) \vspace{-10pt}
\]
\begin{eqnarray}\label{25}
\hspace{2.5em} = {\mu ^2}\sum\limits_{i = 2}^N {\tilde x_i^T(t){{\tilde x}_i}(t)} .
\end{eqnarray}
It can be derived that
\[
\hspace{-4em}2{\zeta ^T}(t)\left( {\tilde U_{\sigma (t)}^T \otimes P} \right)F\left( {x(t)} \right) \le \sum\limits_{i = 2}^N {\tilde x_i^T(t)PP{{\tilde x}_i}(t)} \vspace{-10pt}
\]
\begin{eqnarray}\label{26}
\hspace{6em}+ {F^T}\left( {x(t)} \right)\left( {{{\tilde U}_{\sigma (t)}}\tilde U_{\sigma (t)}^T \otimes {I_d}} \right){\rm{ }}F\left( {x(t)} \right).
\end{eqnarray}
By constructing a Lyapunov function similar to the one in (\ref{10}) with $R$ replaced by $P$, the conclusion of Theorem 2 can be obtained from (\ref{25}) and (\ref{26}).
\end{proof}
For any given gain factor $\varepsilon > 0$ with $P \le \varepsilon I$, the adaptive guaranteed-cost consensualization can be realized by choosing proper $\gamma $ and $P$. According to Theorem 2 and the Schur complement lemma, the following corollary presents an approach to determine the gain matrices of consensus protocols with a given gain factor in terms of LMIs.\par
\begin{col}\label{corollary2}
For any given $\varepsilon > 0$, multiagent system (\ref{22}) is adaptively guaranteed-cost consensualizable by protocol (\ref{2}) if ${\lambda _{\max }}(B{B^T}) \le 1$ and there exist $\gamma > 0$ and ${\tilde P^T} = \tilde P \ge {\varepsilon ^{ - 1}}I$ such that
\[
\hat \Xi = \left[ {\begin{array}{*{20}{c}}
{A\tilde P + \tilde P{A^T} - \gamma B{B^T} + I} & {2\tilde PQ} & {\mu \tilde P} \\
* & { - 2Q} & 0 \\
* & * & { - I} \\
\end{array}} \right] < 0.
\]
In this case, ${K_u} = {B^T}{\tilde P^{ - 1}}$, ${K_w} = {\tilde P^{ - 1}}B{B^T}{\tilde P^{ - 1}}$ and the guaranteed-performance cost satisfies that
\[{
J^ * } = \sum\limits_{i = 2}^N {\left( {\varepsilon {{\left\| {{{\tilde x}_i}(0)} \right\|}^2} + \gamma {\varepsilon ^2}\int_0^{ + \infty } {{{\left\| {{B^T}{{\tilde x}_i}(t)} \right\|}^2}} {\rm{d}}t} \right)} .
\]
\vspace{0pt}
\end{col}
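Analogously to the sketch after Corollary \ref{corollary1}, the $3\times 3$-block LMI of Corollary \ref{corollary2} can be checked numerically. Again, this is only an illustrative sketch: the function name, the margin \texttt{delta}, and the solver choice are assumptions.
\begin{verbatim}
# Sketch (not the authors' code): the 3x3-block LMI of Corollary 2 in CVXPY.
import cvxpy as cp
import numpy as np

def check_corollary2(A, B, Q, mu, eps, delta=1e-6):
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    gamma = cp.Variable(nonneg=True)
    Xi = cp.bmat([
        [A @ P + P @ A.T - gamma * (B @ B.T) + np.eye(n), 2 * P @ Q, mu * P],
        [2 * Q @ P, -2 * Q, np.zeros((n, n))],
        [mu * P, np.zeros((n, n)), -np.eye(n)]])
    constraints = [P >> np.eye(n) / eps,
                   (Xi + Xi.T) / 2 << -delta * np.eye(3 * n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status, P.value, gamma.value
\end{verbatim}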
We adopt two critical approaches to deduce our main conclusions: the variable changing approach and the Riccati inequality approach. It should be pointed out that the variable changing approach is an equivalent transformation, so it does not introduce any conservatism. However, since some scaling of the Lyapunov function is involved, the Riccati inequality approach may bring in some conservatism. Actually, the Riccati inequality approach is extensively applied in optimization control and often has less conservatism, as shown in \cite{c31}. Moreover, two key difficulties exist in obtaining the main results of the current paper, shown in Theorems \ref{theorem1} and \ref{theorem2}. The first one is to design a proper Lyapunov function, which can be used to rightward translate the nonzero eigenvalues of the Laplacian matrix. The second one is to determine the relationship between the Laplacian matrix and the linear quadratic index, as given in (\ref{14}) and (\ref{15}).
\begin{rmk}\label{remark4}
Many multiagent systems contain Lipschitz nonlinear dynamics. For example, sinusoidal terms are globally Lipschitz and are usually encountered in cooperative control of multiple robots and cooperative guidance of multiple unmanned vehicles, as shown in \cite{c33} and \cite{c34}. The key difficulties involve two aspects: how to decompose the disagreement components from the whole $F(x(t))$ and how to eliminate the impacts of the time-varying transformation matrix ${\tilde U_{\sigma (t)}} \otimes {I_d}$. Because ${\tilde U_{\sigma (t)}}\tilde U_{\sigma (t)}^T$ is the Laplacian matrix of a complete graph with all edge weights equal to $1/N$, these two key challenges can be dealt with by using this special structural characteristic. Moreover, since the consensus regulation performance and the adaptively adjusted interaction weights are considered, the approaches used to deal with Lipschitz nonlinear dynamics in \cite{c16,c17,c18} are no longer valid.
\end{rmk}
\section{Numerical simulations}\label{section5}
In this section, two simulation examples are provided to demonstrate the theoretical results obtained in the previous sections. \par
\vspace{1em}
\centerline {***** Put Fig. 1 about here *****}
\vspace{1em}
\begin{exm}[Linear cases]
Consider a fourth-order linear multiagent system composed of six agents with switching interaction topologies ${G_1}$, ${G_2}$, ${G_3}$ and ${G_4}$ given in Fig. 1. The dynamics of each agent is described by (\ref{1}) with
\[A \! = \! \left[ {\begin{array}{*{20}{c}}
{{\rm{ - 3}}{\rm{.375}}} & {{\rm{ - 4}}{\rm{.500}}} & {{\rm{ - 4}}{\rm{.125}}} & {{\rm{ - 3}}{\rm{.250}}} \\
{{\rm{1}}{\rm{.625}}} & {{\rm{ - 1}}{\rm{.500}}} & {{\rm{ - 1}}{\rm{.125}}} & {{\rm{1}}{\rm{.250}}} \\
{{\rm{ - 0}}{\rm{.875}}} & {{\rm{0}}{\rm{.500}}} & {{\rm{ - 1}}{\rm{.625}}} & {{\rm{1}}{\rm{.750}}} \\
{{\rm{1}}{\rm{.750}}} & {{\rm{3}}{\rm{.500}}} & {{\rm{2}}{\rm{.750}}} & {{\rm{ - 0}}{\rm{.500}}} \\
\end{array}} \right] \! \! ,B \! = \! \left[ {\begin{array}{*{20}{c}}
0 \\
{1.5} \\
0 \\
0 \\
\end{array}} \right]\! \! .\]
The initial state of each agent is
\[\begin{array}{l}
{x_1}\left( 0 \right) = {\left[ { - 1,0, - 4,5} \right]^T},~~{x_2}\left( 0 \right) = {\left[ {5, - 4, - 8, - 2} \right]^T}, \\
{x_3}\left( 0 \right) = {\left[ { - 7,3, - 4,7} \right]^T},~~{x_4}\left( 0 \right) = {\left[ { - 2,7, - 1, - 5} \right]^T}, \\
{x_5}\left( 0 \right) = {\left[ {4,6, - 1, - 3} \right]^T},~~{x_6}\left( 0 \right) = {\left[ {8,1,5, - 4} \right]^T}. \\
\end{array}\]
The parameters are chosen as $\gamma = 5$ and
\[
Q = \left[ {\begin{array}{*{20}{c}}
{0.10} & {0.02} & {0.01} & 0 \\
{0.02} & {0.10} & {0.01} & {0.02} \\
{0.01} & {0.01} & {0.10} & {0.03} \\
0 & {0.02} & {0.03} & {0.10} \\
\end{array}} \right],
\]
then one can obtain gain matrices according to Theorem \ref{theorem1} as follows
\[
{K_u} = \left[ {\begin{array}{*{20}{c}}
{{\rm{0}}{\rm{.2}}653} & {{\rm{1}}{\rm{.0549}}} & {0.7878} & {0.6790} \\
\end{array}} \right],
\]
\[
{K_w} = \left[ {\begin{array}{*{20}{c}}
{0.0704} & {0.2799} & {0.2090} & {0.1801} \\
{0.2799} & {1.1128} & {0.8311} & {0.7163} \\
{0.2090} & {0.8311} & {0.6207} & {0.5349} \\
{0.1801} & {0.7163} & {0.5349} & {0.4610} \\
\end{array}} \right].
\]
In this case, the guaranteed-performance cost is ${J^ * } = {\rm{267}}{\rm{.9357}}$. \par
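A reader who wishes to verify these gains can, in principle, do so with the following sketch, which simply feeds the example data into the SciPy-based Riccati routine shown earlier; this is not the authors' code, and solver tolerances may lead to small numerical differences from the values reported above.
\begin{verbatim}
# Hypothetical cross-check of Example 1 (values may differ with tolerances).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-3.375, -4.500, -4.125, -3.250],
              [ 1.625, -1.500, -1.125,  1.250],
              [-0.875,  0.500, -1.625,  1.750],
              [ 1.750,  3.500,  2.750, -0.500]])
B = np.array([[0.0], [1.5], [0.0], [0.0]])
Q = np.array([[0.10, 0.02, 0.01, 0.00],
              [0.02, 0.10, 0.01, 0.02],
              [0.01, 0.01, 0.10, 0.03],
              [0.00, 0.02, 0.03, 0.10]])
gamma = 5.0

# Solve R A + A^T R - gamma R B B^T R + 2 Q = 0 via the standard CARE mapping.
R = solve_continuous_are(A, B, 2.0 * Q, np.eye(1) / gamma)
K_u = B.T @ R
K_w = R @ B @ B.T @ R
print("K_u =", np.round(K_u, 4))
print("K_w =", np.round(K_w, 4))
\end{verbatim}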
As illustrated in Fig. 2, ${G_{\sigma (t)}}$ switches randomly among ${G_1}$, ${G_2}$, ${G_3}$ and ${G_4}$ with a switching interval of $0.5$s. Figs. 3 and 4 depict the guaranteed-performance function ${J_{x\left( t \right)}}$ and the trajectories of the states of all agents ${x_i}(t)$ $\left( {i = 1,2, \cdots ,N} \right)$, respectively. One can see that multiagent system (1) achieves adaptive consensus and the guaranteed-performance function ${J_{x\left( t \right)}}$ converges to a finite value with ${J_{x\left( t \right)}} < {J^ * }$. The simulation results illustrate that multiagent system (1) can be adaptively guaranteed-performance consensualized by protocol (2) with the above gain matrices ${K_u}$ and ${K_w}$ obtained by Theorem \ref{theorem1}, without using global information of the interaction topology. In contrast, the distributed consensus control approach in \cite{c35} requires the precise value of the minimum nonzero eigenvalue of the interaction topology; that is, completely distributed control cannot be realized there. Moreover, since the main conclusion of Theorem 1 is completely distributed, the computational complexity does not increase as the number of agents increases.\par
\vspace{1em}
\centerline {***** Put Fig. 2 about here *****}
\vspace{1em}
\vspace{1em}
\centerline {***** Put Fig. 3 about here *****}
\vspace{1em}
\vspace{1em}
\centerline {***** Put Fig. 4 about here *****}
\vspace{1em}
\end{exm}
\begin{exm}[Lipschitz nonlinear cases]
Consider a fourth-order Lipschitz nonlinear multiagent system with six agents, where the dynamics of each agent is described by (\ref{22}) with
\[\begin{array}{l}
A = \left[ {\begin{array}{*{20}{c}}
{{\rm{ - 3}}{\rm{.125}}} & {{\rm{ - 5}}{\rm{.250}}} & {{\rm{ - 4}}{\rm{.625}}} & {{\rm{ - 4}}{\rm{.250}}} \\
{{\rm{0}}{\rm{.875}}} & {{\rm{ - 2}}{\rm{.250}}} & {{\rm{ - 2}}{\rm{.625}}} & {{\rm{ - 0}}{\rm{.750}}} \\
{{\rm{ - 0}}{\rm{.625}}} & {{\rm{1}}{\rm{.750}}} & {{\rm{ - 0}}{\rm{.125}}} & {{\rm{2}}{\rm{.750}}} \\
{{\rm{2}}{\rm{.250}}} & {{\rm{3}}{\rm{.000}}} & {{\rm{2}}{\rm{.750}}} & {{\rm{0}}{\rm{.500}}} \\
\end{array}} \right], \\
B = \left[ {\begin{array}{*{20}{c}}
0 \\
1 \\
0 \\
0 \\
\end{array}} \right],f\left( {{x_i}} \right) = \left[ {\begin{array}{*{20}{c}}
0 \\
0 \\
0 \\
{ - \mu \sin \left( {{x_{i3}}} \right)} \\
\end{array}} \right], \\
\end{array}\]
where ${x_i} = {\left[ {{x_{i1}},{\rm{ }}{x_{i2}},{\rm{ }}{x_{i3}},{\rm{ }}{x_{i4}}} \right]^T}$ $\left( {i = 1,2, \cdots ,6} \right)$ and $\mu = 0.0333$. The initial states of all agents are given as
\[\begin{array}{l}
{x_1}\left( 0 \right) = {\left[ { - 1, - 2, - 3,5} \right]^T},\ \ \; {x_2}\left( 0 \right) = {\left[ { - 0.5,2, - 4,1.6} \right]^T}, \\
{x_3}\left( 0 \right) = {\left[ {6, - 3,2,3} \right]^T},\qquad \ \, {x_4}\left( 0 \right) = {\left[ { - 2.5,2,3, - 5} \right]^T}, \\
{x_5}\left( 0 \right) = {\left[ {1.7, - 9,1.5, - 3} \right]^T},{\kern 1pt} {x_6}\left( 0 \right) = {\left[ { - 1,4, - 2, - 6} \right]^T}. \\
\end{array}\]
Let $\varepsilon = 5$ and
\[
Q = \left[ {\begin{array}{*{20}{c}}
{0.20} & {0.02} & {0.01} & 0 \\
{0.02} & {0.1} & {0.03} & {0.02} \\
{0.01} & {0.03} & {0.2} & {0.03} \\
0 & {0.02} & {0.03} & {0.10} \\
\end{array}} \right],
\]
then one can obtain from Corollary 2 that
\[\gamma = 21.1207,\]
\[
{K_u} = \left[ {\begin{array}{*{20}{c}}
{{\rm{0}}{\rm{.0989}}} & {{\rm{0}}{\rm{.6246}}} & {0.5940} & {0.4970} \\
\end{array}} \right],
\]
\[
{K_w} = \left[ {\begin{array}{*{20}{c}}
{0.0098} & {0.0618} & {0.0587} & {0.0491} \\
{0.0618} & {0.3902} & {0.3710} & {0.3104} \\
{0.0587} & {0.3710} & {0.3529} & {0.2952} \\
{0.0491} & {0.3104} & {0.2952} & {0.2470} \\
\end{array}} \right],
\]
and the guaranteed-performance cost is ${J^ * } = {\rm{4}}{\rm{.1478}} \times {\rm{1}}{{\rm{0}}^4}$. \par
Fig. 5 shows the switching signal $\sigma (t)$; the switching set is again the one given in Fig. 1. Figs. 6 and 7 show the curves of the guaranteed-performance function and the state trajectories of this multiagent system, respectively. It can be found that the given Lipschitz nonlinear multiagent system (\ref{22}) can be adaptively guaranteed-cost consensualized by protocol (\ref{2}) with ${J_{x\left( t \right)}} < {J^*}$.
\vspace{1em}
\centerline {***** Put Fig. 5 about here *****}
\vspace{1em}
\vspace{1em}
\centerline {***** Put Fig. 6 about here *****}
\vspace{1em}
\vspace{1em}
\centerline {***** Put Fig. 7 about here *****}
\vspace{1em}
Furthermore, for the case where $\varepsilon = 10$ and the other parameters are identical, according to Corollary \ref{corollary2}, one can obtain
\[
{K_u} = \left[ {\begin{array}{*{20}{c}}
{{\rm{0}}{\rm{.1043}}} & {{\rm{0}}{\rm{.6793}}} & {0.6524} & {0.5448} \\
\end{array}} \right],
\]
\[
{K_w} = \left[ {\begin{array}{*{20}{c}}
{0.0109} & {0.0708} & {0.0680} & {0.0568} \\
{0.0708} & {0.4615} & {0.4432} & {0.3701} \\
{0.0680} & {0.4432} & {0.4257} & {0.3554} \\
{0.0568} & {0.3701} & {0.3554} & {0.2968} \\
\end{array}} \right],
\]
\[
{J^ * } = {\rm{1}}{\rm{5.953}} \times {\rm{1}}{{\rm{0}}^4}.
\]
It can be seen that the gain matrices ${K_u}$, ${K_w}$ and the guaranteed-performance cost ${J^ * }$ for $\varepsilon = 10$ are larger than the ones for $\varepsilon = 5$; that is, ${K_u}$, ${K_w}$ and ${J^ * }$ grow as $\varepsilon$ increases. Thus, one can obtain different ${K_u}$, ${K_w}$ and ${J^ * }$ to satisfy different requirements by regulating $\varepsilon$ in practical applications.\par
\end{exm}
\section{Conclusions}\label{section6}
A completely distributed guaranteed-performance consensus scheme was proposed to make the consensus control gains independent of the Laplacian matrices of the switching topologies and their eigenvalues. An adaptive guaranteed-performance consensus design criterion for high-order linear multiagent systems with switching topologies was given based on the Riccati inequality, where the impacts of the nonzero eigenvalues of the Laplacian matrices of the switching topologies with adaptively adjusted weights were eliminated by translating them rightward instead of scaling them. Furthermore, by adding constraints on the input matrix of each agent, it was shown that the consensus control gains and the guaranteed-performance cost can be regulated by choosing different translation factors. Moreover, the adaptive guaranteed-performance consensus conclusions for high-order linear multiagent systems were extended to high-order nonlinear ones by the Lipschitz condition and the structural characteristic of the transformation matrix. Future work will investigate the influences of directed topologies, given cost budgets, and time-varying delays on adaptive guaranteed-performance consensus of multiagent systems with jointly connected switching topologies.
\section{Introduction}
\label{one}
The rogue wave solution of the nonlinear Schr\"odinger equation (NLSE) is attractive because of its ability to explain sudden extreme formations in nature, such as oceanic rogue waves, the shaping of light pulses with unusually high intensity, and the creation of localised structures in Bose-Einstein condensates.
After three decades of intense study, the rogue wave solution has been extended to a range of other multidisciplinary fields, from optics \cite{Akhmediev:09:PLA1,onorato2001freak,muller2005rogue,Kibler:10:Nature,moslem2011surface,stenflo2010rogue,Efimov:10:EPJST,Bludov:10:EPJST,Shats:10:PRL,veldes2013electromagnetic,moslem2011dust,tsai2016generation,bludov2009matter} to economics \cite{zhen2010financial}.
In nonlinear optics, the formation of optical rogue waves corresponds to short bursts of light with high intensities that appear in a chaotic optical wave-field. The first demonstration of the existence of an optical rogue wave was reported in the work of Solli et al. \cite{Solli:07:Nature} on fiber supercontinuum generation. Since then, the phenomenon has been studied in various branches of nonlinear optics because of its multiplicity of applications and its impact, such as in optical cavities \cite{montina2009non,residori2012rogue}, mode-locked lasers \cite{lecaplain2012dissipative,soto2011dissipative}, photonic crystal fibers \cite{buccoliero2011midinfrared}, Raman fiber lasers and amplifiers \cite{runge2014raman,finot2009selection} and optical parametric processes \cite{hammani2009emergence}. A comprehensive report on recent progress in research on optical rogue waves in the field of nonlinear optics can be found in \cite{akhmediev2013recent,song2020recent}.
Instability seeded by noise acts as a breeding ground for the emergence of optical rogue waves. It is described by a process called modulation instability (MI), a complex nonlinear process which can exponentially amplify a small noise or disturbance, leading to a drastic change in the system \cite{zakharov2009modulation}, such as the formation of optical rogue waves in an optical fibre.
Despite the strong interest in the nature of optical rogue waves and in how they emerge and influence an optical system, many of their characteristics are not well understood. For instance, during the propagation of an ultrashort optical pulse through an optical fiber, the fibre medium exerts higher-order linear and nonlinear effects. Under these effects, how the spontaneous MI develops, where it leads, and its impact on a particular optical phenomenon are not well understood. One prime example where MI plays the pivotal role is the process of continuous wave supercontinuum generation (CW-SCG).
In the CW-SCG regime, the continuous wave pump can be considered as a higher-order soliton with a very large soliton number. Its evolution is dominated by noise-seeded MI, which leads to its break-up \cite{super}. The disintegration is highly non-trivial and is a multi-stage process. At the beginning of the evolution, the presence of noise among the large number of solitons produces MI, making the higher-order soliton unstable. Immediately before the higher-order soliton collapses, this MI is strong enough to form a wide variety of rogue wave-type substructures. In the final stage, all these rogue waves become a collection of fundamental solitons. However, the physics behind how the initial MI results in the production of many solitons is relatively less studied, and a definitive insight into the process is yet to be realized.
Applying the NLSE, the formation of fundamental and higher-order rogue waves as a result of noise-driven MI on a continuous wave field was already presented in \cite{toenger2015emergent}. In this work, using the generalised nonlinear Schr\"odinger equation (GNLSE), we show that higher-order rogue waves undergo fission, similarly to higher-order soliton fission. The GNLSE is perturbed with third-order dispersion (TOD), self-steepening (SS), and Raman-induced self-frequency shift (RIFS) effects. We solve it numerically, including the TOD, SS, and RIFS individually, and show that each of these effects induces rogue wave fission, breaking the higher-order rogue wave into its constituent parts. Note that while all these effects induce fission in the higher-order rogue waves, as we shall see later, it is the RIFS effect that has the major impact on transforming the disintegrated rogue waves into solitons. Finally, we solve the full GNLSE and demonstrate that these three effects simultaneously induce fission on the higher-order rogue waves, and after a long-time evolution the breakaway rogue wave components transform into fundamental solitons.
The article is organized as follows. In Secs.~(\ref{three}), (\ref{four}), and (\ref{five}), we investigate the effects of TOD, SS, and RIFS on second- and third-order rogue waves, showing that each of them individually induces rogue wave fission. We also demonstrate that under the RIFS effect, higher-order rogue waves generate red-shifted frequency components while decelerating, until eventually transforming into a soliton. In Sec.~(\ref{six}), we investigate the combined influence, showing that after the occurrence of the fission, the evolving rogue wave components create asymmetric spectral profiles producing both red- and blue-shifted frequencies.
\subsection {Model, solutions and techniques}
The GNLSE in its normalized form is
\begin{multline}
\label{gnlse}
i \frac{\partial \psi}{\partial z}-\frac{\beta_2}{2} \frac{\partial^2 \psi}{\partial t^2}+\gamma \,\psi\lvert\psi\rvert^2=\\
i\epsilon_3\, \frac{\partial^3 \psi}{\partial t^3}-is\frac{\partial}{\partial t}(\psi\lvert\psi\rvert^2)+\tau_R \psi \frac{\partial|\psi|^2}{\partial t}\textrm{,}
\end{multline}
where $\psi=\psi(z,t)$ is the complex field envelope, $z$ the evolution variable and $t$ the transverse variable. $\epsilon_3$ is the TOD parameter, while $s$ and $\tau_R$ are the coefficients of SS and RIFS, with their explicit expressions given by
\begin{equation}\label{parameterss}
\epsilon_3 = \frac{\beta_3}{6|\beta_2|\,t_0}, \; s=\frac{1}{\omega_0\,t_0}, \; \tau_R=\frac{T_r}{t_0}
\end{equation}
where $\beta_2$ is the group velocity dispersion (GVD), $\gamma$ the nonlinear strength, $\beta_3$ the coefficient of TOD, $\omega_0$ the carrier angular frequency, $t_0$ the pulse duration, and $T_r$ the Raman time constant \cite{atieh1999measuring}. Since the coefficients of TOD, SS and RIFS in Eq.(\ref{gnlse}) are inversely proportional to the pulse duration $t_0$, they are negligible in the long pulse regime, while they contribute significantly in the ultrashort pulse regime.
With GVD parameter $\beta_2=-1$, nonlinear parameter $\gamma=1$ and $\epsilon_3=s=\tau_R=0$, Eq.~(\ref{gnlse}) reduces to the NLSE, the most basic form of the equation that can be used to model optical pulse propagation in nonlinear dispersive media \cite{agrawal2011nonlinear}. This form of the equation can be solved analytically using the inverse scattering transformation \cite{Zakharov:72:JETP}. In this work, we assume that the TOD, SS and RIFS effects are perturbations to the rogue wave solution of the NLSE. We apply small magnitudes of perturbation and solve Eq.~(\ref{gnlse}) numerically.
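As a rough illustration of such a numerical procedure (and not the implementation of \cite{chowdury2021rogue}), a symmetric split-step scheme treats the dispersive terms in the Fourier domain and the nonlinear, SS and RIFS terms in the time domain. In the sketch below, the grid size, step length, perturbation values and the noisy plane-wave initial condition are placeholders, and the nonlinear sub-step is a simple first-order explicit update.
\begin{verbatim}
# Minimal split-step sketch of the GNLSE in normalized units (illustrative):
# psi_z = -i(beta2/2) psi_tt + i gamma psi|psi|^2
#         + eps3 psi_ttt - s d/dt(psi|psi|^2) - i tauR psi d/dt(|psi|^2)
import numpy as np

def gnlse_step(psi, dz, w, beta2, gamma, eps3, s, tauR):
    # half linear step (GVD + TOD) in the Fourier domain, d/dt -> i w
    L = 1j * (beta2 / 2.0) * w**2 - 1j * eps3 * w**3
    psi = np.fft.ifft(np.exp(L * dz / 2.0) * np.fft.fft(psi))
    # full nonlinear step (Kerr + SS + RIFS), first-order explicit
    def dt(f):                      # spectral time derivative
        return np.fft.ifft(1j * w * np.fft.fft(f))
    I = np.abs(psi)**2
    N = 1j * gamma * psi * I - s * dt(psi * I) - 1j * tauR * psi * dt(I)
    psi = psi + dz * N
    # second half linear step
    return np.fft.ifft(np.exp(L * dz / 2.0) * np.fft.fft(psi))

# Placeholder grid and initial condition (plane wave plus weak noise).
nt, T, dz = 2048, 200.0, 1e-3
t = np.linspace(-T / 2, T / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])
psi = np.ones(nt, dtype=complex) \
      + 1e-3 * (np.random.randn(nt) + 1j * np.random.randn(nt))
for _ in range(int(20.0 / dz)):     # propagate to z = 20 (placeholder)
    psi = gnlse_step(psi, dz, w, beta2=-1.0, gamma=1.0,
                     eps3=0.04, s=0.0, tauR=0.0)
\end{verbatim}
A production solver would typically replace the explicit nonlinear sub-step with a Runge-Kutta interaction-picture method and use the analytic rogue wave profiles discussed next as the initial condition.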
To numerically generate higher-order rogue waves, we adopt the analytic solutions as the initial conditions well before their fully developed stage. Specifically, we used the second- and third-order rogue wave solutions presented in \cite{Akhmediev:09:PRE} (see Eqs.~23 and 26 therein). The analytic solutions in their exact form can be obtained by solving the NLSE through the Darboux transformation technique using a continuous wave, $\psi=\exp\left(iz\right)$, as the seed \cite{Matveev:91:Book}. The main features of a higher-order rogue wave of order $N$ are that it consists of $N (N + 1)/2$ fundamental rogue waves and can reach the maximum amplitude $2N +1$. The phase shift across the peak is $\pi$. The details of the employed numerical techniques can be found in \cite{chowdury2021rogue}.
Apart from the investigation of higher-order rogue wave fission, we also study their impact on the surrounding wave background. In optics or hydrodynamics, a variety of waves can be formed in a turbulent wave field. Recently, the inverse scattering technique (IST) has been successfully applied to classify them. We will leverage this technique to identify particular types of waves from calculated spectral profiles. The details of the technique and its implementation are outlined in \cite{randoux2016inverse,randoux2018nonlinear,bonnefoy2020modulational}.
All simulations throughout this work start from $z=0$, and the initial evolution up to $z=24$, during which no structure develops, is ignored. The figures show only the fraction of the calculated range that contains the rogue waves' structures and interactions.
\section{Effect of TOD}
\label{three}
We first address only the TOD term with $\beta_2=-1, \gamma=1$ and $s=\tau_R=0$ in Eq.~\ref{gnlse}. The modified equation is given by
\begin{equation}
\label{nlse-tod}
i \frac{\partial \psi}{\partial z}+\frac{1}{2} \frac{\partial^2 \psi}{\partial t^2}+\psi\lvert\psi\rvert^2-i\epsilon_3\, \frac{\partial^3 \psi}{\partial t^3}=0\textrm{.}
\end{equation}
Since no analytic solution is known, the effect of the perturbation on the second-order rogue wave is examined numerically by solving Eq.~(\ref{nlse-tod}) for a range of $\beta_3$ values (and therefore TOD parameter $\epsilon_3$, as in Eq.~\ref{parameterss}).
The temporal and phase profiles of the numerically generated second-order rogue wave are shown in Figs.~\ref{todsecond}(a) and \ref{todsecond}(b). With $\beta_3=0$, the maximum height reaches $5$, while there is a $\pi$ phase shift at $z=30$ across the transverse direction.
\begin{figure}[htbp]
\centering
\includegraphics{Fig-1.pdf}
\caption{ (a) Amplitude and (b) phase profiles of the second-order rogue wave with $\beta_3=0$. Panels (c) and (d) ($\beta_3=0.04$) show how the rogue wave breaks apart into a doublet and a first-order rogue wave. Panels (e) and (f) ($\beta_3=0.08$) show the doublet further breaking into two conjoined first-order rogue waves. Panels (g) and (h) are with $\beta_3=0.2$ where each of the rogue waves are well separated.}
\label{todsecond}
\end{figure}
An in-depth analysis of the fission under TOD is described in \cite{chowdury2021rogue}. Here we focus on how an increasing magnitude of the TOD coefficient $\beta_3$ affects the higher-order rogue waves. We simulated a second-order rogue wave with a range of $\beta_3$ values from $0.01$ to $0.2$. Results are presented in Fig.~\ref{todsecond}, showing the onset of the fission process.
When the TOD term is small ($\beta_3=0.04$), we already observe a collapse of the rogue wave. The main structure separates into two substructures, creating a doublet accompanied by one separate first-order rogue wave. The doublet within the white rectangle in Fig.~\ref{todsecond}(c) appears as two conjoined first-order rogue waves with amplitude $3$, while a separate first-order rogue wave appears to the right of the doublet. Corresponding phase profiles are shown in Fig.~\ref{todsecond}(d).
Interestingly, a slightly higher value of $\beta_3=0.08$ leads to the breaking of the doublet, which becomes two independent first-order rogue waves. Including the first-order rogue wave on the right, there are three rogue waves appearing in Fig.~\ref{todsecond}(e). Each rogue wave's amplitude is independently $\approx 3$ and they undergo a $\pi$ phase shift, shown in Fig.~\ref{todsecond}(f). As $\beta_3$ increases, the three rogue waves are pushed further apart. For instance, with $\beta_3=0.2$ the three first-order rogue waves (marked by numbers and presented in Fig.~\ref{todsecond}(g)) are completely separated and appear independently at different positions and times, while undergoing a $\pi$ phase shift, as observed in Fig.~\ref{todsecond}(h).
\begin{figure}[h]
\centering
\includegraphics{Fig-2.pdf}
\caption{(a) Amplitude and (b) phase of a third-order rogue wave with $\beta_3=0$. (c) and (d) show the onset of a separated second-order rogue wave along with a first-order one when $\beta_3=0.04$. Panels (e) and (f) show how the second-order rogue wave is transformed into first-order rogue waves when $\beta_3=0.08$. Finally, (g) and (h) show a cluster of well-separated first-order rogue waves at $\beta_3=0.15$. }
\label{third-fission}
\end{figure}
The same concept can be extended to $N$-th order rogue waves. We apply the perturbation $\beta_3$ to a third-order rogue wave, which, from the exact analytic solution, is known to consist of six fundamental rogue waves. We take the same numerical approach as in the previous sections. To observe the impact of the TOD, we varied $\beta_3$ from $0$ to $0.15$ and show selected cases in Fig.~\ref{third-fission}. Without any perturbation ($\beta_3=0$), the temporal and phase profiles of a third-order rogue wave are presented in Figs.~\ref{third-fission}(a) and \ref{third-fission}(b). The amplitude is $7$ and the $\pi$ phase shift across the maximum height is visible at the centre of the phase profile.
If we perturb the third-order rogue wave weakly ($\beta_3=0.04$, in Fig.~\ref{third-fission}(c)), the rogue wave breaks apart in two, with a transient appearance of a second-order rogue wave accompanied by an isolated first-order rogue wave. The former (highlighted in the figure by a white rectangle) reaches the maximum amplitude of $\approx 5$. The presence of $\beta_3$ makes its appearance highly distorted. The associated phase change is shown in Fig.~\ref{third-fission}(d). As the magnitude of the perturbation increases ($\beta_3=0.08$), the short-lived second-order rogue wave also breaks apart into five premature first-order rogue waves with varying amplitudes, as shown in Fig.~\ref{third-fission}(e). Their phase profiles appear more and more independent, with a phase shift of $\pi$ along the maximum amplitude, as the TOD $\beta_3$ increases (see Fig.~\ref{third-fission}(f)).
This clearly demonstrates that the disintegration of higher-order rogue waves due to a weak TOD occurs via progressive separations following a hierarchical pattern. As such, a second-order rogue wave produces a doublet and a first-order rogue wave. Similarly, a third-order rogue wave breaks into a short-lived second-order rogue wave and a first-order one. This pattern of disintegration is followed by any higher-order rogue wave under the effect of a small $\beta_3$.
Finally, when a relatively strong perturbation ($\beta_3=0.15$) is applied to the third-order rogue wave, the separation distance among the break-away components increases and the whole structure turns into a cluster of six fundamental rogue waves (marked by numbers and shown in the temporal profile in Fig.~\ref{third-fission}(g)). That the break-away first-order rogue waves are completely independent of each other can be seen in the phase profile in Fig.~\ref{third-fission}(h). A similar disintegration scheme applies to all $N$-th order rogue waves.
\begin{figure}[h]
\centering
\includegraphics{Fig-3.pdf}
\caption{(a) and (b) are the spectral profiles of a second- and a third-order rogue wave with $\beta_3=0.20$ and $\beta_3=0.15$, respectively. Fig.~\ref{todsecond}(g) and Fig.~\ref{third-fission}(g) are their time-domain profiles. Each break-away rogue wave is accompanied by dispersive wave emission.}
\label{DS}
\end{figure}
One significant consequence of perturbing the higher-order rogue wave with the TOD effect is dispersive wave (DW) emission. When the TOD effect is strong, each disintegrated rogue wave generates a group of linear waves which is phase matched with the rogue wave itself. In the spectral domain, the frequency components corresponding to these linear waves, or DW, at the maximally compressed points become clear in Figs.~\ref{DS}(a) and \ref{DS}(b), respectively (their corresponding time-domain images are Figs.~\ref{todsecond}(g) and \ref{third-fission}(g)).
For second-order rogue waves (see Fig.~\ref{DS}(a)), each generated fundamental rogue wave produces the broadest spectrum at the maximum compression point, as indicated by the dashed white line. The corresponding rogue waves are marked with numbers within circles. Each of the rogue waves generates DW around $\omega \approx 20$, clearly visible as a shoulder of frequency components.
A similar effect appears in Fig.~\ref{DS}(b), which shows the spectral profile of the rogue wave cluster formed in Fig.~\ref{third-fission}(g). The spectra from rogue waves $1$ and $2$ are superimposed on each other and produce a periodic spectral pattern along the bottom white-dashed line. The generated phase-matched DW from both of these rogue waves creates a shoulder at $\omega \approx 20$. The second lowest dashed white line in Fig.~\ref{DS}(b) shows the overlapping frequency spectrum of rogue waves $3$ and $4$, again forming a strong interference pattern and a shoulder at the edge of the spectrum indicating DW emission. The spectra of rogue waves $5$ and $6$, on the other hand, remain sufficiently separated and do not mutually interfere. Their relative positions mark the end of the cluster, and they generate DW separately (marked by the two top white lines). The phase-matched frequency, $\omega_{\textrm{DW}}$, can be approximated to first order as $\omega_{\textrm{DW}}=3/\beta_3$. The numerically observed radiation matches closely with the earlier reports made in \cite{baronio2020resonant,chowdury2021rogue}.
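For completeness, this first-order estimate can be recovered from the linear dispersion relation of Eq.~(\ref{nlse-tod}); the short derivation below is the standard phase-matching argument and is added here only as an illustration. Substituting a linear wave $\psi \propto \exp\left[i(k z-\omega t)\right]$ into the linear part of Eq.~(\ref{nlse-tod}) gives $k_{\rm lin}(\omega)=-\omega^2/2+\epsilon_3\,\omega^3$. Neglecting the order-one wavenumber of the compressed structure, phase matching requires $k_{\rm lin}(\omega_{\rm DW})\approx 0$, so that
\[
\omega_{\rm DW} \approx \frac{1}{2\epsilon_3}=\frac{3\left|\beta_2\right| t_0}{\beta_3},
\]
which reduces to $3/\beta_3$ for $\left|\beta_2\right|=t_0=1$ and places $\omega_{\rm DW}$ in the range of the spectral shoulders observed above.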
\section{Effect of self-steepening}
\label{four}
By including the SS effect only with the NLSE, the Eq.~\ref{gnlse} becomes
\begin{equation}
\label{gnlse-ss}
i \frac{\partial \psi}{\partial z}-\frac{\beta_2}{2} \frac{\partial^2 \psi}{\partial t^2}+\gamma \,\psi\lvert\psi\rvert^2+is\frac{\partial}{\partial t}(\psi\lvert\psi\rvert^2)=0
\end{equation}
where $s$ is the coefficient of the SS effect. An analytic first-order rogue wave solution of Eq.~\ref{gnlse-ss} is presented in \cite{chen2016chirped} in a complicated form.
\begin{figure}[htbp]
\centering
\includegraphics{Fig-4.pdf}
\caption{(a) Amplitude and (b) phase profiles of the rogue wave from Eq.~\ref{gnlse-ss} with $s=0.2$. It has the maximum amplitude of $3$ with a distorted phase shift of $\pi$ across the peak. Note that this solution forms at $t=z=0$ because of the closed-form nature of the analytic solution.}
\label{steep}
\end{figure}
We re-formulate the solution into a simpler form which is given as:
\begin{equation}
\label{eq9}
\begin{aligned}
\psi_{s}(z,t)&= \left(1-\frac{G+ i H z+8 is \tau}{D_s}\right)e^{ i \left[ z \left(1+\frac{1}{2}s ^2\right)-t s+\Phi \right]}\textrm{,}
\end{aligned}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics{Fig-5.pdf}
\caption{A second-order rogue wave under the influence of the SS effect. (a) Temporal and (b) phase profiles with $s=0.01$, showing a doublet and a separated first-order rogue wave. Similarly, (c) and (d), with $s=0.08$, show the doublet separated into two first-order rogue waves with amplitudes reaching $3$. (e) and (f) show the disintegrated, well-separated first-order rogue waves at the higher magnitude $s=0.2$.}
\label{sssecond}
\end{figure}
where $\tau=t-z s$, $\kappa=1+s^2$ and
\begin{equation*}
\begin{aligned}
D_s&=D+4 i s (2 \tau -t)+4 s \tau (s \tau -2 z)\textrm{,}\\
D&=1+4t^2+4z^2\textrm{,}\\
\Phi &= 2 \tan ^{-1}\left[\frac{4 s (z s -\tau )}{1+4 \kappa \left(z^2+\tau ^2\right)}\right]\textrm{.}\\
\end{aligned}
\end{equation*}
Here, $\beta_2=-1$ and $\gamma=1$, while $s$ can be an arbitrary value. With $s\to0$ it directly reduces to the fundamental rogue wave solution \cite{Kibler:10:Nature}. This solution profile can now be translated to any point on the $z$-$t$ plane following the relations $t=t_1-t_{s}$ and $z= z_1-z_{s}$. The SS effect induces a drift velocity in the development of the rogue wave along with a distorted, time-varying phase profile, presented in Figs.~\ref{steep}(a) and \ref{steep}(b), respectively. This makes the rogue wave appear tilted at its emerging point. Depending on the magnitude of $s$, this tilted position varies. We shall see that the SS effect induces fission in higher-order rogue waves and that the disintegrated first-order components display the same temporal and phase characteristics as in Fig.~\ref{steep}.
Taking the exact analytic NLSE second-order rogue wave solution as the initial condition, we numerically solved Eq.~\ref{gnlse-ss} for a range of $s$ values. We find that in the presence of SS, the second-order rogue wave experiences fission, breaking the structure shown in Fig.~\ref{todsecond}(a) apart into three fundamental rogue waves. Most importantly, we observe that along with a distorted, time-varying phase profile, the SS effect also relocates the disintegrated rogue waves to transversely shifted locations. This indicates that in the numerical simulation the SS effect spontaneously activates the translational parameter $t_s$ that arises in the analytic solution Eq.~\ref{eq9}.
When the SS coefficient $s=0$, the growth of MI for each of these fundamental rogue waves takes place at the same position and time in a synchronized way, resulting in the appearance of a bound-state formation of a second-order rogue wave as shown in Fig.~\ref{todsecond}(a). However, the presence of the SS effect breaks this degeneracy, allowing a space-time varying MI development. Because of the SS effect, while the rogue waves are developing, each of them experiences a translation relatively different from the others in the positive $t$ direction. The intrinsic translation among the disintegrated rogue wave components inhibits the formation of a second-order rogue wave; instead, the three fundamental rogue waves appear at three distinct positions and times.
SS-driven and TOD-driven fission share several similarities, with some notable exceptions. For instance, the breaking directions of the second-order rogue waves for the TOD and SS effects are opposite to each other when the corresponding coefficients are both positive. As seen from Fig.~\ref{todsecond}(c), for the TOD case the doublet appears on the left, whereas with the SS effect with $s=0.03$, it appears to the right at a $t$-translated location, as shown in Fig.~\ref{sssecond}(a). Using a relatively strong SS effect ($s=0.06$), the SS effect continuously shifts the component rogue waves during their development. As a result, the conjoined doublet on the right in Fig.~\ref{sssecond}(a) is now broken apart in Fig.~\ref{sssecond}(c). The separation is also evident in the phase profile in Fig.~\ref{sssecond}(d). Such a translation mechanism is absent in the case of TOD, where the fissioned components emerge at around $\psi(z=30,t=0)$.
With an even stronger value of $s=0.2$, the disintegrated rogue waves are far apart from each other and appear at three distinct positions and times, as shown in the temporal evolution (Fig.~\ref{sssecond}(e)) with a distorted phase profile (Fig.~\ref{sssecond}(f)). The respective positions of the fundamental rogue wave components $1$, $2$ and $3$ are reversed compared to the case of TOD-induced fission. With further increased values of $s$, the transverse translation distance for the appearance of the rogue waves also grows. Note that the phase profile of each of the separated rogue waves closely matches that of the first-order rogue wave shown in Fig.~\ref{steep}(b).
\begin{figure}[h]
\centering
\includegraphics{Fig-6.pdf}
\caption{(a) Temporal and (b) phase profiles of a third-order rogue wave with $s=0.01$, showing the onset of a second-order rogue wave within the white rectangle together with three premature first-order rogue waves. (c) and (d) show that the second-order rogue wave now breaks apart into a doublet within the white rectangle along with a fundamental rogue wave, with $s=0.06$. (e) and (f) demonstrate the complete break-up of the third-order rogue wave, reduced to six well-separated first-order rogue waves at the comparatively strong value $s=0.2$.}
\label{ssteepthird}
\end{figure}
The effect of SS on a third-order rogue wave is similar to that on the second-order one. To observe the process of fission we plot a few examples. Interestingly, with a small value of $s=0.04$, the onset of a second-order rogue wave within the white rectangle is clear, along with a group of three underdeveloped first-order rogue waves on the left (Figs.~\ref{ssteepthird}(a) and \ref{ssteepthird}(b)). The emerging point is slightly translated towards positive $t$. In Figs.~\ref{ssteepthird}(c) and \ref{ssteepthird}(d) ($s=0.04$), this second-order rogue wave is disintegrated into a doublet (encapsulated within a white rectangle) associated with a first-order rogue wave on the left and three fully developed rogue waves to the right.
The breaking of the second-order rogue wave mimics exactly the process demonstrated in Fig.~\ref{sssecond}. However, with a higher value of $s=0.2$, the transient second-order rogue wave shown in Fig.~\ref{ssteepthird}(a) completely breaks apart into the three fundamental rogue waves $2$, $3$ and $5$ in Fig.~\ref{ssteepthird}(e). In total, the six rogue waves in Fig.~\ref{ssteepthird}(e) are well separated and arranged in an asymmetric way, yet located at new, transversely shifted locations. Note that the phase profile of each rogue wave in Fig.~\ref{ssteepthird}(e) is also distorted, similarly to that in Fig.~\ref{steep}(b).
\begin{figure}[h]
\centering
\includegraphics{Fig-7.pdf}
\caption{(a) and (b) are the spectral profiles of Fig.~\ref{sssecond}(e) and Fig.~\ref{ssteepthird}(e), with self-steepening coefficient $s=0.2$ for both. The spectrum corresponding to each rogue wave is indicated by its number.}
\label{ssfreq}
\end{figure}
In the spectral domain, each of the fissioned rogue waves develops an optical chirp, resulting from the time-varying phase development during the evolution under the SS effect. Optical chirp is a process in which the instantaneous frequency of the pulse varies with time \cite{Agrawal:12:Book}. The origin of this instantaneous change comes from the SS term, which in the Fourier domain becomes $\frac{\partial}{\partial t}(\psi\lvert\psi\rvert^2)= -i \omega (\psi\lvert\psi\rvert^2)$ upon replacing the derivative ${\partial}/{\partial t} \to -i \omega$. This results in an instantaneous asymmetric spectral broadening of the rogue wave spectra. In Fig.~\ref{ssfreq}(a), three distinct asymmetric spectra $1$, $2$ and $3$ correspond to the three numbered rogue waves formed in Fig.~\ref{sssecond}(e). Note that because of the time-varying, distorted phase development, there is a spectral discontinuity in the spectral profile, indicated by the white arrow in Fig.~\ref{ssfreq}(a).
Similarly, the spectra in Fig.~\ref{ssfreq}(b) correspond to the six disintegrated rogue waves presented in Fig.~\ref{ssteepthird}(e). In this figure, distinct spectral profiles appear for rogue waves $1$ and $6$ only. The spectral components arising from rogue waves $2$ and $5$ partly interact with those of $3$ and $4$, giving rise to an interfered spectral profile for both rogue waves $2$ and $5$. The spectral components arising from rogue waves $3$ and $4$, however, are mutually interacting and generate an overlapping spectral profile along the white dashed line shown in Fig.~\ref{ssfreq}(b). Similar to the second-order case, a spectral discontinuity also arises here, marked by a white arrow. Note that the DW emission that accompanies the splitting in the TOD case is completely absent in the case of the SS effect.
\section{Effect of RIFS}
\label{five}
The study of the Raman effect on rogue waves has, so far, been done mostly on the first-order rogue wave \cite{ankiewicz2018rogue,ankiewicz2013rogue}. Similar studies on higher-order rogue waves have not been reported before. We use the equation:
\begin{equation}
\label{raman-eq}
i \frac{\partial \psi}{\partial z}-\frac{\beta_2}{2} \frac{\partial^2 \psi}{\partial t^2}+\gamma \,\psi\lvert\psi\rvert^2-\tau_R \psi \frac{\partial|\psi|^2}{\partial t}=0,
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics{Fig-8.pdf}
\caption{ Impact of the Raman effect on a second-order rogue wave. (a) The temporal evolution showing the ejection of a decelerated, pulsating soliton. (b) The spectral domain showing that the Raman gain generates red-shifted spectral components at the expense of the blue spectral frequencies. The magnitude of the Raman coefficient is $\tau_R = 0.008$. (c) The temporal profile of a third-order rogue wave with the Raman effect with $\tau_R=0.005$, showing the ejection of a decelerated soliton. (d) The spectral domain illustrating the red-shifted frequency generation.}
\label{ramanfreq}
\end{center}
\end{figure}
to study the RIFS effect on higher-order rogue waves. Unlike the TOD and SS effects, the Raman term is non-Hamiltonian, meaning that it does not preserve the energy of the system \cite{menyuk1993soliton}, and hence the equation does not admit an analytic solution. As a result, when the rogue wave evolves in $z$ it dissipates energy, thus altering its amplitude and width while shifting its central frequency.
Under a weak RIFS effect, the disintegration of higher-order rogue waves follows similar steps as for the TOD and SS effects. For instance, we applied $\tau_R=0.008$ to a second-order rogue wave and observed its disintegration into a doublet (left) and a first-order rogue wave (right) at the position $\psi(z=30,t=0)$ (Fig.~\ref{ramanfreq}(a)). The separated first-order rogue wave immediately assumes the flight trajectory of a soliton and gradually slows down. Along the path, while it decelerates in the positive $t$ direction, it emits red-shifted frequencies, as presented in Fig.~\ref{ramanfreq}(b). In other words, at this stage, because of the non-Hamiltonian nature of the evolution, the rogue wave is no longer robust; instead, it loses energy by generating red spectral components. As the energy dissipation continues, the rogue wave slows down, assuming a bent trajectory, as shown in Fig.~\ref{ramanfreq}(a).
\begin{figure}[ht]
\begin{center}
\includegraphics{Fig-9.pdf}
\caption{Evolution of the center of mass of a second-order rogue wave. The blue curves are the temporal and red curves are the spectral evolution. As there is no development before $z=30$, the advanced evolution is shown from $z=30$ to $z=40$.}
\label{cmass}
\end{center}
\end{figure}
Similar behaviour is observed for a third-order rogue wave, as shown in Figs.~\ref{ramanfreq}(c) and \ref{ramanfreq}(d). Note that with an applied RIFS effect with $\tau_R=0.005$ on a third-order rogue wave, a transient appearance of a second-order rogue wave is also observed, similarly to the case of the TOD and SS effects (Fig.~\ref{third-fission}(c) and Fig.~\ref{ssteepthird}(a), respectively).
The energy dissipation and the subsequent change in amplitude and frequency shift can be described by the progression of the center of mass of the rogue wave while it is under the influence of the RIFS effect. We define the center of mass of the evolving rogue wave in the temporal ($\nu_0$) and spectral ($\Omega_0$) domains as
\begin{equation}
\nu_0 = \frac{\int_{-\infty}^{\infty}\,t\,|\psi(z,t)|^2 dt}{\int_{-\infty}^{\infty}|\psi(z,t)|^2 dt}, \;\;\Omega_0 = \frac{\int_{-\infty}^{\infty}\,\omega\,|\psi(z,\omega)|^2 d\omega}{\int_{-\infty}^{\infty}|\psi(z,\omega)|^2 d\omega}
\end{equation}
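As an illustration (not the authors' code), both moments can be evaluated directly from a simulated field sampled on the numerical grid; the FFT ordering below is an implementation choice, and since each integral appears in a ratio, plain sums over a uniform grid are sufficient.
\begin{verbatim}
# Sketch: temporal (nu_0) and spectral (Omega_0) centers of mass of |psi|^2.
import numpy as np

def centers_of_mass(psi, t):
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
    It = np.abs(psi)**2
    Iw = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
    nu0 = np.sum(t * It) / np.sum(It)       # temporal center of mass
    Omega0 = np.sum(w * Iw) / np.sum(Iw)    # spectral center of mass
    return nu0, Omega0
\end{verbatim}
Tracking these two quantities at every stored $z$ step produces curves of the kind discussed next.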
\begin{figure}[ht]
\begin{center}
\includegraphics{Fig-10.pdf}
\caption{(a) the fission of a second-order and (b) a third-order rogue wave when under the influence of a strong Raman effect with $\tau_R=0.20$ and $0.25$ respectively.}
\label{rfission}
\end{center}
\end{figure}
Fig.~\ref{cmass} shows how $\nu_0$ and $\Omega_0$ of a pulsating rogue wave evolve for four different values of the coefficient, $\tau_R=0.004$, $0.006$, $0.008$ and $0.01$. From the trajectory of $\nu_0$, it is clear that for higher values of $\tau_R$ the rogue wave advances more slowly, indicating higher energy leakage. This induces a more skewed, bow-shaped trail. Due to the red-shifted frequency emission, the center of mass of the frequency profile shifts towards negative $\Omega_0$. With increased evolution distance, as the rogue wave continues to lose energy, the pulsating rogue wave becomes more compressed with a wider-bandwidth spectral profile. As a result, the centre of mass of the frequency profile shifts further in the negative $\Omega_0$ direction.
Under the RIFS effect, the full disintegration of higher-order rogue waves requires stronger $\tau_R$ values. As shown in Fig.~\ref{rfission}(a), a second-order rogue wave disintegrates into three fundamental rogue waves with $\tau_R=0.20$. With such a strong RIFS value, the separated rogue waves are distorted significantly because of the strong decelerating effect. A similar observation is also made in the case of a third-order rogue wave, shown in Fig.~\ref{rfission}(b), where it disintegrates into six fundamental rogue waves with $\tau_R=0.25$.
\begin{figure}[ht]
\begin{center}
\includegraphics{Fig-11.pdf}
\caption{The top panel is the temporal profile of the evolution of a rogue wave under the influence of the RIFS effect. The rogue wave develops at $z=30$, and a skewed background wave dynamics is visible with a combination of different types of wave entities. In the mid-panel, the separated wave envelope is an instance extracted at $z=43.8$, indicated by the dashed white line. The bottom panel shows the IST spectrum of the various types of wave profiles within boxes. Note that the top and mid panels share the same $t$ axis.}
\label{fig_IST}
\end{center}
\end{figure}
Higher-order rogue waves also influence the neighbouring background when they are under the effect of RIFS. To investigate this, we simulate a second-order rogue wave with $\tau_R=0.004$ for an extended period, shown in the top panel of Fig.~\ref{fig_IST}. Clearly, the continuous wave background is now highly distorted, with a variety of other waves appearing in the neighbourhood of the evolving rogue wave. To classify the types of waves formed, we take an envelope at $z=43.8$, shown with a white dashed line in the top panel. Its corresponding amplitude plot is shown in the mid-panel. We select six representative amplitude profiles from the envelope as samples and employ the IST-spectral analysis \cite{randoux2016inverse}.
The IST reveals a combination of spectral bands in these chosen localised structures, shown within six boxes in the bottom panel. In the mid-panel, the structures within the green-shaded areas $\text{i}$ and $\text{vi}$ have three spectral bands, making them genus-2 solutions, which allows us to classify them as either breathers or rogue waves. Similarly, the localized structures $\text{ii}$, $\text{iii}$ and $\text{iv}$ within the pink-shaded areas present two spectral bands, and therefore can be classified as solitons (which belong to genus-1-type solutions). Finally, the localised structure in the magenta-shaded area is the soliton that is created from the decelerating rogue wave. The corresponding eigenvalues for this structure (box-$\text{v}$) are not centered around the real line. This occurs because the shape of the corresponding soliton is highly asymmetric. These observations indicate that higher-order rogue waves under the influence of the RIFS effect trigger the formation of a series of other waves around them, such as breathers and solitons.
\section{Combined effect}
\label{six}
We now address the case where all three perturbations, TOD, SS and RIFS, are present. We find that the process of rogue wave fission persists. Fig.~\ref{final}(a) shows the temporal evolution of a second-order rogue wave fission. We observe that all the disintegrated components assume the same trajectory towards the positive $t$ direction.
As noted earlier, due to the TOD effect, the orientation of the breakaway first-order rogue waves from a second-order one is opposite to that of the SS effect, as shown in Fig.~\ref{todsecond}(g) and Fig.~\ref{sssecond}(e). This opposite ejection acts as a balance, aligning them when the effects are combined. Though the RIFS-induced fission of the main structure is similar to that of TOD, its leading mechanism is to slow down the emerging rogue waves by leaking energy from them.
\begin{figure}[ht]
\begin{center}
\includegraphics{Fig-12.pdf}
\caption{Combined effects of TOD, SS and RIFS on a second- and a third-order rogue wave. (a) Temporal and (b) spectral evolution of a second-order rogue wave, where the leading (third) disintegrated rogue wave component is decelerating with the simultaneous generation of blue- and red-shifted frequency components. The coefficient values are $\epsilon_3= 0.2$, $s=0.10$ and $\tau_R=0.008$. Similarly, (c) temporal and (d) spectral domain of a disintegrated third-order rogue wave with coefficient values $\epsilon_3= 0.15$, $s=0.10$, and $\tau_R=0.005$. It creates two trails of decelerating rogue waves generating blue- and red-shifted frequency components.}
\label{final}
\end{center}
\end{figure}
Note that the RIFS effect is inversely proportional to the pulse duration $t_0$, and a rogue wave's pulse duration must be short enough for it to come under the active influence of the RIFS effect. In Fig.~\ref{final}(a), components $1$ and $2$ remain stationary because their durations are not within the range affected by the RIFS effect. However, rogue wave $3$ is at the leading position and achieves a comparatively shorter pulse duration, which enables it to come under the active influence of the RIFS effect, resulting in a bent trajectory.
\begin{figure}[ht]
\begin{center}
\includegraphics{Fig-13.pdf}
\caption{(a) Extended evolution of Fig.~\ref{final}(a), showing the decelerating rogue wave from a disintegrated second-order rogue wave transforming into a soliton. (b) Prolonged evolution of Fig.~\ref{final}(c) for a third-order rogue wave, which generates two solitons. In both cases, low-amplitude solitons are also ejected from around the central region, marked by white arrows.}
\label{final2}
\end{center}
\end{figure}
The dynamics become clearer in the spectral domain (see Fig.~\ref{final}(b)). The fixed components $1$ and $2$ do not lose energy; as a result, they do not generate red-shifted frequency components. Nevertheless, due to the presence of the TOD effect, they generate DWs, indicated by the white arrow. On the contrary, rogue wave $3$, with its shorter pulse duration, is under the active influence of the RIFS effect and continues to lose more energy as it evolves further. Along the evolution, the repeated compression stages of rogue wave $3$ create a cascaded emission of DWs. As the rogue pulse becomes highly compressed, it achieves enough spectral bandwidth to pump and create the red-shifted frequency components as well. A steady red-shifted skewed development of frequencies is clearly visible within the frequency range $\omega = 0$ to $-10$, indicated by the long white arrow in Fig.~\ref{final}(b). Note that a spectral discontinuity is observed due to the time-varying phase development related to the spectral chirp caused by the SS effect. It is also responsible for the observed spectral asymmetry.
We have also investigated the combined effects on a third-order rogue wave. TOD effects tend to break the third-order rogue wave apart and arrange its components in a cluster, as shown in Fig.~\ref{third-fission}(g). On the other hand, the SS effect tends to arrange them in a triangular fashion, as shown in Fig.~\ref{ssteepthird}(e). When both of these effects act on a third-order rogue wave simultaneously, together with the RIFS effect, they counterbalance the directional arrangements of the disintegrated rogue wave and force its components to align in a parallel fashion, as shown in Fig.~\ref{final}(c). This is similar to the case of the second-order rogue wave described above. The Raman effect eventually slows down this parallel arrangement in the forward evolution direction, generating both blue- and red-shifted frequency components (indicated by the long white arrow). Notice that the SS-induced spectral discontinuity, indicated by the red arrow, and the DW emission, indicated by the white arrow, are also present.
As the system keeps evolving, the rogue waves eventually transform into a group of fundamental solitons. This is shown in Fig.~\ref{final2}(a) (extended evolution of Fig.~\ref{final}(a)) for a second-order rogue wave. After fission, it triggers a collection of low-amplitude solitons created around the central region (indicated by the white horizontal arrow). The rogue wave itself, now transformed into a fundamental soliton, proceeds along a bow-shaped trajectory. Similarly, in Fig.~\ref{final2}(b), a third-order rogue wave also ejects a small number of solitons in the central region. This time, however, it transforms into two fundamental solitons. In both cases, a breather-type formation emerges near the edge of the evolution field due to MI on an unstable wave background.
\begin{figure}[ht]
\begin{center}
\includegraphics{Fig-14.pdf}
\caption{(a) Temporal and (b) spectral domain presentation of the MI-induced disintegration of an $N=300$ soliton under the influence of the TOD, SS and RIFS effects, with $s=0.1$, $\epsilon_3=0.03$ and $\tau_R=0.001$, respectively. The onset of MI is shown with white arrows, while the spectral discontinuity arising from the SS effect is shown with a red arrow.}
\label{final3}
\end{center}
\end{figure}
It is possible to show that this type of rogue wave formation, fission and subsequent soliton transformation has real-world implications. For example, in CW-SCG it is observed that an initially higher-order soliton goes through MI and the end product is hundreds of solitons \cite{russell2014hollow,travers2011ultrafast}. To show this, we simulated an $N=300$ soliton using Eq.~\ref{gnlse} under the influence of the TOD, SS and RIFS effects. The evolution dynamics in the temporal domain is presented in Fig.~\ref{final3}(a). It shows that, initially, the noise-driven MI leads to the formation of hundreds of rogue wave-type substructures that eventually transform into many solitons. The onset of MI is clearly visible in the spectral domain in Fig.~\ref{final3}(b), with the appearance of side-lobes around $\omega=0$ as evidence of MI development.
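For readers who wish to reproduce this kind of propagation numerically, a minimal split-step Fourier sketch is given below. It is an illustration only and not the code used to produce the figures: the assumed dimensionless equation $A_z = \tfrac{i}{2}A_{tt} + \epsilon_3 A_{ttt} + i|A|^2A - s\,\partial_t(|A|^2A) - i\tau_R A\,\partial_t|A|^2$, its sign conventions, the grid sizes and the noisy $N=3$ initial condition are our own illustrative choices and may differ from Eq.~\ref{gnlse} and the parameters of the figures (the $N=300$ case requires a much finer grid and step size).
\begin{verbatim}
import numpy as np

# Illustrative grid (the N=300 case in the paper needs a much finer grid)
nt, nz, dz = 2048, 4000, 1e-4
t = np.linspace(-20, 20, nt, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])
eps3, s, tau_R = 0.03, 0.1, 0.001      # TOD, self-steepening, Raman

# Assumed dimensionless GNLSE (signs/normalisation are an assumption):
#   A_z = (i/2) A_tt + eps3 A_ttt + i|A|^2 A - s d_t(|A|^2 A) - i tau_R A d_t|A|^2
lin_half = np.exp(-1j * (0.5 * omega**2 + eps3 * omega**3) * dz / 2)

def d_dt(f):
    # spectral derivative with respect to t
    return np.fft.ifft(1j * omega * np.fft.fft(f))

def nonlinear(A):
    I = np.abs(A) ** 2
    return 1j * I * A - s * d_dt(I * A) - 1j * tau_R * A * d_dt(I)

rng = np.random.default_rng(0)
A = 3.0 / np.cosh(t) * (1 + 1e-5 * rng.standard_normal(nt))  # noisy N=3 soliton

snapshots = []
for k in range(nz):
    A = np.fft.ifft(lin_half * np.fft.fft(A))   # half linear step
    A = A + dz * nonlinear(A)                    # nonlinear step (explicit Euler)
    A = np.fft.ifft(lin_half * np.fft.fft(A))   # half linear step
    if k % 100 == 0:
        snapshots.append(np.abs(A) ** 2)         # |A|^2 for the temporal map
# np.fft.fftshift(np.abs(np.fft.fft(A))**2) gives the corresponding spectral profile
\end{verbatim}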
A large number of solitons together can act as a continuous wave background. The presence of noise among the solitons can spontaneously trigger MI, which leads to the development of both fundamental and higher-order rogue waves on it. This type of formation as a result of noise-seeded MI is reported in \cite{toenger2015emergent}. If the formation is a fundamental rogue wave, the concurrent influence of the TOD, SS and RIFS effects transforms it directly into a small number of solitons, as shown in Fig.~\ref{final3}(a). However, if MI contributes to the development of higher-order rogue waves, the combined effects of TOD, SS and RIFS first disintegrate them into a group of fundamental rogue waves, which eventually transform into a bunch of solitons, as shown in Fig.~\ref{final3}(a).
Thus, whether it is a fundamental rogue wave or a higher-order rogue wave that is formed in the initial stage of the MI, under the influence of the TOD, SS and RIFS effects the end product is always a large collection of fundamental solitons.
\section{Conclusion}
Using extensive numerical approaches, we have shown that higher-order rogue waves undergo fission in a system weakly perturbed by the TOD, SS and RIFS effects. This is similar to higher-order soliton fission. Under these effects, employing the second- and third-order rogue wave solutions, we reveal their breaking mechanisms and how they reduce to their constituent parts.
Importantly, we observed that, if the applied perturbation is weak, the higher-order rogue waves reveal their hierarchical pattern: a second-order rogue wave shows that it is made of one doublet and a first-order rogue wave, while a third-order rogue wave reveals that it is built from a second-order rogue wave together with three fundamental rogue waves. Under strong perturbation, however, they completely disintegrate into their constituent parts.
We observe that, with the weak effect of RIFS, a higher-order rogue wave immediately ejects one or more decelerating fundamental rogue waves. Since the Raman effect is a dissipative term, the ejected rogue waves lose energy during their evolution, leading to a red shift in the frequency domain. With all the effects combined, we observe that after fission the higher-order rogue wave triggers a collection of solitons. In the frequency domain this appears as asymmetrical new blue- and red-shifted spectral components.
These new insights may provide a pathway to new kinds of supercontinua, producing frequency components beyond the traditional higher-order soliton based supercontinuum generation. Moreover, these new observations may prove useful for interpreting various other nonlinear phenomena in optics, hydrodynamics and similar systems.
\section*{Acknowledgements}
AC and MB acknowledge Nanyang Technological University, NAP-SUG grant. WC acknowledges funding support from Ministry of Education--Singapore, AcRF Tier 1 RG135/20.
\bibliographystyle{unsrt}
\section{Introduction}
Multipole moments are important tools for the characterization of stationary spacetimes representing the exterior region of neutron stars or black holes. The purpose of this paper is the presentation of an improved, more efficient method for the calculation of the multipole moments of stationary axially symmetric spacetimes, when the moments are expressed in terms of the power series expansion coefficients of the Ernst potential on the axis of rotation. For the case when electromagnetic fields are also allowed, we give the correct expressions for the octupole and higher moments, correcting a mistake in the literature. As an application of the method, we calculate the multipole moments of a charged magnetized generalization of the Kerr and Tomimatsu-Sato solutions.
Multipole moment tensors for asymptotically flat static vacuum spacetimes were introduced in 1970 by Geroch, in a coordinate system independent way \cite{Geroch70}. The generalization of the definition to stationary spacetimes has been given by Hansen \cite{Hansen74}. In the stationary case there are two sets of multipole tensors, the mass moments and the angular momentum (or mass-current) moments, which can be unified into a set of complex valued quantities. Alternative, but equivalent, definitions in terms of specific coordinate systems have been proposed by Thorne \cite{Thorne80,Gursel83}, and also by Simon and Beig \cite{Simon83}. A good review of early results on the topic can be found in \cite{Quevedo90}.
The concrete physical applications of multipole moments in general relativity have been pioneered by the work of Fintan D. Ryan \cite{Ryan95}. The gravitational radiation emitted by a compact object orbiting around a much larger central object can be used to determine the multipole moments of the central body. In the case of extreme-mass-ratio inspirals, one is expected to be able to determine the first few gravitational multipole moments with the proposed space-based gravitational wave detector LISA \cite{Ryan97,LiLovelace08,Barack07,Babak17}.
Since the multipole moments of the Kerr black hole are uniquely determined by the mass and the angular momentum, this should provide a practical way of testing the no-hair theorem \cite{Chrusciel2012,Cardoso16,Cardoso19}.
Multipole moments can also be defined in alternative gravitational theories \cite{Sopuerta09,Pappas15a,Kleihaus14,Pappas15b}.
The multipole moments of so-called bumpy black holes and the gravitational radiation of test bodies orbiting around them have been studied in \cite{Collins04,Glampedakis06,Vigeland10}.
We expect that astrophysical observations will provide a way in the near future to test general relativity in the strong field regime \cite{Gair13,Berti15,Yagi16}.
In addition to gravitational waves, the observation of large compact objects and their multipole moments may be achieved by other methods, such as measuring the motion of stars or pulsars around them, the study of accretion disks, or the observation of black hole shadows by the Event Horizon Telescope \cite{Will08,Broderick14,Suvorov16,Psaltis16}.
Further highly relativistic physical systems where multipole moments are important are neutron stars. In that case the observation of the spacetime structure outside the star is expected to give information about the equation of state of the matter in the interior \cite{Paschalidis17,Maselli20}. The innermost stable circular orbit marks the inner edge of accretion disks. Its properties, in terms of the multipole moments, have been calculated in \cite{Shibata98,SanabriaGomez10,Berti04}. It is possible to find certain universal relations between the multipole moments of a neutron star, establishing a no-hair property for these objects. The three-hair relations determine the higher multipole moments from the mass, angular momentum and quadrupole moment, in an approximately equation of state independent way \cite{Pappas14,Yagi14,Yagi17}. Exact solutions for the vacuum exterior region have played an important role in the establishment of these universal relations \cite{Manko95,Manko00,Manko00b,Teichmuller11,Pachon12,Pappas13,Manko16}.
The above physical applications have been made possible by strict mathematical results establishing the theory of multipole moments for the nonlinear Einstein equations in the stationary case. Multipole moments are defined for any asymptotically flat stationary spacetime, and if two spacetimes have the same multipole moments, then they agree at least in a neighborhood of conformal infinity \cite{Xanthopoulos79,Beig80,Kundu81,Beig81}.
Most astrophysically relevant nonradiating spacetimes are expected to be axially symmetric. In the axisymmetric case the multipole moment tensor of order $n$ can be represented by a single scalar moment, called $P_n$ by Hansen \cite{Hansen74}. These scalar moments can be expressed in terms of the power series expansion coefficients of the Ernst potential on the symmetry axis \cite{Hoenselaers86}. An algorithm for this calculation has been published in 1989 by F
p(y_U|y_L)$ as mentioned in Sec.~\ref{sec:overall_compatibility}.
To overcome this problem, we apply variational inference~\cite{DBLP:journals/ijon/Hogan02}, in which we optimise the Evidence Lower Bound (ELBO) instead of the KL-divergence directly.
As shown in Eq.~\ref{eq:KL} and \ref{eq:elbo}, the KL-divergence can be written as the difference between observed evidence $\log p (y_L)$ and its ELBO.
Thus, minimising the KL-divergence is equivalent to maximising the ELBO, which is computationally simpler as will be seen below.
\vspace{-8pt}
\begin{equation}
\begin{aligned}
\mathrm{KL}(q(y_U) || p(y_U|y_L)) = \log p(y_L)
- \mathrm{ELBO}
\end{aligned}
\label{eq:KL}
\end{equation}
\vspace{-8pt}
\begin{equation}
\mathrm{ELBO} = \mathbb{E}_{q (y_U)} \log p(y_U, y_L) - \mathbb{E}_{q (y_U)} \log q (y_U)
\label{eq:elbo}
\end{equation}
Towards deriving an easy-to-use form of the ELBO, we first notice that $q(y_U)$ can be factorised as $q(y_U) = \prod_{u \in U} q(y_u)$, because the neural EA model infers $y_u$ for $u \in U$ independently.
In addition, we use pseudolikelihood~\cite{besag1975statistical} to approximate the joint probability $p(y_U, y_L)$ for simpler computation, as in Eq.~\ref{eq:pseudolikelihood}.
Here, the computation of the conditional distribution $p(y_u | y_{E \setminus u})$ can be simplified as in Eq.~\ref{eq:cond_p}~\footnote{The derivation can be done by introducing Eq.~\ref{eq:overall_comp} and cancelling the factors $g(y_{F_i})$ irrelevant to $y_u$.}, where $\mathrm{MB}_u = \bigcup_{u \in F_i} F_i $ is called the Markov Blanket of $u$ and actually contains $u$ and its two-hop neighbours.
\begin{equation}
\begin{aligned}
p(y_U, y_L) & = p(y_L) \cdot \prod_{i = 1 \ldots |U|} p(y_{U_i} | y_{U_{1:i-1}}, y_L ) \\
& \approx p(y_L) \cdot \prod_{u \in U} p(y_u | y_{E \setminus u} )
\end{aligned}
\label{eq:pseudolikelihood}
\end{equation}
\begin{equation}
\begin{aligned}
&p(y_u | y_{E \setminus u}) = \frac{p(y_E)}{\sum_{e' \in E'} p(y_u=e', y_{E \setminus u})} \\
& = \frac{ \prod_{F_i | u \in F_i} \exp \left( g(y_{F_i}) \right) }{\sum_{e' \in E'} \prod_{F_i | u \in F_i} \exp \left(g(y_{F_i}|y_u=e') \right) } \doteq p(y_{u} | y_{\mathrm{MB}_u})
\end{aligned}
\label{eq:cond_p}
\end{equation}
\noindent
Then, we can derive an approximation of ELBO shown in Eq.~\ref{eq:elbo_approx} by introducing the factorised $q(y_U)$, Eq.~\ref{eq:pseudolikelihood}, and Eq.~\ref{eq:cond_p} to Eq.~\ref{eq:elbo}.
\begin{equation}
\begin{split}
Q = \log p(y_L) + \sum_{u \in U} \mathbb{E}_{q(y_u)}
\Big[ \mathbb{E}_{q({y_{\mathrm{MB}_u }})}
[ \log p(y_u | y_{\mathrm{MB}_u }) ]
- \log q(y_u) \Big]
\end{split}
\label{eq:elbo_approx}
\end{equation}
Now, our goal becomes finding the optimal $q(y_u)$ that maximises $Q$.
We solve this problem with coordinate ascent~\cite{wright2015coordinate}.
In particular, we update $q(y_u)$ for each $u \in U$ in turn, iteratively. Each time we only update a single (or a block of) $q(y_u)$ with Eq.~\ref{eq:q_star}, which can be derived from $\frac{d Q}{d q(y_u)} = 0$, while keeping the other $q(y_{e \in U \setminus u})$ fixed.
This process continues until $q(y_{e \in U})$ converges.
\begin{equation}
q^*(y_u) \propto \exp \left(\mathbb{E}_{q(y_{ \mathrm{MB}_u })} \log p(y_u | y_{\mathrm{MB}_u}) \right)
\label{eq:q_star}
\end{equation}
Eventually, the original $\{q(y_u) | u \in U\}$ are adjusted to $\{q^*(y_u) | u \in U \}$ for better compatibility.
\vspace{-4pt}
\subsection{Generating Pseudo Mappings}
\label{sec:pseudo}
As mentioned, we can derive $\{q^*(y_u), u \in U \}$ and $\{q^*(y_{u'}), u' \in U' \}$ from both KGs.
Based on them, we explore three types of strategies for generating pseudo mappings.
\subsubsection{UniThr}
In this strategy, we generate the pseudo mappings in a single direction based on a probability threshold.
Suppose we choose $\mathcal{G}$ as the source KG and $\mathcal{G}'$ as the target KG.
For each $u \in U$, we sample $\hat{y}_u = \arg \max_{e' \in E'} q^*(y_u=e')$. If $q^*(\hat{y}_u)$ is greater than a threshold $\alpha$, $(u, \hat{y}_u)$ forms a pseudo mapping.
When using this strategy, we would need to search two hyperparameters -- the choice of source KG and a probability threshold.
\subsubsection{BiThr}
In this strategy, we generate the pseudo mappings in both directions using a probability threshold.
After deriving the two groups of dependency-aware predictions, we use one threshold to filter some predictions as pseudo mappings for each group as in \textit{UniThr}.
The obtained two sets of pseudo mappings are merged directly.
Compared with \textit{UniThr}, \textit{BiThr} only has a threshold as its hyperparameter.
\subsubsection{MutHighestProb}
As in \textit{BiThr}, we exploit both $\{q^*(y_u), u\in U\}$ and $\{q^*(y_{u'}), u' \in U'\}$.
If two entities $u$ and $u'$ mutually have the highest probability on each other, i.e. $u' = \arg \max q^*(y_u)$ and $u = \arg \max q^*(y_{u'})$, then we select $(u, u')$ as one pseudo mapping.
This strategy has no hyperparameter and thus would be easy to use in practice.
In our framework, the strategy \textit{MutHighestProb} is used by default, while the others are explored for comparison.
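As a concrete illustration, a minimal sketch of the \textit{MutHighestProb} strategy is given below. It assumes that the dependency-aware distributions have already been reduced, for each unlabelled entity, to its most likely counterpart; the dictionary-based data structures are illustrative placeholders rather than part of our released implementation.
\begin{verbatim}
def mut_highest_prob(best_from_g, best_from_g_prime):
    """Generate pseudo mappings by mutual highest probability.

    best_from_g:       dict u  -> argmax_{e'} q*(y_u = e'),  for u  in U
    best_from_g_prime: dict u' -> argmax_{e}  q*(y_u' = e),  for u' in U'
    Returns the set of pseudo mappings (u, u').
    """
    pseudo = set()
    for u, u_prime in best_from_g.items():
        # keep (u, u') only if u' also points back to u
        if best_from_g_prime.get(u_prime) == u:
            pseudo.add((u, u_prime))
    return pseudo
\end{verbatim}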
\subsection{Implementation}
\begin{algorithm}[t!]
\caption{The \textsf{STEA}\text{ } Framework}
\label{alg}
Train neural EA model $\Theta$ using labelled data $M^l$ \;
\For{iterations}{
Normalise EA similarity to get $q (y_u)$ \tcp{Eq.~\ref{eq:simtoprob_1}, \ref{eq:simtoprob_2}, \ref{eq:objective_sim2prob}}
\tcp{Use $\mathcal{G}$ as the source KG to derive dependency-aware EA predictions}
Measure local compatibilities at each $e \in E$ ; \tcp{Eq.~\ref{eq:paris_prob}}
Compute $p(y_u | y_{\mathrm{MB}_u})$ for each $u \in U$; \tcp{Eq.~\ref{eq:cond_p}}
Derive $q^*(y_u)$ for each $u \in U$; \tcp{Eq.~\ref{eq:q_star}}
\tcp{Use $\mathcal{G}'$ as the source KG to derive dependency-aware EA predictions similarly}
Generate pseudo-mappings $M^p$; \tcp{MutHighestProb}
Update EA model $\Theta$ with training set $M^l \cup M^p$ \;
}
\end{algorithm}
We take a few methods to simplify the computation.
(1) In Eq.~\ref{eq:q_star}, it is costly to estimate distribution $q^*(y_u)$ because $y_u$'s assignment space $E'$ can be very large.
Instead, we only estimate $q^*(y_u)$ for the top $K$ most likely candidates according to current $q(y_u)$.
(2) Both Eq.~\ref{eq:elbo_approx} and Eq.~\ref{eq:q_star} involve sampling from $q(y_u)$ for estimating the expectation. We only sample one $y_u$, namely $\hat{y}_u = \arg \max_{e' \in E'} q(y_u=e')$, for each $u \in U$.
(3) When computing $q^*(y_U)$ with coordinate ascent, we treat $U$ as a single block and update $q^*(y_U)$ only once.
The whole process of \textsf{STEA}\text{ } and associated equations of each step are described in Alg.~\ref{alg}.
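To make the main loop of Alg.~\ref{alg} more concrete, the sketch below shows one dependency-aware update pass, using the simplifications just described (top-$K$ candidate lists, a single argmax sample in place of the expectation, and one block update of $q^*(y_U)$). The helper \texttt{log\_local\_compat}, standing in for $\sum_{F_i \ni u} g(y_{F_i})$, and the dictionary-based data structures are hypothetical placeholders rather than the actual implementation.
\begin{verbatim}
import numpy as np

def dependency_aware_pass(q, candidates, log_local_compat, labelled_assignment):
    """One block update of q*(y_u) for all unlabelled entities (Eq. q_star).

    q:          dict u -> np.array of probabilities over candidates[u] (from the EA model)
    candidates: dict u -> top-K candidate counterparts e' in E'
    log_local_compat(u, e_prime, assignment): hypothetical helper returning the
        summed log local compatibility of the factors containing u when y_u = e_prime
    labelled_assignment: dict e -> ground-truth counterpart for labelled entities
    """
    # Single-sample approximation of the expectation over q(y_MB_u):
    # fix every unlabelled entity at its current argmax.
    assignment = dict(labelled_assignment)
    for u, probs in q.items():
        assignment[u] = candidates[u][int(np.argmax(probs))]

    q_star = {}
    for u in q:                                   # treat U as one block
        scores = np.array([log_local_compat(u, e_prime, assignment)
                           for e_prime in candidates[u]])
        scores -= scores.max()                    # numerical stability
        unnorm = np.exp(scores)
        q_star[u] = unnorm / unnorm.sum()         # Eq. (q_star) restricted to top-K
    return q_star
\end{verbatim}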
\section{Experimental Settings}
\begin{table}
\centering
\caption{Performance of neural EA models. Only the structural information of KGs is used. 30\% labelled data.}
\vspace{-8pt}
\scalebox{0.92}[0.92]{
\input{sections/tables/baselines.tex}
}
\label{tab:baselines}
\vspace{-10pt}
\end{table}
\subsection{Datasets and Partitions}
We choose five datasets widely used in previous EA research. Each dataset contains two KGs and a set of pre-aligned entity mappings.
Three datasets are from \textit{DBP15K}~\cite{DBLP:conf/semweb/SunHL17}, which are cross-lingual and built from different language versions of DBpedia: French-English (\textit{fr\_en}), Chinese-English (\textit{zh\_en}), and Japanese-English (\textit{ja\_en}).
Within each dataset, each KG contains around 20K entities, among which 15K are pre-aligned.
The other two datasets are from \textit{DWY100K}~\cite{DBLP:conf/ijcai/SunHZQ18}, each of which contains two mono-lingual KGs extracted from different sources: \textit{dbp\_yg} extracted from DBpedia and Yago, and \textit{dbp\_wd} extracted from DBpedia and Wikidata.
Within each dataset, each KG contains 100K entities which are all pre-aligned.
Our experimental settings only consider the structural information of KGs and thus will not be affected by attribute-related problems such as name bias in these datasets~\cite{DBLP:conf/emnlp/LiuCPLC20,DBLP:journals/tkde/ZhaoZTWS22}.
Most existing EA works use 30\% of the pre-aligned mappings as training data, which, however, has been pointed out to be unrealistic in practice~\cite{DBLP:conf/coling/ZhangLCCLXZ20}.
In this work, to thoroughly evaluate the self-training methods, we create a few variants of each dataset with different amounts of labelled data -- 1\%, 5\%, 10\%, 20\%, and 30\% of pre-aligned mappings, which are sampled randomly.
Within each dataset, all the remaining mappings except the labelled data form the test dataset.
\subsection{Metrics}
The EA methods typically output a ranked list of candidate counterparts for each entity. Therefore, we choose metrics for measuring the quality of ranking.
Following most existing works, we use \textit{Hit@k (k=1,10)} and \textit{Mean Reciprocal Rank (MRR)} as metrics.
\textit{Hit@k} is the proportion of entities whose ground-truth counterparts rank in top $k$ positions.
MRR assigns each prediction the score $1/Rank(ground\mbox{-}truth\mbox{\hspace{2pt}}counterpart)$ and then averages these scores.
Higher \textit{Hit@k} or MRR indicates better performance. Statistical significance is assessed using a paired two-tailed t-test.
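For clarity, a small sketch of how these ranking metrics can be computed from a model's similarity scores is given below; the array layout (one row of candidate scores per test entity, with the column index of the ground-truth counterpart given separately) is an assumption for illustration, not the layout of any particular EA implementation.
\begin{verbatim}
import numpy as np

def hits_at_k_and_mrr(scores, gt_index, ks=(1, 10)):
    """scores: (n_test, n_candidates) similarities; gt_index: (n_test,) ground-truth column."""
    # rank of the ground truth = 1 + number of candidates scored strictly higher
    gt_scores = scores[np.arange(len(scores)), gt_index]
    ranks = 1 + (scores > gt_scores[:, None]).sum(axis=1)
    hits = {k: float((ranks <= k).mean()) for k in ks}   # Hit@k
    mrr = float((1.0 / ranks).mean())                     # Mean Reciprocal Rank
    return hits, mrr

# Example: hits, mrr = hits_at_k_and_mrr(sim_matrix, gt_cols)
\end{verbatim}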
\begin{table*}[!ht]
\centering
\footnotesize
\vspace{-10pt}
\caption{Overall performance of \textsf{STEA}\text{ } and the baselines when running with Dual-AMN. Self-training methods always achieve much better effectiveness than supervised training; our method \textsf{STEA} outperforms the baselines significantly; all differences between \textsf{STEA} and the other baselines are statistically significant (p < 0.01) except for a few cells marked with $*$.}
\vspace{-8pt}
\scalebox{1}[1]{
\input{sections/tables/overall_dualamn.tex}
}
\label{tab:overall_perf}
\vspace{-8pt}
\end{table*}
\subsection{Baselines}
We select the following three self-training methods as baselines:
\noindent
\textbf{SimThr.}
It is a general strategy to select the most confident predictions as pseudo-labelled data.
Zhu et al.~\cite{DBLP:conf/ijcai/ZhuXLS17} followed this idea and implemented it with weighted loss, which makes the training method coupled with the EA model.
Instead, we implement it in a typical way -- select the predicted mappings with similarities over a certain threshold and add them to the training data. Obviously, no EA-specific measure is taken to improve the quality of pseudo mappings in this strategy.
\noindent
\textbf{OneToOne.}
The work BootEA~\cite{DBLP:conf/ijcai/SunHZQ18} proposed to improve the quality of pseudo-mappings by introducing a one-to-one constraint.
In each iteration, BootEA filtered the candidate mappings with a similarity threshold and derived mappings with Maximum Bipartite Matching under the one-to-one constraint.
The generated pseudo mappings in different iterations are accumulated, while conflicting mappings violating the one-to-one constraint are resolved.
\noindent
\textbf{MutNearest.}
In the work MRAEA~\cite{DBLP:conf/wsdm/MaoWXLW20}, Mao et al. applied a simple strategy:
if and only if the entities $e \in E$ and $e' \in E'$ are mutually nearest neighbours of each other, then the pair $(e,e')$ is considered as a pseudo mapping.
Compared with \textit{SimThr} and \textit{OneToOne}, this strategy does not introduce any hyperparameter.
\vspace{-5pt}
\subsection{Neural EA models}
We apply each self-training method to different EA models to evaluate its effectiveness and generality.
As for our choice of EA models, we select Dual-AMN~\cite{DBLP:conf/www/MaoWWL21}, RREA~\cite{DBLP:conf/cikm/MaoWXWL20}, AliNet~\cite{DBLP:conf/aaai/SunW0CDZQ20}, and GCN-Align~\cite{DBLP:conf/emnlp/WangLLZ18}.
These neural models vary in performance, KG encoders, etc. Among them, Dual-AMN and RREA are SOTA models.
See Table~\ref{tab:baselines} for a performance summary of existing EA models.
\vspace{-5pt}
\subsection{Details for Reproducibility}
\noindent
\textbf{Hyperparameters}.
We search the number of candidate counterparts $K$ over [5,10,15,20,25], and set it to 10 as a trade-off between effectiveness and efficiency.
\noindent
\textbf{Implementation of Baselines and Neural EA Models}.
The baseline \textit{OneToOne} is implemented by referring to the source code of BootEA implemented in OpenEA~\footnote{\url{https://github.com/nju-websoft/OpenEA}}, while the other baselines are implemented by ourselves. The implementation of all baselines is also included in our released source code.
For the neural EA models, Dual-AMN~\footnote{\url{https://github.com/MaoXinn/Dual-AMN}}, RREA~\footnote{\url{https://github.com/MaoXinn/RREA}}, and GCN-Align~\footnote{\url{https://github.com/1049451037/GCN-Align}} are implemented based on their source codes, while AliNet is implemented with OpenEA.
We use the default settings of their hyperparameters in these source codes.
For a fair comparison, all the self-training methods run for the same number of iterations and the same number of epochs within each iteration under a certain experimental setting.
\noindent
\textbf{Configuration of Running Device}.
The experiments on 15K datasets were run on one GPU server, which is configured with an Intel(R) Xeon(R) Gold 6128 3.40GHz CPU, 128GB memory, 3 NVIDIA GeForce GTX 2080Ti GPUs and Ubuntu 20.04 OS.
The experiments on 100K datasets were run on one computing cluster, which runs CentOS 7.8.2003, and allocates us 200GB memory and 2 NVidia Volta V100 SXM2 GPUs.
\section{Conclusion}
Entity Alignment is a primary step of fusing different KGs. Though neural EA models have achieved promising performance, their reliance on labelled data is still an open problem.
To address this issue, a few self-training strategies have been explored and shown effective in boosting the training of EA models.
However, self-training for EA has never been studied systematically.
In addition, the self-training strategies used in existing works have limited impact because of the introduced noise.
In this work, we expand the knowledge about self-training for EA by benchmarking the existing self-training strategies and evaluating them in comparable experimental settings.
Furthermore, towards a more effective self-training strategy for EA, we propose a new self-training framework named \textsf{STEA}. This framework features exploiting the dependencies between entities to detect suspicious mappings and improve EA predictions. Based on the derived dependency-aware EA predictions, we further explore different ways of generating pseudo mappings to be included in self-training.
We empirically show that \textsf{STEA} outperforms the existing methods by a large margin across different datasets, annotation amounts, and neural EA models.
\textsf{STEA} can greatly reduce the reliance of EA on annotations. In particular, \textsf{STEA} using 1\% labelled data can achieve decent effectiveness, which is equivalent to supervised training using 30\% labelled data.
In future, we plan to investigate the combination of active learning and self-training for EA. As shown, in self-training based EA, random data selection is very inefficient in improving the EA performance. With active learning, we aim to reach the desired EA performance with the least annotation effort.
\section{Results and Analysis}
\begin{figure*}[!ht]
\centering
\vspace{-10pt}
\includegraphics[width=17.9cm]{images/generality_ea_model_bar_zhen.pdf}
\vspace{-20pt}
\caption{The performance of self-training methods running with another three EA models: RREA, AliNet, and GCN-Align. The superiority of \textsf{STEA} is generic across different EA models.}
\label{fig:generality_neural_models}
\vspace{-10pt}
\end{figure*}
\subsection{Comparing \textsf{STEA} with the Baselines}
\subsubsection*{Overall performance}
To compare \textsf{STEA} with the baselines, we run these different self-training methods with the SOTA EA model Dual-AMN~\cite{DBLP:conf/www/MaoWWL21} on five datasets using different annotation amounts ranging from 1\% to 30\%.
Table~\ref{tab:overall_perf} reports their results measured with three metrics. Our observations include:
(1) By comparing supervised training with each self-training method, we can see that all the self-training methods bring very significant improvement to the final EA effectiveness under all the settings. Thus, self-training is a better way of training EA models.
(2) OneToOne shows an obvious advantage over SimThr, which reveals that the one-to-one constraint is useful for deriving pseudo mappings of higher quality.
(3) MutNearest outperforms SimThr consistently, while both MutNearest and OneToOne have their own merits in different experimental settings. Note that MutNearest has the advantage of being easier to use in practice.
(4) Comparing \textsf{STEA} with each baseline, we can observe that \textsf{STEA} has an obvious advantage in performance under all the settings. The success of \textsf{STEA} verifies that it is promising to exploit the dependencies between entities, a characteristic of graph data, in devising a self-training framework for EA.
\noindent
To conclude, self-training can always provide much more effective training for the SOTA EA model than supervised training.
Our \textsf{STEA} framework outperforms the existing self-training methods consistently across different datasets and annotation settings.
\subsubsection*{Generality across different EA models}
To check whether these self-training methods work well for different EA models (e.g. in terms of model architecture and performance), we run them with another three EA models: RREA~\cite{DBLP:conf/cikm/MaoWXWL20}, AliNet~\cite{DBLP:conf/aaai/SunW0CDZQ20}, and GCN-Align~\cite{DBLP:conf/emnlp/WangLLZ18}.
Fig.~\ref{fig:generality_neural_models} reports their results on the zh\_en dataset (results on the other datasets show a quite similar trend).
We have the following findings:
(1) The choice of EA model can affect the impact of self-training methods. For example, the existing three self-training methods have a smaller impact on AliNet than on the other EA models.
(2) Our \textsf{STEA} framework can always outperform the existing self-training methods when running with the different EA models under different datasets and annotation settings.
(3) In few-annotation settings (e.g. 1\% and 5\%), the advantage of \textsf{STEA} over the baselines is more obvious than in rich-annotation settings. This is because the baselines depend on the model's confidence and suffer more from the introduced noise when the EA model is poorly trained.
(4) In terms of the final EA effectiveness, the EA models that are SOTA in supervised mode (i.e. Dual-AMN and RREA) also achieve the best performance in self-training mode.
\noindent
In short, the superiority of \textsf{STEA}\text{ } over the baselines is generic across different EA models.
\subsubsection*{Reliance on data annotation}
\begin{figure}
\centering
\includegraphics[width=8.8cm]{images/reliance_on_annotation.pdf}
\vspace{-20pt}
\caption{Performance change of self-training methods w.r.t. annotation amounts. \textsf{STEA} can achieve decent effectiveness in the extremely-few-annotation scenario, while a sharp increase in annotation does not bring quick improvement.}
\vspace{-12pt}
\label{fig:reliance_on_anno}
\end{figure}
As is known, the motivation of self-training is to alleviate the models' reliance on annotations.
Therefore, we specifically examine the effect of the annotation amount on the performance of the self-training methods with Dual-AMN and RREA (two SOTA EA models) on the zh\_en dataset.
In Fig.~\ref{fig:reliance_on_anno}, each curve depicts the performance change of a certain self-training method w.r.t. the increasing amount of annotations.
We can observe that:
(1) Even though the three baselines can always boost supervised training, their performances are still poor when the training data is scarce;
(2) On the contrary, \textsf{STEA} can achieve pretty decent effectiveness. With only 1\% labelled data, \textsf{STEA}\text{ } can achieve comparable performance to supervised training using 30\% labelled data. Similar observations on the other datasets can be obtained from Table~\ref{tab:overall_perf}.
From the perspective of self-training for EA, this finding is inspiring: the value of self-training for EA goes well beyond what has been realised so far. Thus, this direction deserves more exploration.
(3) The performance of \textsf{STEA} is relatively stable, as if reaching a ceiling, even when the annotation amount is increased sharply from 1\% to 30\%. We reckon this phenomenon indicates that random data selection for annotation is not suitable for self-training based EA methods. Active learning is a promising direction for smarter data selection. Though it has been explored by a few recent works for the EA task~\cite{DBLP:conf/emnlp/LiuSZHZ21}, the combination of active learning with self-training remains to be investigated.
\noindent
In conclusion, \textsf{STEA} can greatly alleviate the reliance of EA models on annotation. Facing the performance ceiling, active learning for self-training is a promising direction to be studied.
\subsubsection*{Precision and recall of pseudo mappings.}
\begin{figure}
\centering
\includegraphics[width=8.7cm]{images/pseudo_quality_annotation.pdf}
\vspace{-20pt}
\caption{Precision and recall of pseudo mappings w.r.t. annotation amount. \textsf{STEA} has an obvious advantage in both precision and recall regardless of the annotation amount.}
\vspace{-12pt}
\label{fig:prec_recall_annotation}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.7cm]{images/pseudo_quality_iterations.pdf}
\vspace{-20pt}
\caption{Precision and recall of generated pseudo mappings in each iteration when the annotation amount is 1\%. \textsf{STEA} is able to suppress the noise without sacrificing recall from the beginning iteration.}
\label{fig:prec_recall_iteration}
\vspace{-10pt}
\end{figure}
High precision means there is less noise, while high recall of ground-truth mappings means more true training data are added.
To get more insight into the advantage of \textsf{STEA} , we check these two metrics for different self-training methods.
Fig.~\ref{fig:prec_recall_annotation} shows these metrics of finally generated pseudo mappings by different self-training methods w.r.t. different annotation amounts on zh\_en.
We can observe that \textsf{STEA} has a big advantage in both precision and recall over the baselines regardless of the amount of annotation. It is impressive to see that \textsf{STEA} can still achieve high precision in few-annotation settings thanks to its dependency-aware predictions.
Furthermore, we take a closer look at the generated pseudo mappings in each iteration when the annotation amount is 1\%.
As shown in Fig.~\ref{fig:prec_recall_iteration}, \textsf{STEA} has high precision from the beginning (when the EA model has poor effectiveness), while its recall increases gradually from a low level.
On the contrary, the baselines introduce much noise in the beginning and can hardly improve in the later iterations.
\noindent
Thus, we reckon the success of \textsf{STEA} comes from the fact that exploiting dependencies helps suppress the noise without sacrificing a high recall of ground-truth mappings.
\vspace{-4pt}
\subsection{Detailed Analysis of \textsf{STEA}}
\subsubsection*{Comparing strategies of selecting pseudo mappings within \textsf{STEA} }
\begin{figure}
\centering
\includegraphics[width=8.7cm]{images/comp_uni_bi_mut.pdf}
\vspace{-20pt}
\caption{Comparing different strategies of deriving pseudo mappings in \textsf{STEA} . MutHighestProb can achieve the best performance and is also easiest to use in practice.}
\vspace{-12pt}
\label{fig:uni_bi_mut}
\end{figure}
Based on the dependency-aware probabilities, we explore three types of strategies to generate the pseudo mappings -- UniThr, BiThr, MutHighestProb. Among them, UniThr has two variants which do compatibility checking only from the first KG (denoted as \textit{UniThr (KG1)}) or only from the second KG (denoted as \textit{UniThr (KG2)}).
In addition, we use \textit{Baseline} to denote the best baseline within each annotation setting.
Fig.~\ref{fig:uni_bi_mut} shows their performances on the zh\_en dataset. As can be seen:
(1) The choice of KG for UniThr may have a big effect on the final performance.
(2) BiThr always works better than UniThr (both variants). Meanwhile, it does not require choosing a source KG and thus only needs one hyperparameter to be adjusted -- the threshold.
(3) MutHighestProb can achieve the best performance. In addition, it is the easiest strategy to use in practice since it has no hyperparameter.
(4) In most cases, these strategies outperform the best baseline.
\subsubsection*{Necessity of the normalisation module.}
To normalise the similarities into probabilities, we use one learnable normalisation module based on \textit{softmax} instead of a simple one, e.g. MinMax scaler. To verify the necessity, we replace our normalisation module with a MinMax scaler in \textsf{STEA} .
Table~\ref{tab:normalisation} shows the results on dataset zh\_en. Obviously, the MinMax scaler performs poorly. We think the reason is that the similarity vector normalised by the MinMax scaler only contains very small values and the highest value is not distinguishable from the others. Thus, it is not suitable to be used as a probability distribution.
\subsubsection*{Sensitivity to hyperparameter}
\textsf{STEA} only has one hyperparameter $K$, which means, for each source entity, only the top $K$ counterpart candidates suggested by the neural EA model will be estimated for compatibility to simplify the computation.
In Fig.~\ref{fig:paramK}, we show the performance of \textsf{STEA} w.r.t. different $K$ values under different annotation settings.
(1) \textsf{STEA} is only sensitive to very small values of $K$ ($<5\%$).
(2) Fewer annotations make \textsf{STEA} more sensitive to $K$ with small values ($<5\%$).
(3) We prefer to set $K$ to a small value outside the sensitive interval, such as 10, considering that a larger $K$ leads to higher computation cost.
\begin{figure}
\begin{minipage}{0.25\textwidth}
\includegraphics[width=4.3cm]{images/paramK_curve.pdf}
\vspace{-16pt}
\captionof{figure}{Performance of \textsf{STEA} w.r.t. hyperparameter $K$ under different annotations. \textsf{STEA} is not sensitive to $K$.}
\vspace{-12pt}
\label{fig:paramK}
\end{minipage}%
\hspace{2mm}
\begin{minipage}{0.2\textwidth}
\captionof{table}{Comparison of normalisation methods in \textsf{STEA} . Simple MinMax Scaler leads to very poor effectiveness (Hit@1).}
\label{tab:normalisation}
\scalebox{0.85}[0.85]{
\centering
\begin{tabular}{|c|c|c|}
\hline
Anno. & MinMax & Softmax \\
\hline
1\% & 0.48&0.59\\
5\% & 0.59&0.74\\
10\% & 0.65&0.79\\
20\% & 0.72&0.83\\
30\% & 0.74&0.84\\
\hline
\end{tabular}
}
\end{minipage}
\end{figure}
\section{Related Work}
\subsection*{Neural EA Model}
Neural EA models~\cite{DBLP:journals/pvldb/SunZHWCAL20,DBLP:conf/coling/ZhangLCCLXZ20,DBLP:journals/tkde/ZhaoZTWS22,DBLP:conf/cikm/LiuHZZZ22} emerged with the development of deep learning techniques and have become the mainstream of current EA research.
KG encoders lie at the core of neural EA models.
Various neural architectures have been explored to encode entities.
The initial works tried translation-based models~\cite{DBLP:conf/ijcai/ChenTYZ17,DBLP:conf/ijcai/ZhuXLS17}.
Later, the emergence of Graph Convolutional Network (GCN)~\cite{DBLP:journals/tnn/WuPCLZY21} changed the landscape of neural EA models.
Many GCN-based KG encoders for the EA task were designed and showed significant superiority over the translation-based models~\cite{DBLP:conf/emnlp/WangLLZ18,DBLP:conf/wsdm/MaoWXLW20,DBLP:conf/aaai/SunW0CDZQ20,DBLP:conf/cikm/MaoWXWL20}.
Specifically, the GCN-based EA models can unify the way of encoding structure and attribute information naturally, while it is not easy for the translation-based models to exploit the attributes in KGs.
In addition, the GCN-based methods can achieve state-of-the-art (SOTA) performance by a large margin.
In this work, we choose different GCN-based SOTA EA models to study the training effectiveness of different self-training frameworks on them.
Among previous neural EA works, some only consider the structural information of KGs and are dedicated to modelling the KG structure~\cite{DBLP:conf/ijcai/SunHZQ18,DBLP:conf/aaai/SunW0CDZQ20,DBLP:conf/cikm/MaoWXWL20}, while others focus on exploiting extra information in KGs beyond the structure~\cite{DBLP:conf/ijcai/WuLF0Y019,DBLP:conf/aaai/0001CRC21,DBLP:conf/emnlp/LiuCPLC20}.
The former setting is more challenging and more general since structure is the most basic information in KGs.
We follow this setting in our experiments.
\subsection*{Semi-supervised Learning for EA}
Semi-supervised learning is an approach to machine learning that combines a small amount of labelled data with a large amount of unlabelled data during training.
Self-training is a wrapper method for semi-supervised learning
\cite{DBLP:journals/corr/abs-2202-12040}. In general, the model is initially trained with labelled data. Then, it is used to assign pseudo-labels to the unlabelled training samples; these pseudo-labelled samples enrich the training data, and a new model is trained on them in conjunction with the labelled training set.
In EA research, different self-training strategies have been explored.
Zhu et al.~\cite{DBLP:conf/ijcai/ZhuXLS17} added all predicted mappings into the training data while assigning high-confidence predictions high weights in the training loss.
Sun et al.~\cite{DBLP:conf/ijcai/SunHZQ18} improved the self-training strategy further by imposing a one-to-one constraint in generating pseudo mappings.
Mao et al.~\cite{DBLP:conf/wsdm/MaoWXLW20} proposed a strategy based on the asymmetric nature of alignment directions.
If and only if two entities are mutually nearest neighbours, they form one pseudo mapping.
Compared with the above two methods, this strategy does not introduce any hyperparameter and is very simple to implement.
Apart from self-training, co-training is another semi-supervised method explored to improve the effectiveness of EA models.
Co-training is an extension of self-training in which multiple models are trained on different sets of features and generate labelled examples for one another.
In EA task, Chen et al.~\cite{DBLP:conf/ijcai/ChenTCSZ18} performed co-training of two EA models taking different types of KG information as input.
Xin et al.~\cite{DBLP:conf/aaai/XinSHLHQ022} extended the number of EA models to $k>2$.
Our work focuses on improving self-training for EA by producing better pseudo mappings based on entity dependency in the KG.
\section{Notes of Method}
Table~\ref{tab:symbols} shows the symbols.
\begin{table}
\caption{Symbols}
\begin{tabular}{c|c}
\hline
Symbols & Explanation \\
\hline
$\mathcal{G}, \mathcal{G}'$ & two KGs \\
$E, E'$ & entity sets \\
$L \subset E, U \subset E$ & labelled and unlabelled entity set \\
$M^l = \{ (e \in E, e' \in E') \}$ & labelled mappings \\
$M^p = \{ (e \in E, e' \in E') \}$ & pseudo mappings \\
$y_e \in E'$ & counterpart variable of entity $e \in E$ \\
$\hat{y}_e$ & assigned counterpart of entity $e$ \\
\hline
\end{tabular}
\label{tab:symbols}
\end{table}
For a certain source entity $e$, the neural EA model produces similarities
$$ \{ \mathrm{Sim}_\Theta (e, e') \textrm{ for } e' \in E' \}, $$
which the normalisation module turns into a probability distribution over all possible counterpart candidates
$$ \{ \Pr (y_e = e') \textrm{ for } e' \in E' \}. $$
It first applies a linear transformation
$$ f(e,e') = \omega_1 \cdot \mathrm{Sim}_\Theta (e,e') + \omega_0 $$
followed by a temperature softmax
$$ \Pr_\Omega (y_e=e') = \frac{\exp (f(e,e') / \tau)}{\sum_{e'' \in E'} \exp(f(e,e'') / \tau)}, $$
and its parameters are fitted on the labelled entities with the cross-entropy loss
$$ L_\Omega = - \sum_{e \in L} \log \Pr (y_e = \hat{y}_e). $$
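A minimal numpy sketch of this normalisation module is given below; the optimisation loop (plain gradient descent on $\omega_0$, $\omega_1$, with the loss averaged over the labelled entities), the fixed temperature $\tau$, and the array layout are illustrative simplifications rather than the actual implementation.
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_normalisation(sim, gt_index, tau=0.1, lr=0.1, steps=200):
    """Fit f(e,e') = w1*Sim(e,e') + w0 with a temperature softmax and cross-entropy.

    sim:      (n_labelled, |E'|) similarities for labelled source entities
    gt_index: (n_labelled,) column index of the ground-truth counterpart
    """
    w1, w0 = 1.0, 0.0        # with a full softmax over E', w0 cancels; kept to mirror f(e,e')
    n = sim.shape[0]
    for _ in range(steps):
        p = softmax((w1 * sim + w0) / tau)              # Pr(y_e = e')
        grad_logits = p.copy()                          # gradient of mean CE w.r.t. logits
        grad_logits[np.arange(n), gt_index] -= 1.0
        grad_logits /= n * tau
        w1 -= lr * (grad_logits * sim).sum()            # dL/dw1
        w0 -= lr * grad_logits.sum()                    # dL/dw0 (zero for a full softmax)
    return w1, w0

def normalise(sim, w1, w0, tau=0.1):
    return softmax((w1 * sim + w0) / tau)               # q(y_e) over E'
\end{verbatim}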
\section{Introduction}
The theory of Donaldson--Thomas invariants started around 2000 with the seminal work of R.\ Thomas \cite{Thomas1}. He associated integers to those moduli spaces of sheaves on a compact Calabi--Yau 3-fold which only contain stable sheaves. After some years, K.\ Behrend realized in \cite{Behrend} that these numbers, originally written as ``integrals'' over algebraic cycles or characteristic classes, can also be obtained by an integral over a constructible function, the so-called Behrend function, with respect to the measure given by the Euler characteristic. This new point of view did not only extend the theory to non-compact moduli spaces but revealed also the ``motivic nature'' of this new invariant. It has also been realized that quivers with potential provide another class of examples to which Donaldson--Thomas theory applies. Starting around 2006, D. Joyce \cite{JoyceI},\cite{JoyceCF},\cite{JoyceII},\cite{JoyceIII},\cite{JoyceMF},\cite{JoyceIV} and Y.\ Song \cite{JoyceDT} extended the theory using all kinds of ``motivic functions'' to produce (possibly rational) numbers even in the presence of semistable objects which is the generic situation when classifying objects in abelian categories. Around the same time, M.\ Kontsevich and Y.\ Soibelman \cite{KS1},\cite{KS2},\cite{KS3} independently proposed a theory producing even motives, some sort of refined ``numbers'', instead of simple numbers, also in the presence of semistable objects. The technical difficulties occurring in their approach disappear in the special situation of representations of quivers with potential. The case of zero potential has been intensively studied by M.\ Reineke in a series of papers \cite{Reineke2},\cite{Reineke3},\cite{Reineke4}.
Despite some computations of motivic or even numerical Donaldson--Thomas invariants for quivers with potential (see \cite{BBS},\cite{DaMe1},\cite{DaMe2},\cite{MMNS}), the true nature of Donaldson--Thomas invariants for quivers with potential remained mysterious for quite some time. A full understanding has been obtained recently and is the content of a series of papers \cite{DavisonMeinhardt4},\cite{DavisonMeinhardt3},\cite{MeinhardtReineke},\cite{Meinhardt4}. \\
The present text aims at giving a gentle introduction to Donaldson--Thomas theory in the case of quivers with potential. We have two reasons for our restriction to quivers. Firstly, so-called orientation data will not play any role, and secondly, we do not need to touch derived algebraic geometry. Apart from this, many important ideas and concepts are already visible in the case of quiver representations, and since the theory is fully understood, we believe that this is a good starting point for your journey towards an understanding of Donaldson--Thomas theory. There are more survey articles available focusing on different aspects of the theory (see \cite{JoyceDT},\cite{KS3},\cite{Szendroi}). \\
Let us give a short outline of the paper. The next section starts very elementary by discussing the problem of classifying objects. The objects which are of interest to us form an abelian category, although many ideas of section 2 also apply to ``non-linear'' moduli problems. We study in detail the difficulties arising from the construction of moduli spaces and slowly develop the concept of a (moduli) stack. Although the theory of stacks is rather rich and complicated, we can restrict ourselves to quotient stacks throughout this paper. Hence, a good understanding of quotient stacks is indispensable. We try to illustrate this concept by giving important examples. We should mention that only very little knowledge of algebraic or complex geometry is needed. In many cases, you can easily replace ``schemes'' with ``varieties'' or ``complex manifolds''.\\
Section 3 provides the background on quivers and their representations. The point of view taken here is that quivers are the categorical (noncommutative) analogue of polynomial algebras in ordinary commutative algebra. In other words, they are a useful tool for practical computations when dealing with linear categories, but at the end of the day the result should only depend on the linear category and not its presentation as a quotient of the path category of a quiver by some ideal of relations. The relations important in this paper are given by noncommutative partial derivatives of a so-called potential. \\
The next two sections provide the language and the framework to formulate Donaldson--Thomas theory in section 6. We start in section 4 with the concept of ``motivic theories''. The best example the reader should have in mind is that of constructible functions. It should be clear that constructible functions can be pulled back and multiplied. Using fiberwise integrals with respect to the Euler characteristic, we can even push forward constructible functions. Moreover, every locally closed subscheme/subvariety/submanifold determines a constructible function, namely its characteristic function. In a nutshell, a motivic theory is just a generalization of this, associating to every scheme $X$ an abelian group $R(X)$ of ``functions'' on $X$ which can be pulled back, pushed forward and multiplied. Moreover, to every locally closed subscheme in $X$ there is a ``characteristic function'' in $R(X)$ such that the characteristic function of a disjoint union is the sum of the characteristic functions of its summands. It is this property that makes a theory of generalized functions ``motivic''. As usual in algebraic geometry, the term ``function'' should be used with some care. Every function on, say, a complex variety $X$ determines a usual function from the set of points in $X$ to the coefficient ring $R(\mbox{point})$ of our theory, but this is not a one-to-one correspondence. \\
In section 5 we introduce vanishing cycles. We do not assume that the reader is familiar with any theory of vanishing cycles. As in the previous section, a vanishing cycle is just an additional structure on motivic theories formalizing the properties of ordinary classical vanishing cycles. The Behrend function mentioned at the beginning of this introduction provides a good example of a vanishing cycle on the theory of constructible functions. In fact, we will construct in a functorial way two vanishing cycles associated to a given motivic theory. The first construction is rather stupid, but the second one essentially covers all known nontrivial examples. At the end of sections 4 and 5 we extend motivic theories and vanishing cycles to quotient stacks, as quotient stacks arise naturally in moduli problems. There is a way to circumvent stacks in Donaldson--Thomas theory by considering framed objects, but we believe that the usual approach of using stacks is more conceptual and should be known by anyone who wants to understand Donaldson--Thomas theory seriously.\\
In the last section 6 we finally introduce Donaldson--Thomas functions and invariants. After stating the main results, we consider many examples to illustrate the theory. Finally, we develop some tools used in Donaldson--Thomas theory such as Ringel--Hall algebras, an important integration map and the celebrated wall-crossing formula. \\
The reader will realize shortly that the text contains tons of exercises and examples. Most of the exercises are rather elementary and require some elementary computations and standard arguments. Nevertheless, we suggest doing them carefully in order to get your hands on the subject and to obtain a feeling for the objects involved. There is a lot of material in this text which is not part of the standard graduate courses at universities, and if you are not already familiar with the subject you certainly need some practice, as we cannot provide a deep and lengthy discussion of the material presented here.\\
\textbf{Acknowledgments.} The paper is an expanded version of a couple of lectures the author has given in collaboration with Ben Davison at KIAS in February 2015. He is more than grateful to Michel van Garrel and Bumsig Kim for giving him the opportunity to visit this wonderful place. A lot of work on this paper has also been done at the University of Hong Kong, where the author gave another lecture series on Donaldson--Thomas theory. The author wants to use the opportunity to thank Matt Young for being such a wonderful host. He also wants to thank Jan Manschot for keeping up the pressure to finish this paper and for offering the opportunity to publish the paper. Finally, the author is very grateful to Markus Reineke for giving him as much support as possible.
\section{The problem of constructing a moduli space}
\subsection{Moduli spaces}
Let us start by recalling the general idea of a moduli space. Depending on the situation, mathematicians are trying to classify objects of various types. The general pattern is the following. There is some set (or class) of objects and isomorphisms between two objects. Such a structure is called a groupoid. A groupoid is a category with every morphism being an isomorphism. If the set of objects has cardinality one, a groupoid is just a group. The other extreme is a groupoid such that every morphism is the identity morphism of some object. Such groupoids are in one-to-one correspondence with ordinary sets. Hence, a groupoid interpolates between sets and groups. There are two main sources of groupoids.
\begin{example} \rm Let $X$ be a topological space. The fundamental groupoid $\pi_1(X)$ is the groupoid having the points of $X$ as objects, and given two points $x,y\in X$, the set of morphisms from $x$ to $y$ is the set of homotopy classes of paths from $x$ to $y$. Fixing a base point $x\in X$, the usual fundamental group $\pi_1(X,x)$ is just the automorphism group of $x$ considered as an object in the groupoid $\pi_1(X)$. Denote by $\pi_0(X)$ the set of path connected components, i.e.\ the set of objects in $\pi_1(X)$ up to isomorphism.
\end{example}
\begin{example} \rm
Given a category $\mathcal{C}$, one can consider the subcategory $\mathcal{I} so(\mathcal{C})$ of all isomorphisms in $\mathcal{C}$. Thus, $\mathcal{I} so (\mathcal{C})$ is a groupoid, and $\mathcal{C}/_\sim$ denotes the set of objects in $\mathcal{C}$ up to isomorphism.
\end{example}
These two examples are related as follows. To every (small) category one can construct a topological space $X_\mathcal{C}$ - the classifying space of $\mathcal{C}$ - such that $\pi_1(X_\mathcal{C})\cong \mathcal{I} so (\mathcal{C})$ and $\pi_0(X_\mathcal{C} )=\mathcal{C}/_\sim$. \\
Let us come back to the classification problem, say of objects in $\mathcal{C}$ up to isomorphism. The problem is to describe the set $\mathcal{C}/_\sim$. If it is discrete in a reasonable sense, one tries to find a parameterization by less complicated (discrete) objects. This applies for instance to the classification of semisimple algebraic groups or finite dimensional representations of the latter. In many other situations, $\mathcal{C}/_\sim$ is uncountable, and one wants to put a geometric structure on the set $\mathcal{C}/_\sim$ to obtain a ``moduli space''. However, if for instance $\mathcal{C}/_\sim$ has the cardinality of the field of complex numbers, one can always choose a bijection $\mathcal{C}/_\sim \cong M$ to the set of points of any complex variety or manifold $M$ of dimension greater than zero. Pulling back the geometric structure of $M$ along this bijection, we can equip $\mathcal{C}/_\sim$ in many different (non-isomorphic) ways with a structure of a complex manifold. Hence, we should ask:\\
\textbf{Question:} Is there a natural geometric structure on $\mathcal{C}/_\sim$? What does ``natural'' actually mean?\\
There is a very beautiful idea of what ``natural'' should mean, and which applies to many situations. Assume there is a notion of a family of objects in $\mathcal{C}$ over some ``base'' scheme/variety/(complex) manifold $S$, i.e.\ some object on $S$ which has ``fibers'' over $s\in S$, and these fibers should be objects in $\mathcal{C}$.
\begin{example} \rm
Given a $\mathbb{C}$-algebra $A$, a family of finite dimensional $A$-representations is a (holomorphic) vector bundle $V$ on $S$ and a $\mathbb{C}$-algebra homomorphism $A\to \Gamma(S,\mathcal{E}nd(V))$ from $A$ into the algebra of sections of the endomorphism bundle of $V$.
\end{example}
\begin{example} \rm
Given a scheme/variety/manifold $X$ over $\mathbb{C}$ and some parameter space $S$, a family of coherent sheaves on $X$ parametrized by $S$ is just a coherent sheaf $E$ on $S\times X$ which is flat over $S$. The latter condition ensures that taking fibers and pull-backs of families behaves well. If $E$ is a family of zero dimensional sheaves on $X$, i.e.\ if the projection $p:\Supp(E)\to S$ has zero-dimensional fibers, flatness of $E$ over $S$ is equivalent to the requirement that $E$ is locally free over $S$. Using the coherence of $E$ once more, one can show that $p:\Supp(E)\to S$ is a finite morphism and if $X=\Spec A$ is affine, $E$ is completely determined by the vector bundle $V:=p_\ast E$ on $S$ together with a $\mathbb{C}$-algebra homomorphism $A\to \Gamma(S,\mathcal{E} nd(V))$. From that perspective, the previous example can be seen as a non-commutative version, namely families of zero dimensional sheaves on the non-commutative affine scheme $\Spec A$ for $A$ being a $\mathbb{C}$-algebra.
\end{example}
\begin{example} \rm A $G$-homogeneous space with respect to some (algebraic) group $G$ is a scheme $P$ with a right $G$-action such that $P\cong G$ as varieties with right $G$-action, where $G$ acts on $G$ by right multiplication. A (locally trivial) family of $G$-homogeneous spaces over $S$ is defined as a principal $G$-bundle on $S$.
\end{example}
Once a family is given, by taking the ``fiber'' over $s\in S$ we get an object in $\mathcal{C}$ and, hence, a point in $\mathcal{C}/_\sim$. Varying $s\in S$, we end up with a map $u:S\to \mathcal{C}/_\sim$. Moreover, we see that the pull-back of a family on $S$ along a morphism $f:S'\to S$ induces a morphism $u':S'\to \mathcal{C}/_\sim$ such that $u'=u\circ f$. Coming back to the question formulated above, we can now be more precise by asking: \\
\textbf{Question:} Is there a structure of a variety or scheme on $\mathcal{C}/_\sim$ such that for every family of objects over any $S$, the induced map $S\to \mathcal{C}/_\sim$ is a morphism of schemes? If so, is there any way to get back the family by knowing only the morphism $S\to \mathcal{C}/_\sim$?\\
If the first question has a positive answer, we call $\mathcal{M}=(\mathcal{C}/_\sim,\mbox{scheme structure}) $ a coarse moduli space for $\mathcal{C}$. If the second question also has a positive answer, we should be able to (re)construct a ``universal'' family on $\mathcal{M}$ by considering the map $\id:\mathcal{M}\to \mathcal{M}$. Moreover, given a map $u':S'\to \mathcal{M}$ such that $u'=u\circ f$ for some map $f:S'\to S$, the family on $S'$ should, by uniqueness, be the pull-back of the family on $S$ associated to $u$. As every morphism $u:S\to \mathcal{M}$ has an obvious factorization $S\xrightarrow{u} \mathcal{M}\xrightarrow{\id} \mathcal{M}$, we finally see that every family on $S$ must be the pull-back of the ``universal'' family on $\mathcal{M}$. In such a case, we call $\mathcal{M}$ a fine moduli space.
\begin{example} \rm
Let $\mathcal{C}=\Vect_\mathbb{C}$ be the category of finite dimensional $\mathbb{C}$-vector spaces. A (locally trivial) family of finite dimensional vector spaces is just a vector bundle on some parameter space $S$. As a vector space is classified by its dimension, we can put the simplest scheme structure on $\Vect_\mathbb{C}/_\sim=\mathbb{N}$ by thinking of $\mathbb{N}$ as a disjoint union of countably many copies of $\Spec\mathbb{C}$. Given a vector bundle $V$, we obtain a well-defined morphism $S\to \mathbb{N}$ mapping $s\in S$ to the copy of $\Spec\mathbb{C}$ indexed by the dimension of the fiber $V_s$ of $V$ at $s$. The scheme $\mathbb{N}$ is a coarse moduli space, but apart from the zero dimensional case, it can never be a fine moduli space. Indeed, there is an obvious and essentially unique vector bundle on $\mathbb{N}$ inducing the identity map $\mathbb{N}\to \mathbb{N}$, but a vector bundle on $S$ can never be the pull-back of the one on $\mathbb{N}$ unless it is constant. Thus, $\mathbb{N}$ is not a fine moduli space.
\end{example}
This is also bad news for our previous examples concerning representations of an algebra $A$ or sheaves on a variety $X$. Indeed, for $A=\mathbb{C}$ or $X=\Spec \mathbb{C}$, we are back at the classification problem of finite dimensional $\mathbb{C}$-vector spaces.
\begin{example} \rm
Similar to the previous example, we see that the classification problem for homogeneous $G$-spaces has only a coarse moduli space given by $\Spec\mathbb{C}$.
\end{example}
There are several strategies to overcome the difficulty of constructing a fine moduli space.
\begin{example}[rigid families] \rm
One possibility is to rigidify families of objects. For example, instead of considering all vector bundles we could also restrict ourselves to constant vector bundles. In this particular case, $\mathbb{N}$ is even a fine moduli space. However, in many situations one wants to glue families together to form families of objects on bigger spaces. This is incompatible with the concept of rigidity, and we will not follow this path.
\end{example}
\begin{example}[weaker equivalence] \rm \label{weaker_equivalence}
Instead of classifying objects up to isomorphism, we could allow weaker equivalences. For example, we could identify two families $V^{(1)}$ and $V^{(2)}$ (over $S$) of vector spaces or representations of an algebra $A$ if there is a line bundle $L$ on $S$ such that $V^{(2)}=V^{(1)}\otimes_{\mathcal{O}_S} L$. By doing this, we can always replace a rank one bundle with the trivial rank one bundle. Hence, the moduli space $\Spec \mathbb{C}$ of one-dimensional vector spaces is a fine moduli space.
\end{example}
\begin{example}[projectivization] \rm \label{projectivization}
Similar to families of vector spaces of dimension $r$, one could look at locally trivial families $\mathcal{P}$ of projective spaces $\mathbb{P}^{r-1}$. The transition functions between local trivializations are regular functions with values in $\Aut(\mathbb{P}^{r-1})=\PGl(r)$. Every vector bundle $V$ of rank $r$ provides such a bundle by taking $\mathcal{P}:=\mathbb{P}(V)$, the bundle of lines or hyperplanes in $V$. Two vector bundles $V^{(1)}$, $V^{(2)}$ define isomorphic bundles $\mathbb{P}(V^{(1)})\cong \mathbb{P}(V^{(2)})$ if and only if $V^{(2)}=V^{(1)}\otimes_{\mathcal{O}_S} L$ for some line bundle $L$ on $S$, providing the bridge to the previous example. However, not every $\mathbb{P}^{r-1}$-bundle $\mathcal{P}$ can be realized as $\mathbb{P}(V)$ for some vector bundle $V$ on $S$. Given a $\mathbb{P}^{r-1}$-bundle $\mathcal{P}$, there is an associated locally trivial bundle $\mathcal{E}_{\mathcal{P}}$ of $\mathbb{C}$-algebras isomorphic to $\End_\mathbb{C}(\mathbb{C}^r)\cong\Mat_\mathbb{C}(r,r)$. Conversely, every locally trivial bundle $\mathcal{E}$ of $\mathbb{C}$-algebras isomorphic to $\Mat_\mathbb{C}(r,r)$ defines an associated $\mathbb{P}^{r-1}$-bundle $\mathcal{P}_{\mathcal{E}}$ as the transition functions of $\mathcal{E}$ must be in $\Aut(\Mat_\mathbb{C}(r,r))=\PGl(r)$. Thus, we have an equivalence of categories between locally trivial $\mathbb{P}^{r-1}$-bundles and locally trivial $\Mat_\mathbb{C}(r,r)$-bundles. If the $\mathbb{P}^{r-1}$-bundle $\mathcal{P}$ is given by $\mathbb{P}(V)$ for a vector bundle $V$ of rank $r$, then $\mathcal{E}_{\mathbb{P}(V)}=\mathcal{E} nd(V)$. Given a $\mathbb{C}$-algebra $A$, we can study families given by a locally trivial $\mathbb{P}^{r-1}$-bundle $\mathcal{P}$ or equivalently a locally trivial $\Mat_\mathbb{C}(r,r)$-bundle $\mathcal{E}$ and a homomorphism of algebras $A\to \Gamma(S,\mathcal{E})$. If $A=\mathbb{C}$, there is only a fine moduli space for $r=1$ as every $\mathbb{P}^0$-bundle must be constant. If the algebra $A$ is more complicated, there are also fine moduli spaces for $r>1$, but only for objects which are simple in a suitable sense. For $A=\mathbb{C}$ there are no simple vector spaces of dimension $r>1$.
\end{example}
As we have seen, the construction of fine moduli spaces can only be done in a few cases and under severe restrictions. But even if we were only interested in coarse moduli spaces, a standard problem occurs, as the following example shows.
\begin{example} \rm \label{S_equivalence} Instead of looking at representations of $A=\mathbb{C}$, we enter the next level of complexity by looking at finite dimensional representations of $A=\mathbb{C}[z]$. A one-dimensional representation $V$ is determined by the value of $z$ in $\End_\mathbb{C}(V)\cong \mathbb{C}$. In other words, a coarse moduli space is given by the complex affine line ${\mathbb{A}^1}$. Still, we have to face the problem discussed before that a line bundle on $S$ with $z$ acting by multiplication with a fixed number $c\in \mathbb{C}$ could almost never be the pull-back of a universal family under the constant map $S\to {\mathbb{A}^1}$ mapping $s\in S$ to $c\in {\mathbb{A}^1}$. Let us ignore the problem of finding a fine moduli space and continue with two-dimensional representations. Consider the trivial rank 2 bundle on $S={\mathbb{A}^1}$ with $z$ acting via the nilpotent matrix
\[ \begin{pmatrix} 0 & s \\ 0 & 0 \end{pmatrix} \]
in the fiber over $s\in S={\mathbb{A}^1}$. The representations for $s\not=0$ are all isomorphic to each other, and our ``classifying map'' $u:S\to \mathcal{M}_2$ to a coarse moduli space $\mathcal{M}_2$ of rank 2 representations must be constant on $S\!\setminus\!\!\{0\}$. For $s=0$ we obtain a different representation and $u(0)$ must be another point in $\mathcal{M}_2$ if the latter parametrizes isomorphism types. However, such a discontinuous map $u:S\to \mathcal{M}_2$ cannot exist, and we have to abandon the idea of finding a coarse moduli space parameterizing isomorphism classes. One can show that a ``reasonable'' coarse moduli space is given by the GIT-quotient $\Mat_\mathbb{C}(2,2)/\!\!/\Gl(2)$ which is realized as $\Spec \mathbb{C}[\Mat_{\mathbb{C}}(2,2)]^{\Gl(2)}\cong \AA^2$ and similar for higher ranks. The classifying map $S\to \AA^2$ will map $s\in S$ to the unordered pair of eigenvalues of the $z$-action in the fiber over $s$. Such an unordered pair of eigenvalues is determined by the sum (corresponding to the trace) and the product (corresponding to the determinant) of the eigenvalues and similar for higher ranks. Therefore, $\mathcal{M}$ will parametrize unordered direct sums of one-dimensional representations. In other words, by passing from $\mathcal{C}/_\sim$ to $\mathcal{M}$, we identify each representation with the (unordered) direct sum of its simple Jordan--H\"older factors. Representations having the same Jordan--H\"older factors, i.e.\ corresponding to the same point in $\mathcal{M}$, are often called S-equivalent\footnote{The ``S'' in ``S-equivalent'' refers to semisimple, i.e.\ sums of simples, and should not be confused with our notation of a base of a family.}.
\end{example}
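To make the last point of the example explicit, let $M(s)$ denote the matrix describing the $z$-action in the fiber over $s\in S$ of the family considered above. The classifying map to the GIT-quotient is then
\[ u:S\ni s\longmapsto \big(\Tr M(s),\det M(s)\big)\in \AA^2, \]
and since the characteristic polynomial of $M(s)$ does not depend on $s$, this map is constant, in particular a morphism of varieties. The jump of the isomorphism class at $s=0$ becomes invisible after passing to S-equivalence classes.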
Let us summarize the lessons we have learned in the previous examples:
\begin{enumerate}
\item Constructing coarse moduli spaces only has a chance of success if we do not parametrize objects up to isomorphism but up to the weaker S-equivalence. In other words, classifying objects up to isomorphism is only possible for simple objects, i.e.\ objects without nonzero proper subobjects.
\item The construction of a universal family on the moduli space of simple objects might only work if we identify two families under a weaker equivalence (twist with a line bundle) or pass to some projectivization.
\end{enumerate}
We encourage the reader to check these statements in the previous examples.
\subsection{Stability conditions}
Even though the set of objects in $\mathcal{C}$ up to isomorphism might be very large, the set of (isomorphism classes of) simple objects can be quite small, even finite. Thus, the ``coarse'' moduli space would not deliver much insight into the set of isomorphism types in $\mathcal{C}$. However, there is a simple but clever idea to overcome this problem. Instead of looking at $\mathcal{C}$, we should ``scan'' $\mathcal{C}$ by means of a collection $(\mathcal{C}_\mu)_{\mu\in T}$ of ``small'' full subcategories $\mathcal{C}_\mu\subseteq \mathcal{C}$. An object which might be far away from being simple or semisimple (direct sum of simples) can become semisimple or even simple in $\mathcal{C}_\mu$. By doing this, we can distinguish many S-equivalent objects either because they live in different subcategories or they live in the same subcategory $\mathcal{C}_\mu$ but have different Jordan--H\"older filtrations taken in $\mathcal{C}_\mu$. This brilliant idea is the essence of the concept of stability conditions. The following definition is due to Tom Bridgeland. However, there are more general definitions of stability conditions.
\begin{definition} \hfill
\begin{enumerate}
\item A central charge on a noetherian abelian category $\mathcal{C}$ is a function $Z$ on the set of objects in $\mathcal{C}$ with values in $\mathbb{H}_+:=\{r\exp(\sqrt{-1}\phi)\in \mathbb{C}\mid r\ge 0, \phi\in (0,\pi]\}$ such that $Z(E)=0$ implies $E=0$ and $Z(E)=Z(E')+Z(E'')$ for every short exact sequence $0\to E'\to E\to E''\to 0$. \\
\item Given a central charge $Z$, we call an object $E\in \mathcal{C}$ semistable if
\[ \arg Z(E')\le \arg Z(E) \;\; \mbox{ for all nonzero subobjects } E'\subset E. \]
\item For $\mu\in (-\infty,+\infty]$ we denote with $\mathcal{C}_\mu$ the full subcategory of all semistable objects $E$ of slope $-\cot(\arg Z(E))=\mu$ and the zero object. It turns out that $\mathcal{C}_\mu$ is an abelian subcategory of $\mathcal{C}$ (cf.\ Exercise \ref{semistable_reps}).
\item A simple object in $\mathcal{C}_\mu$ is called stable. We assume that every semistable object of slope $\mu$ has a Jordan--H\"older filtration with stable subquotients of the same slope. Semisimple objects of $\mathcal{C}_\mu$, i.e.\ sums of stable objects of slope $\mu$, are called polystable.
\item Every object $E$ in $\mathcal{C}$ has a unique filtration $0 \subset E_1 \subset E_2 \subset \ldots \subset E_n=E$, the Harder--Narasimhan filtration, with semistable quotients $E_i/E_{i-1}$ of strictly decreasing slopes.
\end{enumerate}
\end{definition}
\begin{example}[The $r$-Kronecker quiver] \label{Kronecker} \rm
Let us illustrate this idea with a simple example. Consider the abelian category of $r$-tuples $\bar{x}$ of linear maps $x_i:V_1\to V_2$ for $1\le i\le r$ between finite dimensional vector spaces $V_1,V_2$. Choosing two complex numbers $\zeta_1,\zeta_2 \in \mathbb{H}_+$, we get a central charge by putting $Z(\bar{x})=\zeta_1\dim V_1 + \zeta_2 \dim V_2$. Assume first that $\arg(\zeta_1)=\arg(\zeta_2)$. Then, all objects are semistable of the same slope $\mu=-\cot( \arg \zeta_1)$, and we have to face the old problems. Choose for instance $\dim V_1=\dim V_2=1$. The isomorphism type of such objects is determined by the choice of $r$ complex numbers $x_1,\ldots, x_r$ up to rescaling by $(g_1,g_2)\in \Gl(V_1)\times \Gl(V_2)=\mathbb{C}^\ast\times \mathbb{C}^\ast$ via $g_2x_ig_1^{-1}$. As the diagonal group $\{(g,g)\mid g\in\mathbb{C}^\ast\}$ acts trivially, we have to take the GIT quotient of $\AA^r$ by $\mathbb{C}^\ast$ which is just a point as $\Spec \mathbb{C}[x_1,\ldots,x_r]^{\mathbb{C}^\ast}=\Spec \mathbb{C}$. This corresponds to the fact that all objects have the same Jordan--H\"older factors $x_i=0:V_1 \to 0$ and $x_i=0:0 \to V_2$. Thus, all objects are S-equivalent to ``$V_1\oplus V_2$''$ = V_1\xrightarrow{0} V_2$. If $\arg \zeta_2 > \arg \zeta_1$, none of our objects with $\dim V_1=\dim V_2=1$ are semistable as the central charge $\zeta_2$ of the subobject $0:0 \to V_2$ has a bigger argument than the central charge $\zeta_1+\zeta_2$ of our given object. If, however, $\arg \zeta_2 < \arg \zeta_1$, all objects except for the semisimple $V_1\oplus V_2$ corresponding to $x_i=0$ for all $1\le i\le r$ are semistable of slope $\mu=- \Re e(\zeta_1+\zeta_2)/\Im m(\zeta_1+\zeta_2)$, and even stable. The moduli space $\mathcal{M}^{\zeta_1,\zeta_2}_{(1,1)}=\AA^r\!\setminus\!\!\{0\}/\mathbb{C}^\ast= \mathbb{P}^{r-1}$ parameterizing isomorphism classes of simple objects in $\mathcal{C}_\mu$ of dimension vector $(\dim V_1,\dim V_2)=(1,1)$ is even a fine moduli space if we identify two families of $r$-tuples of line bundle morphisms $x_i:V_1\to V_2$ on a parameter space $S$ as soon as they become isomorphic after twisting $V_1$ and $V_2$ with some line bundle $L$.
\end{example}
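To see the last case of the example in action, choose for instance $\zeta_1=-1+\sqrt{-1}$ and $\zeta_2=\sqrt{-1}$, so that $\arg \zeta_2=\pi/2 < \arg\zeta_1 =3\pi/4$. An object $\bar{x}\not=0$ with $\dim V_1=\dim V_2=1$ has central charge $Z(\bar{x})=-1+2\sqrt{-1}$ and hence slope
\[ \mu \;=\; -\frac{\Re e(\zeta_1+\zeta_2)}{\Im m(\zeta_1+\zeta_2)} \;=\; \frac{1}{2}, \]
while its only nonzero proper subobject $0\to V_2$ has slope $-\Re e(\zeta_2)/\Im m(\zeta_2)=0<\frac{1}{2}$, confirming that $\bar{x}$ is stable.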
Note that coarse moduli spaces parameterizing S-equivalence classes of objects in $\mathcal{C}_\mu$ might not exist for all central charges, but one can show the existence for generic central charges and reasonable abelian categories. \\
We should also keep in mind that we paid a price for getting a refined version of S-equivalence, namely S-equivalence in subcategories. Indeed, coarse moduli spaces of (S-equivalence classes of) semistable objects can only ``see'' semistable objects but not objects with a non-trivial Harder--Narasimhan filtration. Hence, the construction of (coarse) moduli spaces remains unsatisfying.
\subsection{Moduli stacks}
There is, however, an alternative way to overcome all the problems seen in the previous examples. Following this approach, one can construct a fine moduli ``space'' with a universal family parameterizing all objects - not only simple or stable ones - up to isomorphism. According to the conservation law of mathematical difficulties, we also have to pay a price for getting such a beautiful solution of our moduli problem. It is hidden in the word ``space''. In fact, we have to leave our comfort zone of varieties or schemes and have to dive into the universe of more general spaces known as ``Artin stacks''.\\
Recall that a scheme $X$ is uniquely characterized by its set-valued functor $h_X:S\mapsto \Mor(S,X)$ of points. We have seen many set-valued functors before while studying moduli problems. The general pattern was the following. We considered set-valued contravariant functors $F:S\longmapsto \{ \mbox{families of objects in }\mathcal{C} \}/_\sim$ and a fine moduli space would be a scheme $\mathcal{M}$ such that $F\cong h_\mathcal{M}$, while a coarse moduli space is a scheme $\mathcal{M}$ together with a map $F\to h_\mathcal{M}$ which is universal with respect to all maps $F\to h_X$ of functors. One possibility of generalizing the concept of a space is to consider set-valued functors satisfying properties similar to those of the functor $h_X$. Note that if one has a collection of morphisms $U_i\to X$ defined on open subsets $U_i$ covering $S$ such that the maps agree on overlaps, one can glue the maps to form a global morphism $S\to X$. This sheaf property should also be satisfied by a general set-valued functor to be a reasonable generalization of a scheme. Such set-valued functors are also often called ``spaces''. A generalized space is called algebraic if it can be written as the ``quotient'' $X/_\sim$ of a scheme $X$ by an (\'{e}tale) equivalence relation. In other words, algebraic spaces are not too far away from schemes and many results for schemes can be generalized to algebraic spaces. In our situation of forming moduli spaces, this is still not the right approach to take, but it already points in the right direction. Indeed, the problems arising in the construction of universal families are related to the presence of (non-trivial) automorphisms. Thus, we should take automorphisms and isomorphisms into account more seriously. \\
Recall that a set with isomorphisms between its elements is just a groupoid, as studied at the beginning of this section. Hence, instead of looking at set-valued functors on the category of schemes, we should consider groupoid-valued contravariant functors. These functors should satisfy some gluing property which looks a bit more complicated than in the set-theoretic context. The best way to remember the gluing property is to look at an example which is - as before - the baby example for all Artin stacks.
\begin{example} \rm \label{vector_bundle}
Consider the groupoid-valued functor $\Vect$ which maps any scheme $S$ to the groupoid of vector bundles (the objects) and isomorphisms between them (the morphisms). By pulling back vector bundles along morphisms $f:S'\to S$, we get indeed a contravariant functor.\footnote{Strictly speaking, we only get a pseudofunctor as $g^\ast\circ f^\ast$ is only equivalent to $(f\circ g)^\ast$, but we will ignore this technical problem as one can always resolve it.} Given two vector bundles $V, V'$ and an open cover $\cup_{i\in I}U_i=S$ of $S$ together with isomorphisms\footnote{We will always denote the pull-back along an inclusion $U\hookrightarrow S$ of an open subset by $|_U$.} $\phi_i:V|_{U_i} \to V'|_{U_i}$ on the open subsets $U_i$ such that they agree after restriction to the overlaps, i.e.\ $\phi_i|_{U_{ij}}=\phi_j|_{U_{ij}}$ with $U_{ij}=U_i\cap U_j$, we can
always find a unique global isomorphism $\phi:V\to V'$ such that $\phi_i=\phi|_{U_i}$. On the other hand, if we have vector bundles $V_i$ on $U_i$ and isomorphisms
$\phi_{ij}:V_i|_{U_{ij}} \to V_j|_{U_{ij}}$ such that the only possible composition $V_i|_{U_{ijk}} \to V_j|_{U_{ijk}} \to V_k|_{U_{ijk}} \to V_i|_{U_{ijk}}$ of their
restrictions to the triple overlaps $U_{ijk}=U_i\cap U_j\cap U_k$ is the identity (cocycle condition), one can use the transition isomorphisms $\phi_{ij}$ to glue the $V_i$ together, i.e.\ there is a vector bundle $V$ on $S$ and a family of isomorphisms $\phi_i:V|_{U_i}\to V_i$ such that the only possible composition $V|_{U_{ij}}\to V_i|_{U_{ij}} \to V_j|_{U_{ij}} \to V|_{U_{ij}}$ of their restrictions with $\phi_{ij}$ is the identity. This was the gluing property for isomorphisms and objects, and if we replace the word ``vector bundle'' with ``object'', we get the general form of the gluing property for a groupoid-valued functor.
\end{example}
\begin{definition}
A stack is a groupoid-valued contravariant functor\footnote{Again, we ignore the fact that $g^\ast\circ f^\ast$ might only be equivalent to $(f\circ g)^\ast$ for a pair $S''\xrightarrow{g} S' \xrightarrow{f} S$ of composable morphisms.} on the category of schemes satisfying the gluing property for isomorphisms and objects as seen in Example \ref{vector_bundle}.
\end{definition}
From this perspective, a stack is like a (generalized) space with set-valued functors replaced with groupoid-valued functors.
\begin{exercise} Thinking of a set as a special groupoid with no nontrivial isomorphisms, show that every generalized space is a stack.
\end{exercise}
\begin{exercise} \label{moduli_algebra}
Fix a $\mathbb{C}$-algebra $A$. Show that the functor $A\rep$ associating to every scheme $S$ the groupoid of vector bundles $V$ with algebra homomorphisms $A\to \Gamma(S,\mathcal{E} nd(V))$ (the objects) and isomorphisms of vector bundles compatible with the algebra homomorphisms (the morphisms) is a stack. Prove the same for bundles $\mathcal{E}$ of matrix algebras and algebra homomorphisms $A\to \Gamma(S,\mathcal{E})$ as in Example \ref{projectivization}.
\end{exercise}
\begin{exercise}
Fix a scheme/variety/manifold $X$ over $\mathbb{C}$. Show that the functor $\Coh^X$ associating to every scheme $S$ the groupoid of coherent sheaves $E$ on $S\times X$ flat over $S$ (the objects) and isomorphisms between them (the morphisms) is a stack.
\end{exercise}
\begin{exercise}
Fix an algebraic group $G$. Show that the functor $\Spec \mathbb{C}/G$ associating to every scheme the groupoid of principal $G$-bundles (the objects) and isomorphisms between them (the morphisms) is a stack.
\end{exercise}
\begin{example} \rm
The following example is a generalization of the previous exercise. Fix an algebraic group $G$ and a scheme $X$ with a (right) $G$-action. There is a stack $X/G$ associating to every scheme $S$ the groupoid of pairs $(P\to S, m:P\to X)$, where $P\to S$ is a principal $G$-bundle and $m:P\to X$ is a $G$-equivariant map, with morphisms being given by $G$-bundle isomorphisms $u:P\to P'$ satisfying $m'\circ u =m$. The pull-back along a morphism $f:S'\to S$ is given by $(S'\times_S P \to S', m\circ \pr_P)$. The morphism $m:P\to X$ can also be interpreted as a section of the $X$-bundle $P\times_G X \to S$. The stack $X/G$ is called the quotient stack of $X$ with respect to the $G$-action.
\end{example}
When it comes to locally trivial families, there is some choice involved, namely the choice of the underlying (Grothendieck) topology. Intuitively, one would start with the Zariski topology, but the \'{e}tale or even the smooth topology have their advantages, too. In fact, the quotient stack $X/G$ defined above is usually taken with respect to the smooth or, equivalently, \'{e}tale topology. However, for so-called ``special'' groups $G$ like $\Gl(n)$ we could equivalently take the Zariski topology as every \'{e}tale locally trivial principal $G$-bundle is then already Zariski locally trivial. Notice that $\PGl(d)$ is not special and we had better take the \'{e}tale topology when it comes to principal $\PGl(d)$-bundles and quotient stacks $X/\PGl(d)$.
\begin{definition}
A 1-morphism (or morphism for short) from a stack $F$ to a stack $F'$ is a natural transformation $\eta:F\to F'$, i.e.\ a family of functors $\eta_S:F(S) \to F'(S)$ compatible with pull-backs along $f:S'\to S$ up to equivalence of functors. In other words, the functors $F'(f)\circ \eta_S$ and $\eta_{S'}\circ F(f)$ from $F(S)$ to $F'(S')$ are equivalent. A 2-morphism $\alpha:\eta\to \eta'$ between 1-morphisms is an invertible natural transformation $\alpha_S:\eta_S\to \eta'_S$ for every scheme $S$, compatible with pull-backs. In particular, given two stacks $F,F'$, we get a groupoid of morphisms $\MOR(F,F')$ with 1-morphisms being the objects and 2-morphisms being the morphisms. Hence, the category of stacks is a 2-category.
\end{definition}
Thinking of a set as being a groupoid having only identity morphisms, we can associate to every scheme $X$ a contravariant functor $h_X:S\mapsto \Mor(S,X)$. As we can glue morphisms, $h_X$ is indeed a stack. The following lemma is very important.
\begin{lemma}[Yoneda-Lemma] The covariant functor $h:X\mapsto h_X$ from schemes to stacks provides a full embedding of the category of schemes into the (2-)category of stacks. Moreover, there is an equivalence of groupoids $\MOR(h_X,F)\cong F(X)$ for every scheme $X$ and every stack $F$, natural in $X$ and $F$.
\end{lemma}
\begin{exercise}
Try to prove the Yoneda-Lemma.
\end{exercise}
The lemma is basically saying that the 2-category of stacks is an enlargement of the category of schemes, and we will drop the functor $h$ from notation. Though the definition of a stack looks very abstract, the reader should not think of a stack $F$ as a complicated functor, but rather as some object of a bigger 2-category containing the category of schemes. The groupoid-valued functor associated to $F$ can be (re)constructed by taking $X\mapsto \MOR(X,F)$. In other words, assume that you have a 2-category $\mathcal{C}$ with 2-morphisms being invertible, containing $\Sch_\CC$ as a full subcategory, and such that 1-morphisms starting at schemes and 2-morphisms between such 1-morphisms can be glued in a natural way. To every object $F\in \Obj(\mathcal{C})$ we can associate the groupoid-valued functor $\MOR(-,F)|_{\Sch_\CC}$ on the category of schemes. It satisfies the gluing axioms given above, and, hence, defines a stack. Thus, we get a covariant functor from $\mathcal{C}$ to the category of stacks showing that stacks form some sort of ``natural'' enlargement.
\begin{exercise} \label{stack_morphisms} \hfill
\begin{enumerate}
\item Let $X$ be a scheme with a right action of an algebraic group $G$. Consider the trivial principal $G$-bundle $\pr_X: X\times G\to X$ on $X$ and the $G$-equivariant map $m:X\times G\to X$ given by the group action. According to our definition of a quotient stack, the pair $(X\times G\to X,m)$ defines an element $\rho$ in $X/G(X)$. Check the Yoneda lemma by constructing a morphism $\rho:X\to X/G$, which is called the ``standard atlas'' of $X/G$.
\item Let $X, Y$ be two schemes and let $G, H$ be two algebraic groups acting on $X$ and $Y$, respectively. Assume that $\phi:G\to H$ is a group homomorphism and that $f:X\to Y$ is a morphism of schemes satisfying $f(xg)=f(x)\phi(g)$ for all $g\in G$ and $x\in X$. Construct a natural 1-morphism $\mathfrak{f}:X/G\to Y/H$ of stacks such that
\[ \xymatrix { X\ar[d]^{\rho_X} \ar[r]^f & Y \ar[d]^{\rho_Y} \\ X/G \ar[r]^{\mathfrak{f}} & Y/H} \]
commutes. \textbf{Warning:} Not every morphism $X/G\to Y/H$ is of this form.
\item Consider the special case $H=\{1\}$, and show that $\mathfrak{f}\mapsto f:=\mathfrak{f}\circ \rho_X$ defines an equivalence from $\MOR(X/G,Y)$ to the set $\Mor(X,Y)^G$ of $G$-invariant morphisms, thought of as a groupoid.
\item More generally, given a scheme $Y$ and a stack $F$, show that $\MOR(F,Y)$ is essentially a set, i.e.\ the only 2-morphisms are the identity morphisms.
\end{enumerate}
\end{exercise}
Let us come back to moduli spaces. The moduli problem of classifying $G$-homogeneous spaces $P$ together with $G$-equivariant maps $P\to X$ for some fixed scheme $X$ with an action of an algebraic group $G$, has a natural generalized ``moduli space'', namely the quotient stack $X/G$. This is not a deep insight, but just the definition of the associated moduli functor. Note that the isomorphism classes of the $\CC$-valued points of $X/G$, i.e.\ the set $X/G(\Spec\CC)/_\sim$, form the set of $G$-orbits in $X$, justifying the notation.\\
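Let us briefly unwind the last statement. An object of $X/G(\Spec\CC)$ is a principal $G$-bundle $P$ over the point together with a $G$-equivariant map $m:P\to X$. Every principal $G$-bundle over $\Spec\CC$ is trivial, so choosing a point $p\in P(\CC)$ identifies $P$ with $G$, and $m$ is determined by $x:=m(p)$ via $m(pg)=xg$. Two such objects given by points $x$ and $y$ are isomorphic if and only if $y\in xG$, whence
\[ X/G(\Spec\CC)/_\sim \;=\; X(\CC)/G. \]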
Quotient stacks are also very helpful when it comes to other moduli problems as the following example shows, and their usefulness cannot be overestimated.
\begin{example} \label{stack_example_2} \rm
Consider the stack of finite dimensional representations of a $\CC$-algebra $A$. Assume that $A$ is finitely presented, i.e.\ $A$ is generated by a set\footnote{The notation in this example has been chosen with an eye towards the next section.} $Q_1$ of finitely many elements $\alpha_1, \ldots, \alpha_n$ satisfying a finite set of relations $R=\{r_1,\ldots,r_m\}$. Fix a ``dimension'' $d\in \mathbb{N}$ and put $X_d =\Hom_\CC(\CC^d,\CC^d)^n=\prod_{\alpha\in Q_1}\Hom_\CC(\CC^d,\CC^d)$ and $X^R_d=\{(M_\alpha)_{\alpha\in Q_1}\mid r_j(M_{\alpha_1},\ldots,M_{\alpha_n})=0 \mbox{ for all }1\le j\le m\}$. We claim that the moduli stack $A\rep_d$ of $d$-dimensional representations of $A$ is equivalent to the quotient stack $X^R_d/\Gl(d)$ with $\Gl(d)$ acting by conjugation on $\Hom_\CC(\CC^d,\CC^d)$. Indeed, a family of $d$-dimensional representations on $S\in \Sch_\CC$ is uniquely determined by a vector bundle $V$ of rank $d$ on $S$ and vector bundle endomorphisms $\hat{\alpha}$ associated to $\alpha\in Q_1$ satisfying the relations $r_1,\ldots,r_m$. Consider the frame bundle $\Fr(V)=\{ (s,\tau) \mid s\in S, \tau\in \Hom_\CC(\CC^d,V_s)\mbox{ is invertible}\}$ of $V$ parameterizing all possible choices of a basis in all possible fibers of $V$. It comes with a projection to $S$ and a right action of $\Gl(d)$ by composition with $\tau\in \Hom_\CC(\CC^d,V_s)$.
\begin{exercise}
Show that $\Fr(V)$ is a principal $\Gl(d)$-bundle.
\end{exercise}
There is also a $\Gl(d)$-equivariant map $m(V,(\hat{\alpha})_{\alpha\in Q_1})$ from $\Fr(V)$ into $X^R_d$ mapping a pair $(s,\tau)$ to $(M_\alpha:=\tau^{-1}\circ \hat{\alpha}|_{V_s} \circ \tau)_{\alpha\in Q_1}$.
\begin{exercise}
Convince yourself that the map $(V,(\hat{\alpha})_{\alpha\in Q_1})\longmapsto \big(\Fr(V),m(V,(\hat{\alpha})_{\alpha\in Q_1})\big)$ extends to a functor from the groupoid of families of $d$-dimensional $A$-representations into the groupoid $X^R_d/\Gl(d)(S)$. Show furthermore that this functor is compatible with pull-backs, and, thus, defines a morphism $A\rep_d\to X^R_d/\Gl(d)$ of stacks.
\end{exercise}
Conversely, given a principal $\Gl(d)$-bundle $P$ on $S$ and a $\Gl(d)$-equivariant map $m:P\to X^R_d$, we can consider the trivial vector bundle $P\times \CC^d$ on $P$ which comes with a natural $\Gl(d)$-action compatible with the projection to $P$. Moreover, picking the component of $m$ associated to $\alpha\in Q_1$, we get an endomorphism $\hat{\alpha}$ of this trivial bundle. $\Gl(d)$-equivariance of $m$ ensures that $\hat{\alpha}$ commutes with the $\Gl(d)$-action on $P\times \CC^d$. By taking the $\Gl(d)$-quotient, we obtain a vector bundle $V=P\times_{\Gl(d)}\CC^d$ of rank $d$ on $S$ along with vector bundle endomorphisms $\hat{\alpha}$ satisfying the relations $r_1,\ldots,r_m$.
\begin{exercise}
Show that this construction extends to a functor between groupoids, compatible with pull-backs. Hence, we obtain a morphism from the quotient stack $X^R_d/\Gl(d)$ to $A\rep_d$. Prove that this morphism is an inverse (up to 2-isomorphism) of the morphism constructed above.
\end{exercise}
Thus, the claim is proven and the stack $A\rep$ of $A$-representations is isomorphic to $\sqcup_{d\in\mathbb{N}} X^R_d/\Gl(d)$.
\end{example}
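As a small sanity check, let us specialize to $A=\CC[z]$ from Example \ref{S_equivalence}, i.e.\ one generator and no relations. Then $X^R_d=X_d=\Hom_\CC(\CC^d,\CC^d)$ and
\[ \CC[z]\rep_d\;\cong\;\Hom_\CC(\CC^d,\CC^d)/\Gl(d) \]
with $\Gl(d)$ acting by conjugation. In particular, for $d=2$ we recover the situation of Example \ref{S_equivalence}: the quotient stack $\Hom_\CC(\CC^2,\CC^2)/\Gl(2)$ keeps track of all representations up to isomorphism, while the coarse moduli space $\AA^2$ only remembers S-equivalence classes.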
\begin{exercise} \label{stack_projective_reps} Use a similar idea of frame bundles parameterizing tuples $(s\in S,\Mat_\mathbb{C}(r,r)\xrightarrow{\sim}\mathcal{E}_s)$ for a locally trivial family $\mathcal{E}$ on $S$ of $\mathbb{C}$-algebras isomorphic to $\Mat_\mathbb{C}(r,r)$ to show that the stack of projective $A$-representations is given by $\sqcup_{d\in\mathbb{N}} X^R_d/\PGl(d)$. As in Exercise \ref{stack_morphisms}(2), we obtain a morphism $X^R_d/\Gl(d)\longrightarrow X^R_d/\PGl(d)$ by means of the group homomorphism $\Gl(d)\to \PGl(d)$. Show that this morphism is mapping the $A$-representation on $V$ to the projective $A$-representation on $\mathbb{P}(V)$, in other words, forget $V$ and keep $\mathcal{E} nd(V)$ together with the algebra homomorphism $A\to \Gamma(S,\mathcal{E} nd(V))$.
\end{exercise}
\begin{example} \rm
The ``geometry'' of the moduli stack $\Coh^{X}$ of coherent sheaves on a smooth projective variety $X$ is more involved. First of all, it decomposes into components $\Coh^{X}_c$ indexed by numerical data like Chern classes, playing the role of the dimension of a representation. Unfortunately, a component cannot be written as a quotient stack. However, every component $\Coh^{X}_{c}$ is the nested union of ``open'' substacks $\Coh^X_{c,i}, i\in \mathbb{N},$ which can be written as a quotient stack $Y_{c,i}/\Gl(n_{c,i})$. Note that $n_{c,i}$ grows with $i$. More details can be found in Section 9 of \cite{JoyceI}.
\end{example}
The following definition of a fiber product is very important.
\begin{definition}[fiber product]
Given two morphisms $\mathfrak{f}:\mathfrak{X}\to \mathfrak{Z}$ and $\mathfrak{g}:\mathfrak{Y}\to \mathfrak{Z}$ of groupoid-valued functors, we define the fiber product $\mathfrak{X}\times_\mathfrak{Z} \mathfrak{Y}$ as the groupoid-valued functor such that
\[ \Obj\mathfrak{X}\times_\mathfrak{Z} \mathfrak{Y}(S)=\{ (x,y,w)\mid x\in \Obj\mathfrak{X}(S), y\in \Obj\mathfrak{Y}(S), w\in \Mor_{\mathfrak{Z}(S)}(\mathfrak{f}_S(x),\mathfrak{g}_S(y)) \}, \] and
\begin{eqnarray*} \lefteqn{\Mor_{\mathfrak{X}\times_\mathfrak{Z}\mathfrak{Y}(S)}\Big((x,y,w),(x',y',w')\Big)} \\&=& \{ (u,v)\in \Mor_{\mathfrak{X}(S)}(x,x')\times \Mor_{\mathfrak{Y}(S)}(y,y') \mid \xymatrix @C=0.6cm @R=0.6cm{ \mathfrak{f}_S(x) \ar[r]^w \ar[d]_{\mathfrak{f}_S(u)} & \mathfrak{g}_S(y) \ar[d]^{\mathfrak{g}_S(v)} \\ \mathfrak{f}_S(x') \ar[r]^{w'} & \mathfrak{g}_S(y') } \mbox{commutes } \}
\end{eqnarray*}
for every $S\in \Sch_\CC$.
\end{definition}
\begin{exercise} Show the main properties of the fiber product.
\begin{enumerate}
\item Prove that $\mathfrak{X}\times_\mathfrak{Z} \mathfrak{Y}$ is a stack if $\mathfrak{X},\mathfrak{Y},\mathfrak{Z}$ were stacks.
\item Construct two morphisms $\pr_\mathfrak{X}: \mathfrak{X} \times_\mathfrak{Z} \mathfrak{Y} \longrightarrow \mathfrak{X}$ and $\pr_\mathfrak{Y}: \mathfrak{X} \times_\mathfrak{Z} \mathfrak{Y} \longrightarrow \mathfrak{Y}$ of groupoid-valued functors and a 2-morphism $\omega:\mathfrak{f}\circ\pr_\mathfrak{X}\to \mathfrak{g}\circ\pr_\mathfrak{Y}$. Show that the following universal property holds. Given a groupoid-valued functor $\mathfrak{T}$, two morphisms $\mathfrak{p}:\mathfrak{T}\to \mathfrak{X}$, $\mathfrak{q}:\mathfrak{T}\to \mathfrak{Y}$ and a 2-morphism $\eta:\mathfrak{f}\circ\mathfrak{p} \to \mathfrak{g}\circ\mathfrak{q}$, there is a unique morphism $\mathfrak{r}:\mathfrak{T}\to \mathfrak{X}\times_\mathfrak{Z} \mathfrak{Y}$ such that $\pr_\mathfrak{X}\circ\mathfrak{r}=\mathfrak{p}$ and $\pr_\mathfrak{Y}\circ\mathfrak{r}=\mathfrak{q}$.
\[ \xymatrix { \mathfrak{T} \ar@{.>}[dr]^{\mathfrak{r}} \ar@/^1pc/[drr]^{\mathfrak{p}} \ar@/_1pc/[ddr]_{\mathfrak{q}} & & \\ & \mathfrak{X}\times_\mathfrak{Z} \mathfrak{Y} \ar[r]^{\pr_\mathfrak{X}} \ar[d]_{\pr_\mathfrak{Y}} & \mathfrak{X} \ar[d]^\mathfrak{f} \ar@{=>}[dl]^\omega \\ & \mathfrak{Y} \ar[r]_\mathfrak{g} & \mathfrak{Z} } \]
\end{enumerate}
\end{exercise}
When it comes to quotient stacks, the following examples are very useful.
\begin{exercise} \hfill
\begin{enumerate}
\item Assume $\mathfrak{X}=X, \mathfrak{Y}=Y\in \Sch_\CC$ and $\mathfrak{Z}=Z/G$ for some algebraic group $G$ acting on a scheme $Z$. The morphisms $\mathfrak{f}:X\to Z/G$ and $\mathfrak{g}:Y \to Z/G$ are given by principal $G$-bundles $P\to X$ and $Q\to Y$ together with $G$-equivariant morphisms $f:P\to Z$ and $g:Q\to Z$ respectively. Show that the fiber product $X\times_{Z/G} Y$ is given by the scheme $Iso_{f,g}(P,Q)\subseteq Iso(P,Q)$ over $X\times Y$ defined by $\{(x,y,w)\mid x\in X,y\in Y, w:P_x\xrightarrow{\sim}Q_y \;G\mbox{-equivariant such that }f|_{P_x}=g|_{Q_y}\circ w\}$.
\item Assume furthermore $X=Z$ and $P=X\times G\xrightarrow{\pr_X}X$ with $f:X\times G\to X$ being the group action. Hence, $\mathfrak{f}$ is the standard atlas $\rho:X\to X/G$. Show that $Iso_{f,g}(P,Q)$ is isomorphic to $Q$, and
\[ \xymatrix { Q \ar[r]^g \ar[d] & X \ar[d]^\rho \\ Y \ar[r]^{\mathfrak{g}} & X/G }\]
is the fiber product diagram, i.e.\ a cartesian square. Hence, $\rho:X\to X/G$ is the universal principal $G$-bundle.
\end{enumerate}
\end{exercise}
\begin{example} \label{example_fiber_product} \rm
Let $\phi:G\to K$ and $\psi:H\to K$ be homomorphisms between algebraic groups $G,H,K$ acting on $X,Y$ and $Z$ respectively. Moreover, let $f:X\to Z$ and $g:Y\to Z$ be two morphisms such that $f(xg)=f(x)\phi(g)$ and $g(yh)=g(y)\psi(h)$ for all $x\in X,y\in Y,g\in G,h\in H$. As we have seen in Exercise \ref{stack_morphisms}, this induces morphisms $\mathfrak{f}:X/G\to Z/K$ and $\mathfrak{g}:Y/H\to Z/K$. Then, $X/G\times_{Z/K} Y/H$ is the quotient stack $\big((X\times Y)\times_{(Z\times Z)} (Z\times K)\big)/(G\times H)$ using the group actions $(x,y)(g,h)=(xg,yh)$, $(z,k)(g,h)=(z\phi(g),\phi(g)^{-1}k\psi(h))$, $(z_1,z_2)(g,h)=(z_1\phi(g),z_2\psi(h))$ of $G\times H$ on $X\times Y$, $Z\times K$, $Z\times Z$ and the $G\times H$-equivariant morphisms $X\times Y\ni (x,y)\mapsto (f(x),g(y))\in Z\times Z$, $Z\times K\ni(z,k) \mapsto (z,zk) \in Z\times Z$.
\end{example}
\begin{exercise} \label{fiber_projectivization}
Use the previous example to show that every fiber of the morphism $X^R_d/\Gl(d)\longrightarrow X^R_d/\PGl(d)$ constructed in Exercise \ref{stack_projective_reps} is isomorphic to $\Spec\mathbb{C}/\mathbb{G}_m$. Interpret this result in terms of (projective) $A$-representations. (cf.\ Example \ref{projectivization})
\end{exercise}
The following definition is slightly stronger than the one used in the literature as we do not have algebraic spaces at our disposal. However, it will be sufficient for our purposes.
\begin{definition} A morphism $\mathfrak{f}:\mathfrak{X}\to \mathfrak{Z}$ is called representable if for every morphism $\mathfrak{g}:Y\to \mathfrak{Z}$ from a scheme $Y$ into $\mathfrak{Z}$, the fiber product $\mathfrak{X}\times_\mathfrak{Z} Y$ is (represented by) a scheme. In such a situation, we call $\mathfrak{f}$ smooth, surjective etc.\ if $\mathfrak{X}\times_\mathfrak{Z} Y \xrightarrow{\pr_Y} Y$ is smooth, surjective etc.
\end{definition}
\begin{exercise} \label{representable} Here are some examples of representable morphisms.
\begin{enumerate}
\item Show that every morphism between schemes is representable.
\item Prove that the standard atlas $\rho:X\to X/G$ is representable, smooth and surjective. Hint: Every algebraic group is a smooth scheme.
\item Use Example \ref{example_fiber_product} to show that $\mathfrak{f}:X/G\to Z/K$ is representable if $\phi:G\to K$ is injective. Give a counterexample for the converse statement.
\item Prove that the diagonal $\Delta_\mathfrak{Z}:\mathfrak{Z} \to \mathfrak{Z}\times_{\Spec\CC}\mathfrak{Z}$ is representable if and only if every morphism $\mathfrak{f}:X\to \mathfrak{Z}$ from a scheme $X$ is representable. Hint: $X\times_\mathfrak{Z} Y= (X\times Y)\times_{(\mathfrak{Z}\times \mathfrak{Z})} \mathfrak{Z}$.
\end{enumerate}
\end{exercise}
\begin{definition}
A stack $\mathfrak{X}$ is called algebraic or an Artin stack if
\begin{enumerate}
\item[(i)] $\Delta_\mathfrak{X}:\mathfrak{X}\to \mathfrak{X}\times \mathfrak{X}$ is representable (cf.\ Exercise \ref{representable}(4)) and
\item[(ii)] there is a smooth, surjective morphism $\rho:X\to \mathfrak{X}$ from a scheme $X$.
\end{enumerate}
In such a situation, we call $\rho:X\to \mathfrak{X}$ an atlas of $\mathfrak{X}$.
\end{definition}
In a suitable sense, the algebraic stack $\mathfrak{X}$ is a quotient of its atlas $X$ similar to the concept of an algebraic space. However, the quotient is taken in the category of groupoids and not in the category of sets as before. As we have seen in Exercise \ref{representable}, every quotient stack is an Artin stack with standard atlas $\rho:X\to X/G$. By taking the atlas $X^R=\sqcup_{d\in \mathbb{N}}X^R_d\to A\rep$, the moduli stack of finite dimensional representations of a finitely presented $\mathbb{C}$-algebra $A$ is also algebraic. Finally, using $Y=\sqcup_{c,i} Y_{c,i}\to \Coh^X$, we see that the moduli stack of coherent sheaves on a smooth projective variety $X$ is also an Artin stack.
\section{Quiver representations and their moduli}
\subsection{Quivers and $\CC$-linear categories}
Recall that a groupoid is a category generalizing groups and sets. Similarly, there is a categorical concept interpolating between $\CC$-algebras and sets.
These are the so-called $\CC$-linear categories. A category $\mathcal{A}$ is called $\CC$-linear if the morphism sets $\Mor_{\mathcal{A}}(x,y)$ have the structure of a $\CC$-vector space such that the composition of morphisms is $\CC$-bilinear.
As usual, we write $\Hom_{\mathcal{A}}(x,y)$ for the $\CC$-vector space of morphisms from $x$ to $y$ and $\End_{\mathcal{A}}(x)=\Hom_{\mathcal{A}}(x,x)$ for the $\CC$-algebra of endomorphisms of $x\in \Obj(\mathcal{A})$. A $\CC$-linear category with one object is just a $\CC$-algebra. On the other hand, $\CC$-linear categories with as few morphisms as possible are uniquely classified by their set of objects since any morphism must be zero or a multiple of the identity of some object.
Another standard example of a $\CC$-linear category is given by the category $\Vect_\CC$ of finite dimensional $\CC$-vector spaces. A finite dimensional representation of a $\CC$-linear category $\mathcal{A}$ is simply given by a $\CC$-linear functor $V:\mathcal{A}\to \Vect_\CC$. Indeed, if the category $\mathcal{A}$ has only one object $\star$, $V(\star)$ is just a finite dimensional representation of the endomorphism algebra $\End_\mathcal{A}(\star)$. As we have seen in the previous section, generators of algebras are very useful when it comes to the construction of moduli stacks. The analogue in the context of $\CC$-linear categories is called a quiver. A quiver consists of a set of objects $Q_0$ and a set of ``arrows'' $Q_1$ along with maps $s,t:Q_1 \to Q_0$ indicating the \underline{s}ource and the \underline{t}arget of an arrow. We require neither a composition law nor identity morphisms.
Given a $\CC$-linear category $\mathcal{A}$, a quiver in $\mathcal{A}$ is a quiver $Q$ such that $Q_0\subseteq\Obj(\mathcal{A})$, $Q_1\subseteq \Mor(\mathcal{A})$, and $s,t$ are given by restriction of the corresponding maps on $\Mor(\mathcal{A})$ to $Q_1$. We say that $\mathcal{A}$ is generated by a quiver $Q$ if the smallest $\CC$-linear subcategory of $\mathcal{A}$ containing $Q$ is $\mathcal{A}$ itself, which implies $Q_0=\Obj(\mathcal{A})$. There is a biggest $\CC$-linear category generated by a given quiver $Q$, the so-called path category $\CC Q$ of $Q$. A morphism of $\CC Q$ from $x\in Q_0$ to $y\in Q_0$ is a $\CC$-linear combination of chains $x=x_1\to x_2\to \ldots \to x_{n-1}\to x_n=y$ of composable arrows in $Q_1$. We also need to add the identity morphisms and their $\CC$-linear multiples.
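For instance, the path category of the quiver with a single object and $n$ loops $\alpha_1,\ldots,\alpha_n$ has endomorphism algebra the free algebra
\[ \CC\langle \alpha_1,\ldots,\alpha_n\rangle, \]
and a finitely presented algebra as in Example \ref{stack_example_2} is obtained from it by dividing out the relations.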
\begin{exercise} Construct a category of quivers such that $Q\mapsto \CC Q$ is a functor from this category to the category of (small) $\CC$-linear categories. Construct a right adjoint of this functor.
\end{exercise}
\begin{exercise} Show that there is a bijection between representations of $\CC Q$ and representations $V$ of $Q$ associating to every $i\in Q_0$ a vector space $V_i$ and to every arrow $\alpha:i\to j$ in $Q_1$ a $\CC$-linear map $V(\alpha):V_i\to V_j$.
\end{exercise}
Given a $\CC$-linear category $\mathcal{A}$ and a generating quiver $Q$ in $\mathcal{A}$, we get a full functor $\CC Q \twoheadrightarrow \mathcal{A}$ which is a bijection on the set of objects. The kernel is a $\CC$-linear subcategory $\mathcal{I}$ in $\CC Q$ which has the property $a\circ b\in \Mor(\mathcal{I})$ if $a\in \Mor(\mathcal{I})$ or $b\in \Mor(\mathcal{I})$ categorifying the concept of an ideal. A generating quiver for $\mathcal{I}$ is uniquely determined by its set of arrows
$R\subseteq \Mor(\CC Q)$, whose elements are called relations. Conversely, every quiver $Q$ with relations $R$ gives rise to a $\CC$-linear category $\CC Q/(R)$ uniquely defined up to isomorphism. Moreover, every $\CC$-linear category $\mathcal{A}$ can be written like this (up to isomorphism) in many ways.\\
Throughout this paper we will only consider finite quivers, i.e.\ $|Q_0|<\infty$ and $|Q_1|<\infty$ and similarly for the relations. Hence, the $\CC$-linear categories $\mathcal{A}$ which can be described by a finite quiver with finitely many relations are exactly the finitely presented $\CC$-linear categories.
\begin{exercise}
Show that the category of $\CC$-linear categories $\mathcal{A}$ with a finite set of objects is equivalent to the category of $\CC$-algebras together with a distinguished finite set $\{e_i\}_{i\in I}$ of mutually orthogonal idempotent elements $e_i$ such that $1=\sum_{i\in I}e_i$. Hint: Put $A:=\oplus_{i,j\in \Obj(\mathcal{A})} \Hom_\mathcal{A}(i,j)$ and $e_i=\id_i$ for all $i\in \Obj(\mathcal{A})$. Moreover, prove that the category of representations of such a $\CC$-linear category is isomorphic to the category of representations of the associated algebra.
\end{exercise}
Using the last exercise, we can also talk about the path $\CC$-algebra of a quiver with finite set $Q_0$ and its representations. Note that the path $\CC$-algebra has a distinguished family $(e_i)_{i\in Q_0}$ of mutually orthogonal idempotent elements summing up to $1$.
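Concretely, for the path $\CC$-algebra of a quiver $Q$ the idempotent $e_i$ is the trivial path at the vertex $i$, and the decomposition coming from these idempotents reads
\[ \CC Q=\bigoplus_{i,j\in Q_0} e_j(\CC Q)e_i \quad\mbox{with}\quad e_j(\CC Q)e_i=\Hom_{\CC Q}(i,j) \]
spanned by the paths from $i$ to $j$, recovering the morphism spaces of the path category.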
\subsection{Quiver moduli spaces and stacks}
Generalizing the moduli functor $A\rep$ of finite dimensional representations of a given $\CC$-algebra $A$ (see Example \ref{moduli_algebra}), we define the moduli functor $\mathcal{A}\rep$ of finite dimensional representations of a $\CC$-linear category $\mathcal{A}$ as follows. To every scheme $S$ over $\CC$ we associate the isomorphism groupoid $\mathcal{A}\rep(S)$ of the category of functors $\mathcal{A}\to \Vect_S$, where $\Vect_S$ is the category of vector bundles on $S$.
\begin{exercise}
Show that $\mathcal{A}\rep$ is a stack, i.e.\ satisfies the gluing axiom for groupoid-valued functors.
\end{exercise}
If $\mathcal{A}$ is represented by a quiver $Q$ with relations $R\subseteq \Mor(\CC Q)$, the category $\mathcal{A}\rep(S)$ is equivalent to the category of families $(V_i)_{i\in Q_0}$ of vector bundles on $S$ together with vector bundle morphisms $\hat{\alpha}=V(\alpha):V_i\to V_j$ such that $V(r)=r\big((\hat{\alpha})_{\alpha\in Q_1}\big)=0$ for all $r\in R$, where we have extended $V$ from $Q_1$ to $\Mor(\CC Q)\supseteq Q_1$.
Let us assume that $Q_0,Q_1$ and $R$ are finite sets. Using Example \ref{stack_example_2}, it should not come as a surprise that $\mathcal{A}\rep$ is isomorphic to a disjoint union $\mathfrak{M}^R:=\sqcup_{d\in \mathbb{N}^{Q_0}} \mathfrak{M}^R_d$ of quotient stacks $\mathfrak{M}^R_d:=X^R_d/G_d$ with
\[X^R_d:=\{ (M_{\alpha})_{\alpha\in Q_1}\in X_d \mid r\big((M_\alpha)_{\alpha\in Q_1}\big)=0 \,\forall \, r\in R\} \subseteq X_d:= \prod_{Q_1\ni\alpha:i\to j} \Hom_\CC(\CC^{d_i},\CC^{d_j})\] and $G_d=\prod_{i\in Q_0}\Gl(d_i)$ acting on $X_d$ by simultaneous conjugation, i.e.\ $(g\cdot M)_\alpha=g_jM_\alpha g_i^{-1}$ for $\alpha:i\to j$. The ``dimension vector'' $d\in \mathbb{N}^{Q_0}$ fixes $\dim V=(\rk V_i)_{i\in Q_0}$. \\
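For the $r$-Kronecker quiver of Example \ref{Kronecker}, i.e.\ $Q_0=\{1,2\}$ with $r$ arrows from $1$ to $2$ and no relations, this specializes to
\[ X_d=\Hom_\CC(\CC^{d_1},\CC^{d_2})^r, \qquad G_d=\Gl(d_1)\times \Gl(d_2) \]
acting via $(g_1,g_2)\cdot (M_\alpha)_{\alpha}=(g_2M_\alpha g_1^{-1})_{\alpha}$, recovering for $d=(1,1)$ the description given in that example.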
Similarly, given a sequence of dimension vectors $d^{(1)},\ldots,d^{(r)}$ we denote with $X_{d^{(1)},\ldots,d^{(r)}}\subseteq X_{d^{(1)}+\ldots+d^{(r)}}$ the affine subvariety parameterizing linear maps preserving the standard flag $0 \subseteq \CC^{d^{(1)}_i} \subseteq \CC^{d^{(1)}_i}\oplus\CC^{d^{(2)}_i} \subseteq \ldots \subseteq \CC^{d^{(1)}_i}\oplus\ldots\oplus \CC^{d^{(r)}_i}$ for every $i\in Q_0$. The subgroup $G_{d^{(1)},\ldots,d^{(r)}}\subseteq G_{d^{(1)}+\ldots+d^{(r)}}$ is defined in the same way. Finally, we put $X^R_{d^{(1)},\ldots,d^{(r)}}:=X_{d^{(1)},\ldots,d^{(r)}}\cap X^R_{d^{(1)}+\ldots+d^{(r)}}$.
\begin{exercise}
Show that the stack of all successive extensions
\begin{eqnarray*}
&0\to V^{(1)}\to \hat{V}^{(2)} \to V^{(2)} \to 0, &\\
&0\to \hat{V}^{(2)}\to \hat{V}^{(3)} \to V^{(3)} \to 0, &\\
& \vdots & \\
&0\to \hat{V}^{(r-1)}\to \hat{V}^{(r)} \to V^{(r)} \to 0 &
\end{eqnarray*}
of quiver representations satisfying the relations $R$ and with $\dim V^{(j)}=d^{(j)}$ for all $1\le j\le r$ is given by the quotient stack $\mathfrak{M}^R_{d^{(1)},\ldots,d^{(r)}}=X^R_{d^{(1)},\ldots,d^{(r)}}/G_{d^{(1)},\ldots,d^{(r)}}$. Hint: The standard flag introduced above defines a standard successive extension of $Q_0$-graded vector spaces of prescribed dimension vectors. Given a family of successive extensions, consider the principal $G_{d^{(1)},\ldots,d^{(r)}}$-bundle parameterizing all isomorphisms from the standard extension to the fibers of the family, and proceed as usual.
\end{exercise}
We are mainly interested in the following type of relations. A potential $W$ is an element of the vector space $\CC Q/[\CC Q,\CC Q]$, where $[\CC Q,\CC Q]$ denotes the $\CC$-linear span (and not the spanned ideal) of all commutators. Note that $\CC Q/[\CC Q,\CC Q]$ is the $0$-th Hochschild homology of the $\CC$-linear category $\CC Q$. Convince yourself that $W$ is essentially just a $\CC$-linear combination of equivalence classes of cycles in $Q$ with two cycles being equivalent if they can be transformed into each other by a cyclic permutation.
\begin{example} \rm
The three elements $[x,y]z=xyz-yxz$, $[z,x]y$ and $[y,z]x$ in $\CC Q^{(3)}$ of the 3-loop quiver $Q^{(3)}$
\[ \xymatrix { \bullet \ar@(u,r)^y \ar@(dr,dl)^z \ar@(l,u)^x }\]
define the same potential $W$.
\end{example}
For a fixed potential $W=\sum_{l=1}^L a_l \cdot[C_l]$ we define relations $\partial W/\partial \alpha\in \Hom_{\CC Q}(j,i)$ for every $\alpha:i\to j$ in $Q_1$ as follows.
\[ \frac{\partial W}{\partial \alpha}:=\sum_{l=1}^L a_l \cdot\sum_{C_l=u\alpha v} vu \]
with $a_l\in \CC$, where the second sum is over all occurrences of $\alpha$ in a fixed representative of an equivalence class $[C_l]$ of cycles in $Q$.
\begin{exercise}
Show that the definition of $\partial W/\partial \alpha$ is independent of the choice of the representative $C_l\in [C_l]$ for all $1\le l\le L$.
\end{exercise}
\begin{example} \rm
Using the potential $W=[x,y]z=xyz-yxz$ from the previous example, we compute
\begin{eqnarray*}
\frac{\partial W}{\partial x} & = & yz-zy\; =\;[y,z], \\
\frac{\partial W}{\partial y} & = & zx-xz\; =\;[z,x], \\
\frac{\partial W}{\partial z} & = & xy-yx\; =\;[x,y].
\end{eqnarray*}
Convince yourself that $W=[z,x]y$ and $W=[y,z]x$ provide the same relations.
\end{example}
Given a dimension vector $d\in \mathbb{N}^{Q_0}$ and a potential $W=\sum_{l=1}^L a_l \cdot[C_l]$ with $C_l=\alpha_l^{(1)}\circ \ldots \circ\alpha^{(n_l)}_l$, we define the following function
\[ \Tr(W)_d:X_d\ni (M_\alpha)_{\alpha\in Q_1} \longmapsto \sum_{l=1}^L a_l\cdot \Tr\big(M_{\alpha_l^{(1)}}\cdot \ldots \cdot M_{\alpha^{(n_l)}_l}\big) \in {\mathbb{A}^1} \]
which is independent of the choice of the representative $C_l\in [C_l]$ as the trace is invariant under cyclic permutation. By the same argument, $\Tr(W)_d$ is $G_d$-invariant, and induces a function $\mathfrak{Tr}(W)_d:\mathfrak{M}_d\to {\mathbb{A}^1}$ on the quotient stack.
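For the 3-loop quiver with potential $W=[x,y]z$ from the example above, this function is simply
\[ \Tr(W)_d(M_x,M_y,M_z)=\Tr(M_xM_yM_z)-\Tr(M_yM_xM_z), \]
and the representatives $[z,x]y$ and $[y,z]x$ of $W$ would give the same function by the cyclic invariance of the trace.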
\begin{exercise}
Let us take the relations $R=\{\partial W/\partial \alpha \mid \alpha\in Q_1\}$. Show that $X^R_d=\Crit(\Tr(W)_d)$ is the critical locus of $\Tr(W)_d$, and similarly $\mathfrak{M}^R_d=\Crit(\mathfrak{Tr}(W)_d)$.
\end{exercise}
Throughout the paper we will use the superscript $W$ instead of the superscript $R$ for $R=\{\partial W/\partial \alpha \mid \alpha\in Q_1\}$, and no superscript if $W=0$. We will also use the notation $\Jac(Q,W)$ for the so-called Jacobi algebra $\CC Q/(R)$.\\
The moduli stack $\mathfrak{M}^R_d$ has a coarse moduli space $\mathcal{M}^{R,ssimp}_d$ parameterizing semisimple (direct sums of simple) representations of dimension vector $d$. It is an affine scheme given by $\Spec \CC[X^R_d]^{G_d}$ with $\CC[X^R_d]^{G_d}$ denoting the $G_d$-invariant regular functions on the affine scheme $X^R_d$.
\begin{example} \rm
For the 3-loop quiver $Q^{(3)}$ with potential $W=[x,y]z$, the scheme $X^W_d$ parametrizes triples of commuting $d\times d$-matrices $M_x,M_y,M_z$. Hence, a simple representation of the Jacobi algebra $\Jac(Q^{(3)},W)=\CC[x,y,z]$ is one-dimensional and determined by $(M_x,M_y,M_z)\in \AA^3$. Therefore, $\mathcal{M}^{W,ssimp}_d=\Sym^d(\AA^3)=(\AA^3)^d/S_d$.
\end{example}
Let us finally introduce a stability condition by choosing a tuple $\zeta\in \mathbb{H}_+^{Q_0}$ of complex numbers in the (extended) upper half plane $\mathbb{H}_+$ giving rise to the ``central charge'' $Z(V):=\zeta\cdot \dim V=\sum_{i\in Q_0}\zeta_i\dim V_i \in \mathbb{H}_+$ for every representation $V$ of $Q$.
\begin{definition} A representation $V\not=0$ of a quiver $Q$ (with relations) is called $\zeta$-semistable if
\[ \arg Z( V') \le \arg Z( V) \]
for all nonzero proper subrepresentations $V'\subset V$. If the inequality is strict for all such $V'$, then $V$ is called $\zeta$-stable. The real number $\mu(V):=-\cot(\arg Z(V))$ is called the slope of $V$. Hence, $V$ is semistable if and only if $\mu(V')\le \mu(V)$ for all nonzero proper subrepresentations $V'\subset V$.
\end{definition}
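For instance, if $\zeta_i=-\theta_i+\sqrt{-1}$ with $\theta_i\in \mathbb{R}$ for all $i\in Q_0$ (the case considered below), then $Z(V)=-\sum_{i\in Q_0}\theta_i\dim V_i+\sqrt{-1}\sum_{i\in Q_0}\dim V_i$ and the slope takes the familiar form
\[ \mu(V)=\frac{\sum_{i\in Q_0}\theta_i\dim V_i}{\sum_{i\in Q_0}\dim V_i}. \]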
\begin{exercise} \label{semistable_reps} Let us show that semistable representations of the same slope $\mu$ form a nice full subcategory.
\begin{enumerate}
\item Consider a morphism $f:V^{(1)}\to V^{(2)}$ of semistable representations of slopes $\mu(V^{(1)})>\mu(V^{(2)})$. Show that $f=0$. Hint: Relate the slope of $V^{(1)}/\ker(f)=\im(f)$ to $\mu(V^{(1)})$ and to $\mu(V^{(2)})$ by drawing the central charges of all objects involved.
\item Using the notation of the first part, let us assume $\mu(V^{(1)})=\mu(V^{(2)})$ for the semistable representations $V^{(1)},V^{(2)}$. Show that $\ker(f)$ and $\coker(f)$ are also semistable of the same slope $\mu(V^{(1)})$. In particular, the semistable representations of a fixed slope $\mu$ form a full abelian subcategory.
\item Show that the stable objects of slope $\mu$ are the simple objects in the full abelian subcategory of semistable representations of slope $\mu$.
\item Prove that the extension of two semistable representations of slope $\mu$ is again semistable of the same slope.
\end{enumerate}
\end{exercise}
Every representation $V$ of a quiver (with relations) has a unique Harder--Narasimhan filtration, i.e.\ a finite filtration $0\subset V^{(1)} \subset \ldots\subset V^{(r)}=V$ such that the subquotients $V^{(i)}/V^{(i-1)}$ are semistable of slope $\mu^{(i)}$ satisfying $\mu^{(1)}>\ldots> \mu^{(r)}$.
\begin{exercise} Let us prove the last statement in three steps.
\begin{enumerate}
\item Show that $V$ has a maximal nonzero subrepresentation of maximal slope. Hint: Show that the set of slopes of subrepresentations of $V$ has a maximal element. Use Exercise \ref{semistable_reps}(4) to construct a maximal subrepresentation of maximal slope.
\item Use Exercise \ref{semistable_reps}(1) to construct a Harder--Narasimhan filtration. Hint: Let $V^{(1)}$ be the subrepresentation constructed in the first step, and let $V^{(2)}$ be the preimage of a maximal subrepresentation in $V/V^{(1)}$ of maximal slope. Proceed in this way, and use the previous exercise to estimate the slopes.
\item Prove the uniqueness of this filtration by applying Exercise \ref{semistable_reps}(1) once more.
\end{enumerate}
\end{exercise}
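To see the Harder--Narasimhan filtration in a concrete case (again in ad hoc notation), take the quiver $1\to 2$ with $\zeta_i=-\theta_i+\sqrt{-1}$, $\theta=(0,1)$, and $V=(\CC\xrightarrow{\id}\CC)$. The subrepresentation $V^{(1)}$ with $\dim V^{(1)}=(0,1)$ has slope $1$, while $\mu(V)=\tfrac{1}{2}$ and $\mu(V/V^{(1)})=0$, so $V$ is not semistable and
\[ 0\subset V^{(1)}\subset V, \qquad \mu(V^{(1)})=1>\mu(V/V^{(1)})=0, \]
is its Harder--Narasimhan filtration; both subquotients are simple, hence semistable.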
We denote by $X^{R,\zeta-ss}_d$ the subscheme of linear maps $(M_\alpha)_{\alpha\in Q_1}$ such that the induced quiver representation on $(\CC^{d_i})_{i\in Q_0}$ is $\zeta$-semistable. It is open and stable under the $G_d$-action. Hence, we can form the quotient stack $\mathfrak{M}^{R,\zeta-ss}_d=X^{R,\zeta-ss}_d/G_d$ of $\zeta$-semistable representations of dimension vector $d$. The open subscheme $X^{R,\zeta-st}_d\subseteq X^{R,\zeta-ss}_d$ and the open substack $\mathfrak{M}^{R,\zeta-st}_d\subset \mathfrak{M}^{R,\zeta-ss}_d$ of $\zeta$-stable representations are defined accordingly. \\
If $\zeta_i=-\theta_i+\sqrt{-1}$ with $\theta_i\in \mathbb{Z}$ for all $i\in Q_0$, one can linearize the $G_d$-action on the trivial line bundle over $X_d$ using the character \[G_d\ni (g_i)_{i\in Q_0} \longmapsto \prod_{i\in Q_0}\det(g_i)^{\theta\cdot d-|d|\theta_i} \in \mathbb{G}_m\]
with $\theta\cdot d=\sum_{i\in Q_0}\theta_id_i$ and $|d|:=\sum_{i\in Q_0}d_i$. A.\ King showed in \cite{King} that $X^{R,\zeta-ss}_d$ is the subscheme of semistable points in $X^R_d$ with respect to this linearization. Hence, a GIT-quotient $X^{R,\zeta-ss}_d/\!\!/G_d=\mathcal{M}^{R,\zeta-ss}_d$ with stable sublocus $X^{R,\zeta-st}_d/G_d=\mathcal{M}^{R,\zeta-st}_d$ exists. Using this, one can show that all moduli stacks $\mathfrak{M}^{R,\zeta-ss}_d$ have a coarse moduli space $\mathcal{M}^{R,\zeta-ss}_d$ parameterizing S-equivalence classes of $\zeta$-semistable objects, or, equivalently, isomorphism classes of $\zeta$-polystable objects of dimension vector $d$ if $\zeta$ is in the complement of a countable union of real hypersurfaces in $\mathbb{H}_+^{Q_0}$. (See \cite{Meinhardt4}, Example 3.32.) A stability condition $\zeta$ having coarse moduli spaces $\mathcal{M}^{\zeta-ss}_d$ for all $d\in \mathbb{N}^{Q_0}$ is called geometric. In case $\zeta_i=\sqrt{-1}$ for all $i\in Q_0$, i.e.\ $\theta=0$, we write $\mathcal{M}^{R,ssimp}_d$ for $\mathcal{M}^{R,\zeta-ss}_d$ as its points correspond to isomorphism classes of semisimple $\mathbb{C} Q$-representations satisfying the relations $R$.
\begin{remark} \rm
Notice that $\mathbb{G}_m$, embedded into $G_d$ diagonally, acts trivially on $X_d$, and $G_d$ induces a $PG_d:=G_d/\mathbb{G}_m$-action on $X_d$. The character given above descends to a character on $PG_d$, and $X^{R,\zeta-ss}_d$ is also the semistable locus of $X^{R}_d$ with respect to the $PG_d$-linearization. Hence, $\mathcal{M}^{R,\zeta-ss}_d=X^{R,\zeta-ss}_d/\!\!/PG_d$ is also the coarse moduli space for $X^{R,\zeta-ss}_d/PG_d$, the stack of ``$d$-dimensional'' projective semistable quiver representations satisfying the relations $R$. It is not difficult to see that $\mathcal{M}^{R,\zeta-st}_d$ is in fact a fine moduli space for $X^{R,\zeta-st}_d/PG_d$, in other words, there is an isomorphism $X^{R,\zeta-st}_d/PG_d\cong \mathcal{M}^{R,\zeta-st}_d$ of stacks. In particular, $\mathcal{M}^{R,\zeta-st}$ carries a universal family $\mathcal{P}$ of projective stable quiver representations satisfying our relations $R$. The reader should compare this with our final remarks in Example \ref{projectivization} and the two lessons we have mentioned after Example \ref{S_equivalence}. The morphism $X^{R,\zeta-st}_d/G_d\longrightarrow X^{R,\zeta-st}_d/PG_d\cong \mathcal{M}^{R,\zeta-st}_d$ is not an isomorphism. It is not hard to see that this map has a right inverse, i.e.\ a section, if $\gcd(d):=\gcd(d_i:i\in Q_0)=1$. (See \cite{Reineke5}, Section 5.4 for more details.) Such a section is nothing else than a family $V=\bigoplus_{i\in Q_0}V_i$ of stable quiver representations on $\mathcal{M}^{R,\zeta-st}_d$ such that $\mathcal{P}=\mathbb{P}(V)$. Neither the section nor the corresponding family $V$ is unique. Any two sections corresponding to $V^{(1)}$ and $V^{(2)}$ differ (up to isomorphism) by a line bundle $L$ on $\mathcal{M}^{R,\zeta-st}$ with $V^{(2)}\cong V^{(1)}\otimes_{\mathcal{O}_{\mathcal{M}^{R,\zeta-st}_d}} L$ as in Example \ref{projectivization}. (See also Exercise \ref{fiber_projectivization}.) Therefore, $V$ on $\mathcal{M}^{R,\zeta-st}_d$ is only universal up to this weaker equivalence. It has been shown in \cite{Reineke6}, Thm.\ 3.4 that under some mild conditions on the pair $(d,\zeta)$ the moduli space $\mathcal{M}^{\zeta-st}_d$ has no ``universal family'' $V$, i.e.\ $X^{\zeta-st}_d/G_d\to \mathcal{M}^{\zeta-st}_d$ has no section, if $\gcd(d)>1$.
\end{remark}
\begin{definition} \label{generic_stability}
A stability condition $\zeta$ is called generic if $\langle d,d'\rangle=0$ for all $d,d'\in \Lambda^\zeta_\mu:=\{e\in \mathbb{N}^{Q_0}\mid e=0 \mbox{ or }e\mbox{ has slope }\mu\}$ and all $\mu\in \mathbb{R}$, where $\langle d,d' \rangle=(d,d')-(d',d)$ denotes the antisymmetrized Euler pairing
\[ (d,d')=\sum_{i\in Q_0}d_id'_i - \sum_{\alpha:i\to j}d_id'_j \]
satisfying $(\dim V,\dim V')=\dim \Hom_{\CC Q}(V,V')-\dim \Ext^1_{\CC Q}(V,V')$ for all $\CC Q$-representations $V,V'$.
\end{definition}
\section{From constructible functions to motivic theories}
\subsection{Constructible functions}
Let us start by recalling some facts about constructible functions. A constructible function is a function $a:X(\mathbb{C}) \to \mathbb{Z}$ on the set of (closed) points of a scheme/variety/manifold $X$ over $\mathbb{C}$ with only finitely many values on each connected component of $X$ and such that the level sets of $a$ are the (closed) points of locally closed subsets of $X$. We denote with $\Con(X)$ the group of constructible functions on $X$.
\begin{exercise}
Assume that $X$ is connected. Show that the map associating to every irreducible closed subset $V$ of $X$ its characteristic function extends to an isomorphism $\oplus_{x\in X}\mathbb{Z} x \cong \oplus_{V\subset X} \mathbb{Z} V\xrightarrow{\sim} \Con(X)$, where the first sum is over all not necessarily closed points $x\in X$, and the second sum is taken over all irreducible closed subsets $V\subset X$.
\end{exercise}
Clearly, we can pull back constructible functions and also multiply them pointwise. In contrast to the usual notation, we denote the pointwise product with $a\cap b$, i.e.\ $(a\cap b)(x)=a(x)b(x)$. The constant function $\mathbbm{1}_X(x)=1$ for all $x\in X(\mathbb{C})$ is the unit for the $\cap$-product. There is another product, the external product $a\boxtimes b =\pr_X^\ast(a)\cap\pr_Y^\ast(b)$ of two functions $a\in \Con(X)$ and $b\in \Con(Y)$ on $X\times Y$ such that $a\cap b=\Delta_X^\ast (a\boxtimes b)$ if $Y=X$. The unit for the $\boxtimes$-product is $1\in \mathbb{Z}=\Con(\Spec\mathbb{C})$. Moreover, we can define a push-forward of a constructible function
$a\in \Con(X)$ along a morphism $u:X\to Y$ of finite type by\footnote{For every scheme $X$ locally of finite type over $\mathbb{C}$, we denote with $X^{an}$ the ``analytification'' of $X$ which is an analytic space locally isomorphic to the vanishing locus of holomorphic functions on $\mathbb{C}^n$. If $X$ is smooth, $X^{an}$ is a complex manifold. In any case $X^{an}$ carries the analytic topology which is much finer than the Zariski topology on $X$.} $u_!(a)(y):=\int_{u^{-1}(y)^{an}} a\, d\chi_c:= \sum_{m\in \mathbb{Z}}m\chi_c\{x\in X\mid u(x)=y, a(x)=m\}^{an}$ for $y\in Y$. Here $\chi_c$ denotes the Euler characteristic with compact support, i.e.\ the alternating sum of the dimensions of the compactly supported cohomology. One can think of $\chi_c$ as a signed measure $\chi_c^X$ on $X$, even though it is only additive and not $\sigma$-additive. Given a constructible function $a$ on $X$, we get a new measure $a\cdot\chi_c^X$ on $X$ of density $a$ with respect to $\chi_c^X$. A push-forward of a measure is well-defined and $u_!(a\chi_c^X)$ has density $u_!(a)$ with respect to $\chi_c^Y$. Using the push-forward, one can define a third product for constructible functions on a monoidal scheme, i.e.\ a scheme $X$ with two maps $0:\Spec\CC\to X$ and $+:X\times X\to X$ of finite type satisfying an associativity and unit law. The convolution product is given by $a b=+_!(a\boxtimes b)$ and is commutative if $+$ is commutative. The unit is given by $0_!(1)$ with $1\in \Con(\Spec \CC)= \mathbb{Z}$ being the unit for the $\boxtimes$-product. If we had taken $X={\mathbb{A}^1}$, the convolution product is just the ``constructible version'' of the usual convolution product of integrable functions. The free commutative monoid generated by a scheme $X$ is given by $\Sym(X) = \sqcup_{n\in \mathbb{N}} \Sym^n(X)$ with $\Sym^n(X)=X^n/\!\!/S_n$, and $\oplus:\Sym(X) \times \Sym(X)\longrightarrow \Sym(X)$ is just the concatenation of unordered tuples of (geometric) points of $X$. The unit $0:\Spec\mathbb{C}=:\Sym^0(X) \hookrightarrow \Sym(X)$ is given by the ``empty tuple''. We can apply the definition of the convolution product $ab:=\oplus_!(a\boxtimes b)$ to $\Con(\Sym(X))$ making it into a commutative ring. This (convolution) ring has even more structure. Indeed, there is a family of maps $\sigma^n:\Con(X) \to \Con(\Sym^n(X))$ mapping the characteristic function of $V\subseteq X$ to the characteristic function of $\Sym^n(V)\subseteq \Sym^n(X)$.
\begin{example}\rm
Consider the example $X=\Spec \CC$. Then $\Sym(X)\cong\mathbb{N}$, and $\Con(\Sym(X))\cong \mathbb{Z}[[t]]$ follows. The convolution product is just the ordinary product of power series and $\sigma^n(at)={ a+n-1 \choose n}t^n$. The pointwise $\cap$-product is known as the Hadamard product of two power series.
\end{example}
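In terms of generating series the previous example reads as follows: for $a\ge 0$ one finds
\[ \sum_{n\ge 0}\sigma^n(at)=\sum_{n\ge 0}{a+n-1\choose n}t^n=(1-t)^{-a}, \]
so the additivity $\sigma^n(a+b)=\sum_{l=0}^n\sigma^l(a)\sigma^{n-l}(b)$ recorded in Proposition \ref{motivic_theories}(iv) below amounts to the familiar identity $(1-t)^{-(a+b)}=(1-t)^{-a}(1-t)^{-b}$ of power series.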
Let us collect the main properties of the structures described above.
\begin{proposition} \label{motivic_theories}
By taking pull-backs and push-forwards of constructible functions, we obtain a functor $\Con$ from the category $\Sch_\CC$ to the category of abelian groups which is both contravariant with respect to all morphisms and covariant with respect to all morphisms of finite type, i.e.\ for every morphism $u:X\to Y$
there is a group homomorphism $u^\ast:\Con(Y)\longrightarrow \Con(X)$, and if $u$ is of finite type, there is also a group homomorphism $u_!:\Con(X)\longrightarrow \Con(Y)$. Moreover, there is an ``exterior'' product
\[ \boxtimes: \Con(X)\otimes \Con(Y) \longrightarrow \Con(X\times Y) \]
defined for every pair $X,Y\in \Sch_\CC$ which is associative, symmetric\footnote{If $\tau:X\times Y \stackrel{\sim}{\to} Y\times X$ is the transposition, being symmetric means $\tau_!(a\boxtimes b )=\tau^\ast(a\boxtimes b) = b\boxtimes a$} and has a unit $1\in \Con(\Spec \CC )$. Finally, there are also operations
\[ \sigma^n: \Con(X) \longrightarrow \Con(\Sym^n( X)) \]
for $n\in \mathbb{N}$ such that $\sigma^n(1)=1$ holds for all $n\in \mathbb{N}$. Additionally, we have the following properties.
\begin{enumerate}
\item[(i)] Considered as a functor from $\Sch_\CC^{op}$ to abelian groups, $\Con$ commutes with all (not necessarily finite) products, i.e.\ the morphism
\[ \Con(X) \longrightarrow \prod_{X_i\in\pi_0(X)} \Con(X_i) \]
given by restriction to connected components is an isomorphism for all $X\in \Sch_\CC$.
\item[(ii)] ``Base change'' holds, i.e.\ for every cartesian diagram
\[ \xymatrix { X\times_Z Y \ar[r]^{\tilde{v}} \ar[d]_{\tilde{u}}& X \ar[d]^u \\ Y \ar[r]_v & Z } \]
with $u$ and, therefore, also $\tilde{u}$ of finite type, we have $\tilde{u}_!\circ \tilde{v}^\ast = v^\ast \circ u_!$.
\item[(iii)] The functor $\Con$ commutes with exterior products and $\sigma^n$, i.e.\ \\ $(u\boxtimes v)^\ast(a\boxtimes b)=u^\ast(a)\boxtimes v^\ast(b)$
for all $u:X\to X', v:Y\to Y', a\in \Con(X'), b\in \Con(Y')$. If $u, v$ are of finite type, then $(u\boxtimes v)_!(a\boxtimes b)=u_!(a)\boxtimes v_!(b)$ and $\Sym^n(u)_!(\sigma^n(a))=\sigma^n(u_!(a))$ for all $a\in \Con(X), b\in \Con(Y), n\in \mathbb{N}$.
\item[(iv)] Using the convolution product $ab=\oplus_!(a\boxtimes b)$ and thinking of $\Con(\Sym^n(X))$ as being a subgroup of $\Con(\Sym(X))$ by means of $\big(\Sym^n(X)\hookrightarrow\Sym(X)\big)_!$, we have
\[ \sigma^n(a+b)=\sum_{l=0}^n \sigma^l(a)\sigma^{n-l}(b) \]
with $\sigma^1(a)=a$ and $\sigma^0(a)=1\in \Con(\Spec\CC)\hookrightarrow \Con(\Sym(X))$ for all $a,b\in \Con(X)$.
\item[(v)] The ``motivic property'' holds, i.e.\ for every $X$ and every closed subscheme $Z\subseteq X$ giving rise to inclusions $i:Z \hookrightarrow X$ and $j:X\!\setminus\! Z \hookrightarrow X$, we have $i^\ast i_!=\id_{\Con(Z)}, j^\ast j_!=\id_{\Con(X\!\setminus\! Z)}, j^\ast i_!=i^\ast j_!=0$ and
\[ a=i_!i^\ast(a) + j_!j^\ast(a) \quad\forall a\in \Con(X).\]
\item[(vi)] The equation $\sigma^n(\mathbbm{1}_X)=\mathbbm{1}_{\Sym^n(X)}$ holds for all $X$ and all $n\in \mathbb{N}$ with $\mathbbm{1}_X=(X\to \Spec\mathbb{C})^\ast(1)$ and similarly for $\mathbbm{1}_{\Sym^n(X)}$.
\end{enumerate}
\end{proposition}
\begin{exercise}
Show that the projection formula $u_!(a\cap u^\ast(b))=u_!(a)\cap b$ holds for all $u:X\to Y$ of finite type and all $a\in R(X), b\in R(Y)$ by using the properties mentioned in Proposition \ref{motivic_theories} and $a\cap b=\Delta_X^\ast(a\boxtimes b)$. Hint: Consider the diagram
\[ \xymatrix { X \ar[d]_u \ar[r]^{\Delta_X} & X\times X \ar[r]^{\id_X \times u} & X\times Y \ar[d]^{u\times \id_Y} \\ Y \ar[rr]^{\Delta_Y} & & Y\times Y.} \]
\end{exercise}
\subsection{Motivic theories for schemes}
Generalizing constructible functions, we define a motivic theory\footnote{Motivic theories are special cases of reduced motivic $\lambda$-ring $(\Sch,ft)$-theories defined in \cite{DavisonMeinhardt3}. Every reduced motivic $\lambda$-ring $(\Sch,ft)$-theory is a motivic theory in our sense if $\sigma^n(\mathbbm{1}_X)=\mathbbm{1}_{\Sym^n(X)}$ holds for all $X$ and all $n\in \mathbb{N}$. (cf.\ Proposition \ref{motivic_theories}.(vi))} to be a rule associating to every scheme $X$ an abelian group $R(X)$, like $\Con(X)$, along with pull-backs $u^\ast:R(Y)\to R(X)$ for all morphisms $u:X\to Y$ and push-forwards $u_!:R(X)\to R(Y)$ if $u$ is of finite type. Moreover, there should be some associative, symmetric exterior product $\boxtimes:R(X)\times R(Y) \to R(X\times Y)$ with unit element $1\in R(\Spec\CC)$, and some operations $\sigma^n:R(X) \to R(\Sym^n(X))$ for all $n\in \mathbb{N}$, satisfying exactly the same properties as $\Con(X)$ given in Proposition \ref{motivic_theories}. Similar to the case $\Con$, we can construct a $\cap$-product $a\cap b=\Delta_X^\ast(a\boxtimes b)$ with unit $\mathbbm{1}_X=(X\to \Spec\mathbb{C})^\ast(1)$ and a convolution product $ab=+_!(a\boxtimes b)$ with unit $1_0=0_!(1)$ if, additionally, $(X,+,0)$ is a (commutative) monoid with $+$ being of finite type. Note that all these products coincide on $R(\Spec\CC)$.
\begin{exercise} \label{Euler_characteristics}
Given a motivic theory $R$ and a scheme $X$ with morphism $c:X\to \Spec\CC$, we define $[X]_R:=c_!c^\ast(1)\in R(\Spec\CC)$.
\begin{enumerate}
\item[(i)] Show that $[X]_R=[Z]_R+[X\!\setminus\! Z]_R$ (cut and paste relation) for every closed subscheme $Z\subseteq X$ and $[X\times Y]_R=[X]_R[Y]_R$ by applying the defining properties of Proposition \ref{motivic_theories}. In particular, $X\mapsto [X]_R\in R(\Spec\CC)$ is a generalization of the classical Euler characteristic $\chi_c:\Sch_\CC \to \mathbb{Z}=\Con(\Spec\CC)$.
\item[(ii)] Use {\rm (i)} to show $[\mathbb{P}^n]_R=\mathbb{L}_R^n+\ldots+\mathbb{L}_R+1=(\mathbb{L}^{n+1}_R-1)/(\mathbb{L}_R-1)$ with $\mathbb{L}_R:=[{\mathbb{A}^1}]_R.$
\end{enumerate}
\end{exercise}
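For part {\rm (ii)}, a possible line of argument: the hyperplane at infinity is a closed subscheme $\mathbb{P}^{n-1}\subset \mathbb{P}^n$ with open complement $\AA^n$, so the cut and paste relation from {\rm (i)} gives
\[ [\mathbb{P}^n]_R=[\AA^n]_R+[\mathbb{P}^{n-1}]_R=\mathbb{L}_R^n+[\mathbb{P}^{n-1}]_R, \]
using $[\AA^n]_R=[{\mathbb{A}^1}\times\ldots\times{\mathbb{A}^1}]_R=\mathbb{L}_R^n$, and the claimed formula follows by induction on $n$.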
\begin{exercise} \label{fiber_bundle} We use the notation introduced in the previous exercise.
\begin{enumerate}
\item[(i)] Assume $Y\to X$ is a Zariski locally trivial fiber bundle with fiber $F$. Use the cut and paste relation to prove $[Y]_R=[F]_R[X]_R$.
\item[(ii)] Use {\rm (i)} applied to the projection onto the first column and induction over $n\in \mathbb{N}$ to show $[\Gl(n)]_R=\prod_{i=0}^{n-1} (\mathbb{L}_R^n-\mathbb{L}_R^i)$.
\item[(iii)] Use {\rm (i)} to prove $[\Gr(k,n)]_R=[\Gl(n)]_R/\big([\Gl(k)]_R[\Gl(n-k)]_R\mathbb{L}_R^{k(n-k)}\big)$.
\end{enumerate}
\end{exercise}
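As a quick consistency check of {\rm (iii)} (the factor $\mathbb{L}_R^{k(n-k)}$ accounts for the unipotent radical of the parabolic subgroup), take $k=1$ and $n=2$:
\[ [\Gr(1,2)]_R=\frac{(\mathbb{L}_R^2-1)(\mathbb{L}_R^2-\mathbb{L}_R)}{(\mathbb{L}_R-1)^2\,\mathbb{L}_R}=\mathbb{L}_R+1=[\mathbb{P}^1]_R, \]
in agreement with Exercise \ref{Euler_characteristics}(ii).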
A morphism $\eta:R\to R'$ between motivic theories is a collection of group homomorphisms $\eta_X:R(X)\to R'(X)$ commuting with pull-backs, push-forwards and exterior products. It is called a $\lambda$-morphism, if it additionally commutes with the $\sigma^n$-operations. Thus, we obtain a category of motivic theories containing the subcategory of motivic theories with $\lambda$-morphisms.\\
The rule $X\mapsto R(X)=0$ is the terminal object in the category of motivic theories. Moreover, the following holds.
\begin{lemma} \label{initial_object}
The category of motivic theories has an initial object given by the (completed) relative Grothendieck group $\underline{\Ka}_0(\Sch):X\mapsto \underline{\Ka}_0(\Sch_X)$ as constructed below. The unique morphisms starting at $\underline{\Ka}_0(\Sch)$ are even $\lambda$-morphisms.
\end{lemma}
Instead of giving an ad hoc definition of $\underline{\Ka}_0(\Sch)$, let us motivate the construction by looking at constructible functions. Starting with the constant function $1$ on $\Spec\CC$, we take the pull-back $c^\ast(1)=:\mathbbm{1}_X$ for the constant map $c:X \to \Spec\CC$ which is the constant function with value $1$ on $X$, but more importantly, it is the unit object for the $\cap$-product. For any morphism $v:V\to X$ of finite type, consider the function $v_!(\mathbbm{1}_V)\in \Con(X)$ and denote it by\footnote{We already introduced the shorthand $[V]_{\Con}$ for $[V\to \Spec\CC]_{\Con}$ in Exercise \ref{Euler_characteristics}.} $[V\xrightarrow{v} X]_{\Con}$. If $v:V\hookrightarrow X$ is the embedding of a locally closed subscheme, $[V\hookrightarrow X]_{\Con}$ is just the characteristic function of $V$. \\
Using Proposition \ref{motivic_theories}, we get
\begin{eqnarray}
&& \label{eq1} [V\to X]_{\Con}=[Z\to X]_{\Con} + [V\!\setminus\! Z \to X]_{\Con} \mbox{ for all closed }Z\subset V, \\
&& \label{eq2} 1=[\Spec\CC\xrightarrow{\id} \Spec \CC]_{\Con}, \\
&& u^\ast([W\to Y]_{\Con})=[X\times_Y W \to X]_{\Con} \mbox{ for all }u:X\to Y, \\
&& u_!([V\xrightarrow{v} X]_{\Con})=[V\xrightarrow{u\circ v} Y]_{\Con} \mbox{ if }u:X\to Y\mbox{ is of finite type}, \\
&& \label{eq5} [V\xrightarrow{v}X]_{\Con}\boxtimes[W\xrightarrow{w}Y]_{\Con}= [V\times W \xrightarrow{v\times w} X\times Y]_{\Con}, \\
&& \label{eq6} \sigma^n([V\to X]_{\Con})= [\Sym^n(V) \to \Sym^n(X)]_{\Con}.\\
&& \label{eq8} [V\xrightarrow{v}X]_{\Con}=[V'\xrightarrow{v'}X]_{\Con} \mbox{ if there is an isomorphism } \\
&& \nonumber v'':V\to V' \mbox { such that }v=v'\circ v''.
\end{eqnarray}
Obviously, the same must hold in every motivic theory as they share the same properties, and so the same applies to the initial object if it exists. Moreover, for connected $X$ the group $\Con(X)$ is generated by all classes $[V\to X]_{\Con}$ satisfying relation (\ref{eq1}). The same must be true for the initial motivic theory since otherwise the subgroup spanned by the elements $[V\to X]_{init}$ for connected $X$ and extended by Property \ref{motivic_theories}(i) for non-connected $X$ would be a proper subtheory of the initial theory, which would lead to a contradiction. However, there are more relations in $\Con(X)$, as for example $[\mathbb{G}_m\xrightarrow{z^d}\mathbb{G}_m]=d[\mathbb{G}_m\xrightarrow{\id} \mathbb{G}_m]$, which might not hold in other motivic theories, for example the initial one. Dropping the subscript ``init'', we will therefore define our (hopefully) initial theory by associating to every connected scheme $X$ the group $\Ka_0(\Sch_X)$ generated by symbols $[V\xrightarrow{v}X]$ for every isomorphism class (due to equation (\ref{eq8})) of morphisms $v:V\to X$ of finite type, subject to the relation (\ref{eq1}). For non-connected $X$ we simply put
\[ \underline{\Ka}_0(\Sch_X):=\prod_{X_i\in \pi_0(X)} \Ka_0(\Sch_{X_i}). \]
To obtain a motivic theory $\underline{\Ka}_0(\Sch)$, we must define $1\in\Ka_0(\Sch_\CC), u^\ast, u_!,\boxtimes$ and $\sigma^n$ as in equations (\ref{eq2})--(\ref{eq6}), at least over connected components. It has been shown in \cite{GLMH1} that $\sigma^n$-operations satisfying these properties do indeed exist. Moreover, the authors prove that $\sigma^n(a\mathbb{L})=\sigma^n(a)\mathbb{L}^n$ holds for every $a\in \underline{\Ka}_0(\Sch_X)$ and every $n\in \mathbb{N}$, where $\mathbb{L}=c_!c^\ast(1)=[\AA^1\xrightarrow{c} \Spec\mathbb{C}]\in \Ka_0(\Sch_\CC)$ is considered as an element of $\underline{\Ka}_0(\Sch_{\Sym(X)})$ via the embedding $0_!:\Ka_0(\Sch_\CC)\hookrightarrow \underline{\Ka}_0(\Sch_{\Sym(X)})$.
\begin{exercise} Use the properties of a morphism between motivic theories to show that $[V\to X] \mapsto [V\to X]_R$ defines a homomorphism $\eta_X:\Ka_0(\Sch_X)\to R(X)$ for connected $X$ which extends to a morphism $\eta:\underline{\Ka}_0(\Sch)\to R$ of motivic theories. Prove that this morphism is the only possible one. Hence, $\underline{\Ka}_0(\Sch)$ is the initial object in the category of motivic theories. Moreover, show that $\eta$ is a $\lambda$-morphism.
\end{exercise}
The initial property of $\underline{\Ka}_0(\Sch)$ is just a generalization of the well-known property that $X\mapsto [X]\in \Ka_0(\Sch_\CC)$ is the universal Euler characteristic.\\
Let $R^{gm}(X)\subset R(X)$ be the subgroup generated by all elements $[V\to X]_R$ if $X$ is connected and $R^{gm}(X):=\prod_{X_i\in \pi_0(X)}R^{gm}(X_i)$ for general $X$. One should think of elements in $R^{gm}(X)$ as ``geometric'' since they are $\mathbb{Z}$-linear combinations of elements obtained by geometric constructions, namely pull-backs and push-forwards of the unit $1\in R(\Spec\mathbb{C})$.
\begin{exercise} \label{geometric_part} Show that $X\mapsto R^{gm}(X)$ defines a subtheory of $R$. By construction, it is the image of the $\lambda$-morphism $\eta:\underline{\Ka}_0(\Sch)\to R$ obtained in the previous exercise. Show that $\Con^{gm}=\Con$ and $\underline{\Ka}_0(\Sch)^{gm}=\underline{\Ka}_0(\Sch)$. Prove that $\sigma^n(a\mathbb{L}_R)=\sigma^n(a)\mathbb{L}_R^n$ holds for every $a\in R^{gm}(X)$ and every $n\in \mathbb{N}$.
\end{exercise}
\subsection{Motivic theories for quotient stacks}
In the previous section we generalized constructible functions and the classical Euler characteristic to more refined ``functions'' and invariants. When it comes to moduli problems, we should also be able to compute refined invariants of quotient stacks as they occur naturally in moduli problems. Hence, we need to extend motivic theories to disjoint unions of quotient stacks $X/G$ for schemes $X$ locally of finite type over $\mathbb{C}$ and linear algebraic groups $G$. This is the topic of this subsection.
\begin{exercise} \label{special}
Given a closed embedding $G\hookrightarrow \Gl(n)$ of a linear algebraic group $G$ and a $G$-action on a scheme $X$, show that the morphism $X/G \to (X\times_G \Gl(n))/\Gl(n)$ induced by $X\ni x\mapsto (x,1)\in X\times_G\Gl(n)$ and $G\hookrightarrow \Gl(n)$ as in Exercise \ref{stack_morphisms} is in fact an isomorphism of quotient stacks. Hint: Given a principal $\Gl(n)$-bundle $P\to S$ and a $\Gl(n)$-equivariant morphism $\psi:P\to X\times_G\Gl(n)$, show that $\psi^{-1}(X)\to S$ for $X\hookrightarrow X\times_G\Gl(n)$ is a principal $G$-bundle over $S$. To prove local triviality of $\psi^{-1}(X)\to S$ one has to construct local sections of $\psi^{-1}(X)\to S$. For this, one can take a local section $\nu:U\to P$ of $P\to S$ and a lift $(f,g):\tilde{U} \to X\times \Gl(n)$ of $\psi\circ \nu:U\to X\times_G \Gl(n)$ on a possibly smaller \'{e}tale neighborhood $s\in \tilde{U}\subset U$ of $s\in S$. Then $\tilde{\nu} :\tilde{U}\ni t \longmapsto \nu(t)g(t)^{-1}\in \psi^{-1}(X)$ is a local section of $\psi^{-1}(X)\to S$. Note that if $G$ is special, these \'{e}tale neighborhoods $U$ and $\tilde{U}$ can even be replaced with Zariski neighborhoods.
\end{exercise}
\begin{definition} A stacky motivic theory $R$ is a rule associating to every disjoint union $\mathfrak{X}=\sqcup_{i\in I}X_i/G_i$ of quotient stacks with linear algebraic groups $G_i$ an abelian group $R(\mathfrak{X})$ along with pull-backs $u^\ast:R(\mathfrak{Y})\to R(\mathfrak{X})$ for all (1-)morphisms $u:\mathfrak{X} \to \mathfrak{Y}$ and push-forwards $u_!:R(\mathfrak{X})\to R(\mathfrak{Y})$ if $u$ is of finite type. Moreover, there should be some associative, symmetric exterior product $\boxtimes:R(\mathfrak{X})\times R(\mathfrak{Y}) \to R(\mathfrak{X}\times \mathfrak{Y})$ with unit element $1\in R(\Spec\CC)$, and some operations $\sigma^n:R(X) \to R(\Sym^n(X))$ for all $n\in \mathbb{N}$ and all schemes $X$, satisfying the stacky analogue of the properties of $\Con(-)$ given in Proposition \ref{motivic_theories}.
\end{definition}
\begin{remark} \rm
There are two technical difficulties to overcome when we try to generalize Proposition \ref{motivic_theories}, which serves as our definition of a (stacky) motivic theory, to disjoint unions of quotient stacks. First of all, we need to explain what the correct generalization of a finite type morphism ought to be. For us, this is a (1-)morphism $u:\mathfrak{X}\to \mathfrak{Y}$ of algebraic stacks such that the preimage of each ``connected component'' $Y/H$ (with connected $Y$) consists of only finitely many connected components $X_i/G_i$ of $\mathfrak{X}$. Secondly, we need to define $\Sym^n(X/G)$ for quotient stacks. There is an obvious candidate given by the quotient stack $X^n/(S_n\ltimes G^n)$. However, if $G=\{1\}$ is the trivial group, we get the quotient stack $X^n/S_n$ which is different from its coarse ``moduli space'' $\Sym^n(X)=X^n/\!\!/S_n$. To avoid these problems, we only require the existence of $\sigma^n$-operations for schemes $\mathfrak{X}=X$ and not for general disjoint unions of quotient stacks.
\end{remark}
\begin{example}\rm
There is no stacky motivic theory $R$ with $R|_{\Sch_\CC}=\Con$ such that the pull-back
$\rho^\ast:R(X/G) \to R(X)=\Con(X)$ is an embedding. Indeed, consider the case $X=\Spec\CC$ and $G=\mathbb{G}_m$. Then $X \xrightarrow{\rho} X/G \to \Spec\CC$ is the identity, and $\rho_!(\mathbbm{1}_X)$ cannot be zero. By assumption, $\rho^\ast\rho_!(\mathbbm{1}_X)$ is also nonzero. However, applying base change to the diagram
\[ \xymatrix { X\times G \ar[r]^m \ar[d]_{\pr_X} & X \ar[d]^\rho \\ X \ar[r]_\rho & X/G }\]
with $m:X\times G \to X$ denoting the (trivial) group action, $\rho^\ast\rho_!(\mathbbm{1}_X)=\pr_{X\, !}(\mathbbm{1}_{X\times G})=\chi_c(G)\mathbbm{1}_X =0$, a contradiction.
\end{example}
Applying the functoriality of the pull-back to the previous diagram, we obtain $\pr_X^{\ast}(b)=m^\ast(b)$ for $b=\rho^\ast(a)$. In other words, for every stacky motivic theory $R$, the image of $\rho^\ast$ is contained in the subgroup $R(X)^G:=\{a\in R(X)\mid \pr_X^{\ast}(a)=m^\ast(a)\}$ of ``$G$-invariant'' elements.
Despite the negative result given by the previous example, we will provide a functorial construction which associates to every motivic theory $R$ satisfying
\begin{equation} \label{eq7}
\sigma^n(a\mathbb{L}_R)=\sigma^n(a)\mathbb{L}_R^n \quad\forall \;a\in R(X)
\end{equation}
another motivic theory $R^{st}$ such that $R^{st}$ extends to a stacky motivic theory, also denoted with $R^{st}$, for which $\rho^\ast:R^{st}(X/G)\to R^{st}(X)^G$ is an isomorphism. Moreover, there is a morphism $R\to R^{st}|_{\Sch_\CC}$ of motivic theories satisfying the property that every morphism $R\to R'|_{\Sch_\CC}$ with $R'$ being a stacky motivic theory satisfying $\rho^\ast:R'(X/G) \xrightarrow{\sim} R'(X)^G$ must factorize through $R\to R^{st}$. In particular, the restriction functor from the category of stacky motivic theories $R'$ satisfying (\ref{eq7}) and $\rho^\ast:R'(X/G) \xrightarrow{\sim} R'(X)^G$ to the category of motivic theories satisfying (\ref{eq7}) has a left adjoint given by $R\mapsto R^{st}$. As we will see, $\Con^{st}=0$.
Recall that a linear algebraic group $G$ was called special if every \'{e}tale locally trivial principal $G$-bundle is already Zariski locally trivial. In particular, given a closed embedding $G\hookrightarrow \Gl(n)$, the map $\Gl(n)\to \Gl(n)/G$ must be a Zariski locally trivial principal $G$-bundle. One can show that this property is already sufficient for being special. Hence, $\Gl(n)$ is special for every $n\in \mathbb{N}$. As a result of Exercise \ref{fiber_bundle} we get $[\Gl(n)]_R=[G]_R[\Gl(n)/G]_R$ in $R(\Spec\CC)$ for every motivic theory $R$. In particular, $[G]_R$ is invertible for every special group $G$ if and only if $[\Gl(n)]_R$ is invertible for every $n\in \mathbb{N}$.
\begin{definition}
Given a group $G$, a (1-)morphism $u:\mathfrak{P}\to \mathfrak{X}$ of stacks is called a principal $G$-bundle on $\mathfrak{X}$ if $u$ is representable and the pull-back $\tilde{u}:X\times_\mathfrak{X} \mathfrak{P} \longrightarrow X$ of $u$ along every morphism $X\to \mathfrak{X}$ with $X$ being a scheme is a principal $G$-bundle on $X$.
\end{definition}
\begin{exercise}
Given a stacky motivic theory $R$, we want to show in several steps that the condition that $\rho^\ast: R(X/G)\to R(X)^G$ is an isomorphism for every special group $G$ is equivalent to the condition that $[\mathfrak{P}\xrightarrow{u} \mathfrak{X}]_R:=u_!(\mathbbm{1}_\mathfrak{P})=[G]_R\mathbbm{1}_\mathfrak{X}$ for every special group $G$ and every principal $G$-bundle $u:\mathfrak{P}\to \mathfrak{X}$ in the category of disjoint unions of quotient stacks.
\begin{enumerate}
\item Let $\mathfrak{P}\to \mathfrak{X}$ be a principal $G$-bundle and assume for simplicity that $\mathfrak{X}=X/\Gl(n)$. Consider the cartesian diagram
\[ \xymatrix {P \ar[r]^{\tilde{\rho}} \ar[d]^{\tilde{u}} & \mathfrak{P} \ar[d]^u \\ X \ar[r]^\rho & \mathfrak{X}.} \]
By assumption, $\tilde{u}:P\to X$ is a principal $G$-bundle. Use injectivity of $\rho^\ast$ to prove $u_!(\mathbbm{1}_\mathfrak{P})=[G]_R\mathbbm{1}_\mathfrak{X}$ (Hint: Use base change and Exercise \ref{fiber_bundle}.) Extend this result to arbitrary disjoint unions of quotient stacks.
\item Conversely, assume that $u_!(\mathbbm{1}_\mathfrak{P})=[G]_R\mathbbm{1}_\mathfrak{X}$ for every principal $G$-bundle in the category of quotient stacks.
Show first that $[G]_R$ is invertible in $R(\Spec\CC)$ with inverse $[\Spec\CC/G]_R$. (Hint: Consider the principal $G$-bundle $\Spec\CC \to \Spec\CC/G$.)
\item Secondly, prove that $\rho^\ast:R(X/G)\to R(X)^G$ is invertible by showing that $\rho_!(-)/[G]_R$ is an inverse. (see \cite{DavisonMeinhardt3}, Lemma 5.13 if you need help)
\end{enumerate}
\end{exercise}
For connected $X$ we define $R^{st}(X):=R(X)[ [\Gl(n)]^{-1}_R \mid n\in \mathbb{N}]$ using the $R(\Spec\CC)$-module structure of $R(X)$ by means of the $\boxtimes$-product. We extend it via $R^{st}(X)=\prod_{X_i\in \pi_0(X)}R^{st}(X_i)$ to non-connected $X$. The morphism $R(X)\to R^{st}(X)$ is obvious, and it is also easy to see how to extend $u_!, u^\ast$ for $u:X\to Y$ and the $\boxtimes$-product. The only nontrivial part is the extension of $\sigma^n$. For $X\in \Sch_\CC$ and $a\in R(X)$ define
\[ \sigma_t(a):=\sum_{n\in \mathbb{N}} \sigma^n(a)t^n \in R(\Sym(X))[[t]] \]
The Adams operations $\psi^n:R(X)\to R(\Sym^n(X))\subseteq R(\Sym(X))$ are defined by means of the series
\[ \psi_t(a):=\sum_{n\ge 1} \psi^n(a)t^{n} := \frac{d\log \sigma_t(a)}{d\log t}=t\sigma_t(-a)\frac{d\sigma_t(a)}{dt},\]
where the product is the convolution product in $R(\Sym(X))$. Using the properties of $\sigma^n$, we observe $\sigma_t(0)=1$ and $\sigma_t(a+b)=\sigma_t(a)\sigma_t(b)$. Thus, $\psi_t(0)=0$ as well as $\psi_t(a+b)=\psi_t(a)+\psi_t(b)$ follows. Property {\rm (\ref{eq7}) } implies $\psi^n(a P(\mathbb{L}_R))=\psi^n(a)P(\mathbb{L}_R^n)$ for every polynomial $P(x)\in \mathbb{Z}[x]$. Due to Exercise \ref{fiber_bundle}(ii), we have to extend $\psi^n$ to $R^{st}(X)$ by means of
\[ \psi^n\left( \frac{a}{\prod_{i\in I} [\Gl(m_i)]_R }\right):= \frac{\psi^n(a)}{\prod_{i\in I} P_{m_i}(\mathbb{L}^n_R)} \]
using the polynomial $P_m(x)=\prod_{j=0}^{m-1}(x^m-x^j)$ satisfying $[\Gl(m)]_R=P_m(\mathbb{L}_R)$. Having extended the Adams operations, we can also extend the $\sigma^n$-operations by putting
\[ \sigma_t(a)=\exp\left(\int \psi_t(a)\frac{dt}{t}\right). \]
Note that the last expression involves rational coefficients, but one can show that the rational coefficients disappear in the expression for
\[ \sigma_t\left(\frac{a}{\prod_{i\in I}[\Gl(m_i)]_R}\right)=\exp\left(\sum_{n\ge 1} \frac{\psi^n(a)t^n}{n\prod_{i\in I} P_{m_i}(\mathbb{L}_R^n)} \right) \]
if we express $\psi^n(a)$ in terms of $\sigma^m(a)$ for $1\le m\le n$. (See \cite{DavisonMeinhardt3}, Appendix B for more details.)
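For orientation, here is a small computation over $X=\Spec\CC$, assuming property (\ref{eq7}): from $\sigma^n(1)=1$ and $\sigma^n(\mathbb{L}_R)=\sigma^n(1)\mathbb{L}_R^n=\mathbb{L}_R^n$ we get
\[ \sigma_t(1)=\sum_{n\ge 0}t^n,\quad \psi^n(1)=1,\qquad \sigma_t(\mathbb{L}_R)=\sum_{n\ge 0}\mathbb{L}_R^nt^n,\quad \psi^n(\mathbb{L}_R)=\mathbb{L}_R^n, \]
and the extension formula with $P_1(x)=x-1$ yields $\psi^n\big([\Gl(1)]_R^{-1}\big)=(\mathbb{L}_R^n-1)^{-1}$.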
\begin{exercise}
Show that $\Con^{st}(X)=0$ for all $X$.
\end{exercise}
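One possible line of reasoning: $\mathbb{L}_{\Con}=\chi_c({\mathbb{A}^1})=1$, hence
\[ [\Gl(n)]_{\Con}=\prod_{i=0}^{n-1}(1^n-1^i)=0 \quad\mbox{for all }n\ge 1, \]
and a localization in which $0$ becomes invertible is the zero ring; consequently $\Con^{st}(X)=\Con(X)[\,[\Gl(n)]_{\Con}^{-1}\mid n\in\mathbb{N}\,]=0$.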
Now, as we have constructed $R^{st}$ on schemes, we will put $R^{st}(\mathfrak{X}):=R^{st}(X)^G$ for a quotient stack $\mathfrak{X}=X/G$ with special group $G$ and connected $X$. We have to show that this definition is independent of the presentation of the quotient stack. For this let $\mathfrak{X}\cong Y/H$ be another presentation with a special group $H$. Let us form the cartesian square
\[ \xymatrix @C=1.5cm @R=1.5cm { X\times_\mathfrak{X} Y\times G\times H \ar@/_1pc/[d] \ar@/^1pc/[d] \ar@/^1pc/[r] \ar@/_1pc/[r] & X\times_\mathfrak{X} Y \times H \ar@/^1pc/[d] \ar@/_1pc/[d] \ar[r]^{\rho''} & Y\times H \ar@/^1pc/[d]^{m_Y} \ar@/_1pc/[d]_{\pr_Y} \\
X \times_\mathfrak{X} Y \times G \ar[d]^{\tau''} \ar@/^1pc/[r] \ar@/_1pc/[r] & X\times_\mathfrak{X} Y \ar[d]^{\tau'} \ar[r]^{\rho'} & Y \ar[d]^\tau \\
X\times G \ar@/^1pc/[r]^{m_X} \ar@/_1pc/[r]_{\pr_X} & X \ar[r]^\rho & \mathfrak{X} }
\]
with $\rho',\rho''$ and $\tau',\tau''$ being $G$- respectively $H$-principal bundles. The other maps are either projections or actions of $G$ or $H$. Applying $R^{st}$, we get the following diagram with exact rows and columns by construction of $R^{st}$, where $K$ denotes the kernel of, say, $\pr_Y^\ast-m_Y^\ast$.
\[ \xymatrix @C=1.3cm { & 0 \ar[d] & 0 \ar[d] & 0 \ar[d] \\
& K \ar[r] \ar[d] & R^{st}(X) \ar[r]^{\pr_X^\ast-m_X^\ast} \ar[d]^{\tau'^\ast} & R^{st}(X\times G) \ar[d]^{\tau''^\ast} \\
0 \ar[r] & R^{st}(Y) \ar[r]^{\rho'^\ast} \ar[d]^{\pr_Y^\ast-m_Y^\ast} & R^{st}(X\times_\mathfrak{X} Y) \ar[r] \ar[d] & R^{st}(X\times_\mathfrak{X} Y \times G) \ar[d] \\
0 \ar[r] & R^{st}(Y\times H) \ar[r]_{\rho''^\ast} & R^{st}(X\times_\mathfrak{X} Y \times H) \ar[r] & R^{st}(X\times_\mathfrak{X} Y \times G\times H) }
\]
Hence, $R^{st}(X)^{G}\cong K\cong R^{st}(Y)^{H}$ showing that $R^{st}(\mathfrak{X})$ is independent of the choice of a presentation.
Given a morphism $u:X/G \to Y/H$ of quotient stacks, we form the cartesian diagram
\[ \xymatrix { X\times_{Y/H} Y \ar[d]_{\tilde{\tau}} \ar[r]^(0.4){\tilde{\rho}} & X/G\times_{Y/H} Y \ar[r]^(0.6){\tilde{u}} \ar[d] & Y \ar[d]^\tau \\
X \ar[r]^\rho & X/G \ar[r]^u & Y/H. } \]
For $a\in R^{st}(X/G)\cong R^{st}(X)^G$ and $b\in R^{st}(Y)^H$ we put
\begin{eqnarray*}
u_!(a)&:=&(\tilde{u}\circ \tilde{\rho})_!\tilde{\tau}^\ast(a)/[G]_R\in R^{st}(Y)^H\mbox{ and }\\
u^\ast(b)&:=&\tilde{\tau}_!(\tilde{u}\circ\tilde{\rho})^\ast(b)/[H]_R\in R^{st}(X)^G\\
a\boxtimes b&:=& a\boxtimes b\in R^{st}(X\times Y)^{G\times H}.
\end{eqnarray*}
For disjoint unions $\mathfrak{X}=\sqcup_{i\in I}X_i/G_i$ of connected quotient stacks, we can always assume that $G_i$ is special for all $i\in I$ due to Exercise \ref{special}. Then, we need to define $R^{st}(\mathfrak{X}):=\prod_{i\in I}R^{st}(X_i/G_i)$ according to Proposition \ref{motivic_theories}(i), and extend $u_!, u^\ast$ and $\boxtimes$ in the natural way.
Given a morphism $\eta:R\to R'|_{\Sch_\CC}$ with $\rho^\ast:R'(X/G)\to R'(X)^G$ being an isomorphism, we define $\eta_{X/G}:R^{st}(X/G)=R^{st}(X)^G \xrightarrow{\eta_X} R'(X)^G\xrightarrow{\rho^{\ast\, -1}} R'(X/G)$ with $\rho^{\ast\, -1}(a)=\rho_!(a)/[G]_{R'}$ which is the only possible choice to extend $\eta$ to a morphism $R^{st}\to R'$ of stacky motivic theories. More details in a more general context are given in \cite{DavisonMeinhardt3}, Section 5.
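As a small illustration of these definitions (a standard computation in this formalism), write $[\mathfrak{X}]_{R^{st}}:=c_!c^\ast(1)$ for the structure morphism $c:\mathfrak{X}\to \Spec\CC$ as in Exercise \ref{Euler_characteristics}. For a quiver $Q$ without relations the scheme $X_d$ is an affine space of dimension $\sum_{\alpha:i\to j}d_id_j$, the group $G_d=\prod_{i\in Q_0}\Gl(d_i)$ is special, and $\rho:X_d\to \mathfrak{M}_d=X_d/G_d$ is a principal $G_d$-bundle. Hence $\rho_!(\mathbbm{1}_{X_d})=[G_d]_R\mathbbm{1}_{\mathfrak{M}_d}$ by the exercise on principal bundles above, and pushing forward to the point yields
\[ [\mathfrak{M}_d]_{R^{st}}=\frac{[X_d]_R}{[G_d]_R}=\frac{\mathbb{L}_R^{\sum_{\alpha:i\to j}d_id_j}}{\prod_{i\in Q_0}[\Gl(d_i)]_R}\in R^{st}(\Spec\CC). \]
In particular, $[\Spec\CC/\mathbb{G}_m]_{R^{st}}=(\mathbb{L}_R-1)^{-1}$.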
\section{Vanishing cycles}
The aim of this section is to introduce the notion of a vanishing cycle taking values in a (stacky) motivic theory $R$. We start by considering vanishing cycles of morphisms $f:X\to {\mathbb{A}^1}$ defined on smooth schemes $X$.
\subsection{Vanishing cycles for schemes}
\begin{definition}
Given a motivic theory $R$, a vanishing cycle\footnote{The definition given here differs from the one given in \cite{DavisonMeinhardt3} for the sake of simplicity. We do not require the support property but a blow-up formula.} (with values in $R$) is a rule associating to every regular function $f:X\to {\mathbb{A}^1}$ on a smooth scheme/variety or complex manifold $X$ an element $\phi_f\in R(X)$ such that the following holds.
\begin{enumerate}
\item If $u:Y\to X$ is smooth, then $\phi_{f\circ u}=u^\ast(\phi_f)$.
\item Let $X$ be a smooth variety containing a smooth closed subvariety $i:Y\hookrightarrow X$. Denote by $j:E\hookrightarrow \Bl_Y X$ the exceptional divisor of the blow-up $\pi:\Bl_Y X \to X$ of $X$ in $Y$. Then the formula \[ \pi_!\big( \phi_{f\circ \pi} - j_! \phi_{f\circ \pi\circ j}\big) = \phi_f - i_!\phi_{f\circ i}\] holds for every $f:X\to {\mathbb{A}^1}$.
\item Given two morphisms $f:X\to {\mathbb{A}^1}$ and $g:Y\to {\mathbb{A}^1}$ on smooth $X$ and $Y$, we introduce the notation $f\boxtimes g:X\times Y\xrightarrow{f\times g} {\mathbb{A}^1}\times {\mathbb{A}^1}\xrightarrow{+} {\mathbb{A}^1}$. Then $\phi_{f\boxtimes g}=\phi_f\boxtimes \phi_g$ in $R(X\times Y)$. Moreover, $\phi_{\Spec\CC\xrightarrow{0}{\mathbb{A}^1}}=1$.
\end{enumerate}
\end{definition}
\begin{lemma} \label{vanishing_cycle_morphism}
A collection of elements $\phi_f\in R(X)$ for regular functions $f:X\to {\mathbb{A}^1}$ on smooth schemes $X$ satisfying the properties (1),(2) and (3) is equivalent to a collection of group homomorphisms\footnote{The collection of group homomorphisms $\phi_f$ is what is called a morphism of (motivic) ring $(Sm,prop)$-theories over ${\mathbb{A}^1}$ in \cite{DavisonMeinhardt3}.} $\phi_f:\underline{\Ka}_0(\Sch_X)\to R(X)$ for all regular functions $f:X\to {\mathbb{A}^1}$ on arbitrary schemes $X$ such that the following diagrams commute
\[ \xymatrix @C=1.5cm{ \underline{\Ka}_0(\Sch_X) \ar[r]^{\phi_f} \ar[d]^{u^\ast} & R(X) \ar[d]^{u^\ast} \\ \underline{\Ka}_0(\Sch_Y) \ar[r]^{\phi_{f\circ u}} & R(Y) } \qquad\mbox{if }u:Y\to X\mbox{ is smooth,} \]
\[ \xymatrix @C=1.5cm{ \underline{\Ka}_0(\Sch_Y) \ar[r]^{\phi_{f\circ u}} \ar[d]^{u_!} & R(Y) \ar[d]^{u_!} \\ \underline{\Ka}_0(\Sch_X) \ar[r]^{\phi_{f}} & R(X) } \qquad\mbox{if }u:Y\to X\mbox{ is proper,} \]
\[ \xymatrix @C=2.5cm { \underline{\Ka}_0(\Sch_X)\otimes\underline{\Ka}_0(\Sch_Y) \ar[r]^{\phi_{f}\otimes \phi_g} \ar[d]^{\boxtimes} & R(X)\otimes R(Y) \ar[d]^{\boxtimes} \\ \underline{\Ka}_0(\Sch_{X\times Y}) \ar[r]^{\phi_{f\boxtimes g}} & R(X\times Y) } \]
and $\phi_{\Spec\mathbb{C}\xrightarrow{0}{\mathbb{A}^1}}(1)=1$.
\end{lemma}
\begin{exercise}
Prove the lemma using the following fact (see \cite{Bittner04}, Thm.\ 5.1). The group $\Ka_0(\Sch_Z)$ can also be written as the abelian group generated by symbols $[X\xrightarrow{p} Z]$ with smooth $X$ and proper $p$, subject to the ``blow-up relation'': If $i:Y\hookrightarrow X$ is a smooth subvariety and $\pi:\Bl_Y X \to X$ the blow-up of $X$ in $Y$ with exceptional divisor $j:E\hookrightarrow \Bl_Y X$, then $[\Bl_Y X \xrightarrow{p\pi} Z] - [E \xrightarrow{p\pi j} Z] = [X\xrightarrow{p}Z]-[Y\xrightarrow{p i} Z]$. Hint: Given a function $f:Z\to {\mathbb{A}^1}$, try the Ansatz $\phi_f([X\xrightarrow{p} Z]):=p_!\phi_{f\circ p}$ for a proper morphism $X\xrightarrow{p}Z$ on a smooth scheme $X$, where $\phi_{f\circ p}\in R(X)$ on the right hand side is given by our family of elements. In particular, $\phi_f(\mathbbm{1}_X)=\phi_f\in R(X)$ for a regular function $f$ on a smooth scheme $X$.
\end{exercise}
We need to apologize for using the same symbol $\phi_f$ with two different meanings. However, with a bit of practice it should be clear from the context which interpretation is used.
\begin{example}\rm
For every motivic theory there is a canonical vanishing cycle such that $\phi^R_{can,f}=\mathbbm{1}_{X}\in R(X)$ for every $f:X\to {\mathbb{A}^1}$. Hence, it does not depend on $f$, and the map $\phi_f:\underline{\Ka}_0(\Sch_X)\to R(X)$ is just the morphism constructed in Lemma \ref{initial_object}.
\end{example}
Let us look at the following more interesting examples.
\begin{example} \rm
Let $R=\Con$. For $x\in X$ we fix a metric on an analytic neighborhood of $x\in X^{an}$, for example by embedding such a neighborhood into $\mathbb{C}^n$. We form the so-called Milnor fiber $\MF_{f}(x):=f^{-1}(f(x)+\delta)\cap B_\varepsilon(x)$, where $B_\varepsilon(x)$ is a small open ball around $x\in X$ and $0<\delta \ll \varepsilon \ll 1$ are small real parameters. Notice that the Milnor fiber depends on the choice of the metric and the choice of $\delta,\varepsilon$. However, its reduced cohomology and its Euler characteristic $\chi(\MF_f(x))$ are independent of the choices made. We finally define $\phi^{con}_f(x):=1-\chi(\MF_f(x))$ for sufficiently small $\delta,\varepsilon$. One can show that the properties listed above are satisfied. Moreover, $\phi^{con}_f$ agrees with the Behrend function of $\Crit(f)$ up to the sign $(-1)^{\dim X}$.
\end{example}
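For instance, for $f(x)=x^d$ on $X={\mathbb{A}^1}$ with $d\ge 1$, the Milnor fiber at the origin consists of $d$ points (the $d$-th roots of $\delta$), so $\phi^{con}_f(0)=1-d$, whereas $\phi^{con}_f(x)=0$ for $x\not=0$ since there $f$ is a local isomorphism and the Milnor fiber is a single point. In particular, $\phi^{con}_f$ is supported on $\Crit(f)$ and vanishes identically if $f$ is a submersion.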
\begin{example} \rm
The previous example has a categorification. For a closed point $t\in {\mathbb{A}^1}(\mathbb{C})$ consider the cartesian diagram
\[ \xymatrix @C=1.5cm { X^{an} \ar[d]^{f} & X^{an}\times_\mathbb{C} \mathbb{C} \ar[d]^{\tilde{f}} \ar[l]^{q_t} \\ \mathbb{C} & \ar[l]^{t+\exp(-)} \mathbb{C}.}\]
Denoting the inclusion of the fiber $X_t=f^{-1}(t)$ into $X$ by $\iota_t$, the (classical) vanishing cycle $\phi^{perv}_f$ is defined via
\[ \oplus_{t\in {\mathbb{A}^1}(\mathbb{C})} \iota_{t\,!}\Cone(\mathbb{Q}_{X_t}\longrightarrow \iota_t^\ast q_{t\,\ast} q_t^\ast \mathbb{Q}_X), \]
in the derived category $D^b(X,\mathbb{Q})$ of sheaves of $\mathbb{Q}$-vector spaces on $X$. Here, $\mathbb{Q}_X$ respectively $\mathbb{Q}_{X_t}=\iota_t^\ast \mathbb{Q}_X$ denotes the locally constant sheaf on $X$ respectively $X_t$ with stalk $\mathbb{Q}$. The morphism is the restriction to $X_t$ of the adjunction $\mathbb{Q}_X \longrightarrow q_{t\,\ast} q_t^\ast \mathbb{Q}_X$. Spelling out the definition we see that the stalk of $\phi^{perv}_f$ at $x$ is given by the reduced cohomology of the Milnor fiber $\MF_f(x)$ shifted by $-1$. \\
Associating to every connected $X$ the Grothendieck group $\Ka_0(D^b_{con}(X,\mathbb{Q}))=\Ka_0(\Perv(X))$ of the triangulated subcategory $D^b_{con}(X,\mathbb{Q})\subset D^b(X,\mathbb{Q})$ consisting of complexes of sheaves of $\mathbb{Q}$-vector spaces with constructible cohomology, we obtain a motivic theory $\underline{\Ka}_0(D^b_{con}(-,\mathbb{Q}))$ with $\underline{\Ka}_0(D^b_{con}(X,\mathbb{Q})):=\prod_{X_i\in \pi_0(X)} \Ka_0(D^b_{con}(X_i,\mathbb{Q}))$. Since $\phi_f^{perv}$ turns out to be a complex with constructible cohomology, we can take its class in $\underline{\Ka}_0(D^b_{con}(X,\mathbb{Q}))$ and get a vanishing cycle satisfying all required properties.
\end{example}
\begin{example} \rm
The previous example has a refinement $\phi_f^{mhm}\in D^b(\MHM(X)_{mon})$ involving (complexes of) ``monodromic'' mixed Hodge modules, i.e.\ mixed Hodge modules with monodromy groups of the form $\mu_n$, the group of $n$-th roots of unity, for some $n\in \mathbb{N}$. Forgetting the Hodge and the monodromy structure, we get a functor $D^b(\MHM(X)_{mon})\longrightarrow D^b_{con}(X,\mathbb{Q})$ mapping $\phi_f^{mhm}$ to $\phi_f^{perv}$. By passing to Grothendieck groups, we get a vanishing cycle $\phi^{mhm}_f$ with values in $\underline{\Ka}_0(D^b(\MHM(X)_{mon}))=\underline{\Ka}_0(\MHM(X)_{mon}):=\prod_{X_i\in \pi_0(X)} \Ka_0(\MHM(X_i)_{mon})$.
\end{example}
In the remaining part of this subsection we will construct vanishing cycles depending functorially on $R$. First of all we need to enlarge $R$ by defining a new motivic theory $R(-\times {\mathbb{A}^1})$ mapping $X$ to $R(X\times {\mathbb{A}^1})$ and using the exterior product
\[ R(X \times {\mathbb{A}^1})\otimes R(Y\times {\mathbb{A}^1}) \xrightarrow{\boxtimes} R( X\times Y \times \AA^2) \xrightarrow{(\id\times +)_!} R(X\times Y\times {\mathbb{A}^1}) \]
with unit $1':=0_!(1)\in R(\Spec\mathbb{C}\times{\mathbb{A}^1})=R({\mathbb{A}^1})$ and the $\sigma^n$-operations
\[ R(X\times {\mathbb{A}^1}) \xrightarrow{\sigma^n} R(\Sym^n(X\times {\mathbb{A}^1}))\longrightarrow R(\Sym^n(X)\times \Sym^n({\mathbb{A}^1})) \xrightarrow{(\id\times +)_!} R(\Sym^n(X)\times{\mathbb{A}^1}). \]
\begin{exercise}
Check the properties of a motivic theory given in Proposition \ref{motivic_theories}.
\end{exercise}
Given a scheme $X$, let $\mathbb{G}_m$ act on $X\times {\mathbb{A}^1}$ via $g(x,z)=(x,gz)$. For connected $X$ we denote with $R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1})$ the subgroup of $R(X\times {\mathbb{A}^1})$ generated by elements $[Y\xrightarrow{f} X\times {\mathbb{A}^1}]_R$ such that $Y$ carries a good\footnote{An action of $\mathbb{G}_m$ on $Y$ is called good if every point $y\in Y$ has an affine $\mathbb{G}_m$-invariant neighborhood.} $\mathbb{G}_m$-action for which $f$ is homogeneous of some degree $d> 0$, i.e.\ $f(gy)=g^df(y)$. Notice, that such a $Y$ will carry many actions for which $f$ is homogeneous. Indeed, given $0\not=n\in \mathbb{N}$, let $\mathbb{G}_m$ act on $Y$ via $g\star y:=g^n y$ using the old action on the right hand side. Then, $f$ is homogeneous of degree $dn$ with respect to the new action. In particular, given finitely many generators $[Y_i\xrightarrow{f_i} X\times {\mathbb{A}^1}]$, we can always assume that the degrees of $f_i$ are equal. Finally, we put $R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1}):=\prod_{X_i\in \pi_0(X)} R^{gm}_{\mathbb{G}_m}(X_i\times {\mathbb{A}^1})$ for non-connected $X$.
\begin{lemma} \label{monodromy_extension}
The subgroup $R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1})\subset R(X\times {\mathbb{A}^1})$ is invariant under pull-backs, push-forwards, exterior products and the $\sigma^n$-operations. Moreover, $\pr_X^\ast$ maps $R^{gm}(X)$ onto a ``$\lambda$-ideal'' $I^{gm}_X\subseteq R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1})$, i.e.\ $a\boxtimes b \in I^{gm}_{X\times Y}$ for $a\in I^{gm}_X, b\in R^{gm}_{\mathbb{G}_m}(Y\times {\mathbb{A}^1})$ and $\sigma^n(a)\in I^{gm}_{\Sym^n(X)}$ for $a\in I^{gm}_X$.
\end{lemma}
\begin{exercise}
Check the first sentence of the previous lemma.
\end{exercise}
\begin{proof}
To show that $I^{gm}_X$ is a $\lambda$-ideal, it suffices to look at generators $[V\times{\mathbb{A}^1}\xrightarrow{f\times \id_{{\mathbb{A}^1}}} X\times{\mathbb{A}^1}]$ and $[W \xrightarrow{(g,h)} Y\times {\mathbb{A}^1}]$ of $I^{gm}_X$ and $R^{gm}_{\mathbb{G}_m}(Y\times{\mathbb{A}^1})$ respectively with $[V\xrightarrow{f} X]\in R^{gm}(X)$. (cf.\ Exercise \ref{generators}) For the $\boxtimes$-product
\[ [V\times {\mathbb{A}^1}\to X\times {\mathbb{A}^1}] \boxtimes [W \to Y\times {\mathbb{A}^1}] = [V\times{\mathbb{A}^1}\times W\xrightarrow{u} X\times Y\times {\mathbb{A}^1}] \]
with $u(v,z,w)=(f(v),g(w),z+h(w))$ we use the isomorphism
\[ V\times {\mathbb{A}^1} \times W \ni(v,z,w)\longmapsto (v,w,z+h(w))\in V\times W\times {\mathbb{A}^1}\]
to show $[V\times{\mathbb{A}^1}\times W\xrightarrow{u} X\times Y\times {\mathbb{A}^1}]=\pr^\ast_{X\times Y}([V\times W\xrightarrow{f\times g} X\times Y])\in I^{gm}_{X\times Y}$.
We also have
\[ \sigma^n[V\times {\mathbb{A}^1} \xrightarrow{f\times\id_{{\mathbb{A}^1}}} X\times {\mathbb{A}^1}]=[ \Sym^n(V\times {\mathbb{A}^1}) \xrightarrow{\tilde{p}} \Sym^n(X)\times {\mathbb{A}^1} ] \]
with $\tilde{p}$ being induced by the $S_n$-invariant morphism $p:(V\times {\mathbb{A}^1})^n\to \Sym^n(X)\times {\mathbb{A}^1}$ with $p\big((v_1,z_1),\ldots,(v_n,z_n)\big)=\big((f(v_1),\ldots,f(v_n)),z_1+\ldots + z_n\big)$. We define
\[ (V^n\times \AA^n)^0:=\{(v_1,\ldots,v_n,z_1,\ldots,z_n) \in V^n\times \AA^n \mid z_1 + \ldots + z_n=0 \} \]
for all $n>0$. There is a $S_n$-equivariant isomorphism $\psi:(V^n\times \AA^n)^0\times {\mathbb{A}^1} \longrightarrow (V \times {\mathbb{A}^1})^n$ sending $((v_1,\ldots,v_n,z_1,\ldots,z_n),z)$ to $\big((v_1,z_1+z/n),\ldots,(v_n, z_n + z/n)\big)$. Then, $p\circ \psi=q\times \id_{\mathbb{A}^1}: (V^n\times \AA^n)^0\times {\mathbb{A}^1}\longrightarrow \Sym^n(X)\times {\mathbb{A}^1}$ for the $S_n$-invariant morphism $q:(V^n\times \AA^n)^0\twoheadrightarrow (V^n\times \AA^n)^0/\!\!/S_n\xrightarrow{\tilde{q}} \Sym^n(X)$ with $q(v_1,\ldots,v_n,z_1,\ldots,z_n)=(f(v_1),\ldots,f(v_n))$. Modding out the $S_n$-action, we see that
\[ [ \Sym^n(V\times {\mathbb{A}^1}) \xrightarrow{\tilde{p}} \Sym^n(X)\times {\mathbb{A}^1} ]=\pr^\ast_{\Sym^n(X)}([(V^n\times \AA^n)^0/\!\!/S_n \xrightarrow{\tilde{q}} \Sym^n(X)])\]
is indeed in $I^{gm}_{\Sym^n(X)}$.
\end{proof}
\begin{exercise} \label{generators} Convince yourself using the formula $\sigma^n(a+b)=\sum_{l=0}^n\pi^{(l)}_!(\sigma^l(a)\boxtimes \sigma^{n-l}(b))$ for the motivic theory $R(-\times {\mathbb{A}^1})$ with $\pi^{(l)}:\Sym^l(X)\times \Sym^{n-l}(X)\longrightarrow \Sym^n(X)$ being the natural map, that $a\in I^{gm}_X$ implies $\sigma^n(a)\in I^{gm}_{\Sym^n(X)}$ is indeed true if it already holds for generators $a=\pr^\ast_X([V\xrightarrow{f} X])$ of $I^{gm}_X$.
\end{exercise}
Due to Lemma \ref{monodromy_extension}, we can form the quotient $R^{gm}_{mon}(X)=R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1})/\pr_X^\ast R^{gm}(X)$ and obtain a new motivic theory together with a morphism $R^{gm}\to R^{gm}_{mon}$ of motivic theories (cf.\ Exercise \ref{geometric_part}) given by $R^{gm}(X) \xrightarrow{(\id\times 0)_!} R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1}) \twoheadrightarrow R^{gm}_{mon}(X)$.
\begin{exercise}
Fix a motivic theory $R$ and consider the map $R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1}) \longrightarrow R^{gm}(X)$ given by $a\longmapsto (\id\times 0)^\ast(a)-(\id\times 1)^\ast(a)$, where $0,1:\Spec\CC \to {\mathbb{A}^1}$ are the obvious maps. As $\pr_X^\ast R^{gm}(X)$ is in the kernel, we obtain a well-defined group homomorphism $R^{gm}_{mon}(X)\to R^{gm}(X)$. Note that the composition $R^{gm}(X) \rightarrow R^{gm}_{mon}(X) \rightarrow R^{gm}(X)$ is the identity. Hence, $R^{gm}(X)$ is a direct summand of $R^{gm}_{mon}(X)$. However, show that the retraction $R^{gm}_{mon}(X)\to R^{gm}(X)$ is not a morphism of motivic theories and $R^{gm}$ is not a direct summand of the motivic theory $R^{gm}_{mon}$.
\end{exercise}
\begin{exercise}
Using the notation of the previous exercise, show that the kernel of $\Con^{gm}_{mon}(X) \longrightarrow \Con^{gm}(X)=\Con(X)$ is trivial, i.e.\ $\Con(X)=\Con^{gm}_{mon}(X)$. On the other hand, show that the kernel is nonzero for $X=\Spec\CC$ and $R=\underline{\Ka}_0(D^b(-,\mathbb{Q}))$, $R=\underline{\Ka}_0(\MHM(-)_{mon})$ and $R=\underline{\Ka}_0(\Sch)$.
\end{exercise}
If $R=R^{gm}$, we suppress the superscript ``gm'' from notation. This applies for instance to $\underline{\Ka}_0(\Sch)$ but also to $R^{gm}$ as $(R^{gm})^{gm}=R^{gm}$.
\begin{exercise}
Check that $R^{gm}_{mon}=(R^{gm})^{gm}_{mon}=(R^{gm})_{mon}$ using our convention for the last equation.
\end{exercise}
If $R^{gm}\subsetneq R$ is a proper subtheory, as for example for $\underline{\Ka}_0(D^b_{con}(-,\mathbb{Q}))$ or for $\underline{\Ka}_0(\MHM(-)_{mon})$, we can nevertheless define a theory $R_{mon}$ under the assumption that the formula
\[ \sigma^n(a\boxtimes b) =\sum_{\lambda \vdash n} \pi^{(n)}_!\Big(P^\lambda(\sigma^1(a),\ldots,\sigma^n(a))\boxtimes P^\lambda(\sigma^1(b),\ldots,\sigma^n(b))\Big) \]
holds in $R(\Sym^n(X\times Y))$ for all $0\not=n\in \mathbb{N}$, all $a\in R(X),b\in R(Y)$ and all $X,Y$, where the sum is taken over all partitions $\lambda=(\lambda_1\ge\ldots \ge \lambda_n\ge 0)$ of $n$ and
\[ P^\lambda(x_1,\ldots,x_n)=\det(x_{\lambda_i+j-i})_{1\le i,j\le n}=\left|\begin{array}{cccc} x_{\lambda_1} & x_{\lambda_1+1} & \ldots & x_{\lambda_{1}+n-1} \\ x_{\lambda_2-1} & x_{\lambda_2} & \ldots & x_{\lambda_2+n-2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{\lambda_n-n+1} & x_{\lambda_n-n+2} & \ldots & x_{\lambda_n} \end{array} \right| \]
is the polynomial from the Jacobi--Trudi formula with the convention $x_0=1$ and $x_m=0$ for $m<0$ or $m>n$. Here, $\pi^{(n)}:\Sym^n(X)\times \Sym^n(Y)\longrightarrow \Sym^n(X\times Y)$ is the obvious map. The expression $P^\lambda(\sigma^1(a),\ldots,\sigma^n(a))$ is computed in $R(\Sym(X))$ with respect to the convolution product, and is an element of $R(\Sym^n(X))$ since $\lambda_1 +\ldots + \lambda_n=n$. Similarly for $P^\lambda(\sigma^1(b),\ldots,\sigma^n(b))$. It can be shown that this formula holds\footnote{The formula is a direct consequence of the assumption that $R(\Sym(X\times Y))$ is a special $\lambda$-ring which is true for any ``decategorification''.} whenever $R$ has a ``categorification'' as for example $\underline{\Ka}_0(D^b(\MHM_{mon}))$ or $\underline{\Ka}_0(D^b_{con}(-,\mathbb{Q}))$. In this case, we may replace $R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1})$ with the $R(X)$-submodule $R_{\mathbb{G}_m}(X\times {\mathbb{A}^1})$ of $R(X\times {\mathbb{A}^1})$ generated by $R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1})$. Note that $R(X\times {\mathbb{A}^1})$ is an $R(X)$-module using the convolution product and the embedding $R(X)\hookrightarrow R(X\times {\mathbb{A}^1})$ provided by the ``zero section'' $0_X=\id_X\times 0:X\hookrightarrow X\times {\mathbb{A}^1}$. The formula for $\sigma^n(a\boxtimes b)$ ensures that $R_{\mathbb{G}_m}(X\times {\mathbb{A}^1})$ is still closed under taking $\sigma^n:R(X\times {\mathbb{A}^1})\longrightarrow R(\Sym^n(X)\times {\mathbb{A}^1})$ and, thus, defines another motivic theory containing $R^{gm}_{\mathbb{G}_m}(-\times {\mathbb{A}^1})$ as a subtheory. Moreover, $\pr_X^\ast(R(X))=:I_X$ is a $\lambda$-ideal and the quotient $R_{mon}(X):=R_{\mathbb{G}_m}(X\times {\mathbb{A}^1})/I_X$ is a well-defined motivic theory which contains $R$ as a subtheory such that $R(X)\hookrightarrow R_{mon}(X)$ is a retract for every $X$. Moreover, the following diagram is cartesian
\[ \xymatrix { R^{gm} \ar@{^{(}->}[r] \ar@{^{(}->}[d] & R^{gm}_{mon} \ar@{^{(}->}[d] \\ R \ar@{^{(}->}[r] & R_{mon}.}\]
\begin{exercise} Show that $R^{gm}_{\mathbb{G}_m}(X\times {\mathbb{A}^1})$ is already an $R(X)$-module under the assumption $R=R^{gm}$. Also $I^{gm}_X=\pr^\ast_X (R(X))$ in this case. Hence, we do not get anything new by the previous construction whenever it applies, and putting $R_{mon}:=R^{gm}_{mon}$ for theories $R=R^{gm}$ will not cause any confusion.
\end{exercise}
\begin{example} \rm
There is a morphism $\underline{\Ka}_0(\MHM_{mon}) \longrightarrow \underline{\Ka}_0(\MHM)_{mon}$ of motivic theories. Roughly speaking, $\MHM_{mon}(X)$ is obtained by a categorification of the construction just described. One can show that for a regular function $f:X\to {\mathbb{A}^1}$ on a smooth scheme $X$ the image of $\phi^{mhm}_f\in \underline{\Ka}_0(\MHM_{mon}(X))$ under this morphism is already contained in the subgroup $\underline{\Ka}_0(\MHM(X))^{gm}_{mon}$. A similar statement holds for $\underline{\Ka}_0(D^b_{con}(-,\mathbb{Q}))$.
\end{example}
\begin{example} \rm
There is a morphism $\Ka^{\hat{\mu}}_0(\Sch_X)\to \underline{\Ka}_0(\Sch_X)_{mon}$ for every (connected) $X$ with $\Ka^{\hat{\mu}}_0(\Sch_X)$ being defined in \cite{DenefLoeser2}. In a nutshell, $\Ka^{\hat{\mu}}_0(\Sch_X)$ is constructed very similar to $\underline{\Ka}_0(\Sch_X)_{mon}$ by considering generators $[Y\xrightarrow{f} X\times {\mathbb{A}^1},\rho]$ with $Y$ carrying a $\mathbb{G}_m$-action $\rho:\mathbb{G}_m\times Y \to Y$ such that $f$ is homogeneous of degree $d> 0$. In contrast to our previous definition, the $\mathbb{G}_m$-action $\rho$ is part of the data, and $\Ka^{\hat{\mu}}_0(\Sch_X)\to \underline{\Ka}_0(\Sch_X)_{mon}$ forgets the $\mathbb{G}_m$-action $\rho$. In particular, given another homogeneous map $Y'\xrightarrow{f'} X\times {\mathbb{A}^1}$ with $Y'$ carrying a $\mathbb{G}_m$-action $\rho'$ and an isomorphism $\theta:Y'\xrightarrow{\sim} Y$ such that $f\theta=f'$ then $[Y\xrightarrow{f}X\times{\mathbb{A}^1}]=[Y'\xrightarrow{f'} X\times {\mathbb{A}^1}]$ in $\underline{\Ka}_0(\Sch_{X})_{mon}$, but the generators $[Y\xrightarrow{f}X\times {\mathbb{A}^1},\rho]$ and $[Y'\xrightarrow{f'}X\times {\mathbb{A}^1},\rho']$ of $\Ka^{\hat{\mu}}_0(\Sch_X)$ might be different unless $\theta$ is $\mathbb{G}_m$-equivariant. The relations in $\Ka^{\hat{\mu}}_0(\Sch_X)$ are the cut and paste relation for $\mathbb{G}_m$-invariant closed subschemes $Z\subset Y$ and $[Y\times {\mathbb{A}^1} \xrightarrow{u\times \id_{\mathbb{A}^1}} X\times {\mathbb{A}^1},\rho]=0$ for every $\mathbb{G}_m$-invariant morphism $u:Y\to X$ from a scheme $Y$ with a good $\mathbb{G}_m$-action. Here, $\rho$ is given by $g(y,z)=(gy,gz)$ using the $\mathbb{G}_m$-action on $Y$. There is a third relation dealing with linear actions of $\mu_d\subset \mathbb{G}_m$, the group of $d$-th roots of unity, which is also fulfilled in $\underline{\Ka}_0(\Sch_X)_{mon}$ as \'{e}tale locally trivial vector bundles are already Zariski locally trivial.
\end{example}
\begin{exercise} \label{functoriality}
Show that the construction $R\mapsto R^{gm}_{mon}$ is functorial in $R$.
\end{exercise}
The motivic theory $R^{gm}_{mon}$ will be the target of our vanishing cycle which we are going to construct now. For this let $f:X\to {\mathbb{A}^1}$ be a regular function on a smooth connected scheme and let $\mathcal{L}_n(X)$ be the scheme parameterizing all arcs of length $n$ in $X$, i.e.\ the scheme representing the set-valued functor $Y\mapsto \Mor(Y\times \Spec \CC[z]/(z^{n+1}), X)$. The standard action of $\mathbb{G}_m$ on ${\mathbb{A}^1}=\Spec \CC[z]$ given by $z\mapsto gz$ induces an action on $\Spec \CC[z]/(z^{n+1})$ and, hence, also on $\mathcal{L}_n(X)$. By functoriality applied to $f:X\to {\mathbb{A}^1}$, we also get a $\mathbb{G}_m$-equivariant morphism $\mathcal{L}_n(X)\xrightarrow{\mathcal{L}_n(f)} \mathcal{L}_n({\mathbb{A}^1})\cong \AA^{n+1}$, where $\mathbb{G}_m$ acts coordinatewise on the latter arc space with weights $0,1,\ldots,n$. Fix $t\in{\mathbb{A}^1}(\mathbb{C})$ and consider the map $f_n=\pr_{n+1}\circ \mathcal{L}_n(f):\mathcal{L}_n(X)|_{X_t} \longrightarrow {\mathbb{A}^1}$, a $\mathbb{G}_m$-equivariant map of degree $n$, and the projection $\pi_n:\mathcal{L}_n(X)\to X$ mapping an arc to its base point. By $\mathbb{G}_m$-equivariance, $[\mathcal{L}_n(X)|_{X_t} \xrightarrow{\pi_n\times f_n}X\times {\mathbb{A}^1}]$ is in $R^{gm}_{\mathbb{G}_m}(X_t\times {\mathbb{A}^1})$ and defines an element in $R^{gm}_{mon}(X_t)$. We form the generating series
\[ Z_{f,t}^R(T):=\sum_{n\ge 1} \mathbb{L}_R^{-n\dim X} [\mathcal{L}_n(X)|_{X_t} \xrightarrow{\pi_n\times f_n}X\times {\mathbb{A}^1}] \,T^n \mbox{ in }R^{gm}_{mon}(X_t)[[T]].\]
The following result is a consequence of Thm.\ 3.3.1 in the article \cite{DenefLoeser2} of Denef and Loeser or of Thm.\ 5.4 in Looijenga's paper \cite{Looijenga1}.
\begin{theorem}
The series $Z_{f,t}^R(T)$ is a Taylor expansion of a rational function in $T$. The latter has a regular value at $T=\infty$.
\end{theorem}
\begin{definition}
Using the same notation for the rational function, we define $\phi_{f,t}^R:=\mathbbm{1}_{X_t}+Z_{f,t}^R(\infty)\in R^{gm}_{mon}(X_t)$ and, finally, $\phi_f^R:=\sum_{t\in {\mathbb{A}^1}(\mathbb{C})} \iota_{t\,!}\phi_{f,t}^R \in R^{gm}_{mon}(X)$ to be the vanishing cycle of $f:X\to {\mathbb{A}^1}$. For non-connected $X$ we apply the definition to every connected component $X_i$ and define $\phi^R_f$ to be the family $(\phi^R_{f|_{X_i}})_{X_i\in \pi_{0}(X)}$ in $R^{gm}_{mon}(X)=\prod_{X_i\in \pi_0(X)}R^{gm}_{mon}(X_i)$.
\end{definition}
\begin{example} \rm One can show $\phi^{\Con}_f=\phi^{con}_f$ for all $f:X\to {\mathbb{A}^1}$ on smooth $X$.
\end{example}
\begin{example}\rm Let $f:X\to {\mathbb{A}^1}$ be a regular function on a smooth connected scheme $X$. Using the morphism $\Ka^{{\hat{\mu}}}_0(\Sch_X)\to \underline{\Ka}_0(\Sch_X)_{mon}$, the vanishing cycle $\phi^{mot}_f$ constructed by Denef and Loeser maps to $\phi^{\underline{\Ka}_0(\Sch)}_f$ up to the normalization factor $(-1)^{\dim X}$, and we will keep the shorter notation $\phi^{mot}_f$ for $\phi^{\underline{\Ka}_0(\Sch)}_f$.
\end{example}
\begin{example} \rm Let $f$ be a regular function on a smooth scheme as before. Using the map $\underline{\Ka}_0(\MHM(X)_{mon})\longrightarrow \underline{\Ka}_0(\MHM(X))_{mon}$ discussed earlier, the vanishing cycle $\phi^{mhm}_f$ maps to $\phi^{\underline{\Ka}_0(\MHM)}_f$, and we will keep the shorter notation $\phi^{mhm}_f$. Similarly for $\phi^{perv}_f$.
\end{example}
\begin{exercise} \label{functoriality2} Prove that $\phi^R_f$ is functorial in $R$. In particular, the diagram
\[ \xymatrix{ \underline{\Ka}_0(\Sch_X) \ar@{=}[r] \ar[d]^{\phi^{mot}} & \underline{\Ka}_0(\Sch_X) \ar[d]^{\phi^R_f} \\ \underline{\Ka}_0(\Sch_X)_{mon} \ar[r] & R^{gm}_{mon}(X) } \]
commutes where we used Lemma \ref{vanishing_cycle_morphism}, Lemma \ref{initial_object} and Exercise \ref{functoriality} to construct the corresponding morphisms. Conclude that $\phi^R_f$ is a vanishing cycle using the known fact that $\phi^{mot}$ is a vanishing cycle.
\end{exercise}
In order to compute the vanishing cycle in practice, we choose an embedded resolution of $X_t\subset X$, i.e.\ a smooth variety $Y$ together with a proper morphism $\pi:Y \rightarrow X$ such that $Y_t=(f\circ\pi)^{-1}(t)=\pi^{-1}(X_t)$ is a normal crossing divisor and $\pi: Y\!\setminus\! Y_t \xrightarrow{\sim} X\!\setminus\! X_t$. Denote the irreducible components of $Y_t$ by $E_i$ with $i\in J$ and let $m_i>0$ be the multiplicity of $f\circ \pi$ at $E_i$. Since $f\circ \pi$ is a section in $\mathcal{O}_Y(-\sum_{i\in J}m_iE_i)$, it induces a regular map to ${\mathbb{A}^1}$ from the total space of $\mathcal{O}_Y(\sum_{i\in I} m_iE_i)$ for any $\emptyset \neq I \subset J$. The latter space restricted to $E_I^\circ:=\cap_{i\in I}E_i \!\setminus\! \cup_{i\not\in I} E_i$ is just $\otimes_{i\in I} N_{E_i|Y}^{\otimes m_i}|_{E_I^\circ}$. By composition with the tensor product we get a regular map $f_I:N_I:=\prod_{i\in I}( N_{E_i|Y}\!\setminus\! E_i) |_{E_I^\circ} \longrightarrow {\mathbb{A}^1}$ which is obviously homogeneous of degree $m_i$ with respect to the $\mathbb{G}_m$-action on the factor $(N_{E_i|Y}\!\setminus\! E_i)|_{E_I^\circ}$ and homogeneous of degree $m_I:=\sum_{i\in I}m_i$ with respect to the diagonal $\mathbb{G}_m$-action. By composing with $\pi:Y \rightarrow X$, the projection $N_I \rightarrow E_I^\circ$ induces a map $\pi_I:N_I \rightarrow X_t$.
\begin{theorem}[\cite{DenefLoeser2} or \cite{Looijenga1}] \label{resolution}
Let $f:X\rightarrow \AA_k^1$ be a regular map and $\pi:Y \rightarrow X$ be an embedded resolution of $X_t$. In the notation just explained we have\footnote{This formula seems to differ from the one given in \cite{DenefLoeser2} or \cite{Looijenga1} by a sign. However, there is no actual discrepancy, as the authors work with schemes over $X$ with good $\mu_d$-action. Given such a scheme $Y\xrightarrow{u} X$ with $\mu_d$-invariant $u$, the associated generator of $R^{gm}_{mon}(X)$ is $-[Y\times_{\mu_d} \mathbb{G}_m \ni (y,z)\mapsto (u(y),z^d)\in X\times {\mathbb{A}^1}]_R$. The sign here is chosen in such a way that if $Y$ carries a trivial $\mu_d$-action, then the generator is equivalent to $[Y\ni y \mapsto (u(y),0)\in X\times {\mathbb{A}^1}]$ in $R^{gm}(X)\subset R^{gm}_{mon}(X)$.}
\[ Z_{f,t}^R(\infty) \;= \sum_{\emptyset \neq I\subset J} (-1)^{|I|-1}[N_I\xrightarrow{\pi_I\times f_I} X_t\times {\mathbb{A}^1}] \,\in\, R^{gm}_{mon}(X_t). \]
\end{theorem}
\begin{corollary}[support property]\label{support_property}
Given a regular function $f:X\to {\mathbb{A}^1}$ on a smooth scheme $X$, the vanishing cycle $\phi^R_f$ is supported on $\Crit(f)$, i.e.\ $\phi^R_f$ is in the image of the embedding $ R(\Crit(f))^{gm}_{mon}\hookrightarrow R(X)^{gm}_{mon}$, where $\Crit(f)=\{df=0\}\subset X$ is the critical locus of $f$.
\end{corollary}
The following result might also be helpful when it comes to actual computations. Let $\int_X a \in R(\Spec\CC)$ be the short notation for $(X\to \Spec \CC)_!(a)$ for $a\in R(X)$.
\begin{theorem}[\cite{DaMe1}] \label{equivariant_case}
Let $X$ be a smooth variety with $\mathbb{G}_m$-action such that every point $x\in X$ has a neighborhood $U\subset X$ isomorphic to $\AA^{n(x)}\times U^{\mathbb{G}_m}$ with $\mathbb{G}_m$ acting by multiplication (with weight one) on $\AA^{n(x)}$. Let $f:X\to {\mathbb{A}^1}$ be a homogeneous function of degree $d> 0$. Then, $\int_X \phi^R_f=[X\xrightarrow{f}{\mathbb{A}^1}]$ in $R^{gm}_{mon}(\Spec\CC)$.
\end{theorem}
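As a quick consistency check of the last two statements, take $X={\mathbb{A}^1}$ with the weight-one scaling action and $f(z)=z$, which is homogeneous of degree one. The theorem gives $\int_{{\mathbb{A}^1}} \phi^R_f=[{\mathbb{A}^1}\xrightarrow{\id}{\mathbb{A}^1}]$, which vanishes in $R^{gm}_{mon}(\Spec\CC)$ by the relation $[X\times {\mathbb{A}^1}\xrightarrow{\pr_{{\mathbb{A}^1}}}{\mathbb{A}^1}]=0$ applied to $X=\Spec\CC$. This is consistent with Corollary \ref{support_property}, as $\Crit(f)=\emptyset$ forces $\phi^R_f=0$.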
\begin{exercise} \label{square_root} \hfill
\begin{enumerate}
\item Prove that the scheme $\{(x,y)\in \AA^2\mid xy\not= 0\} \ni (x,y)\mapsto xy\in {\mathbb{A}^1}$ over ${\mathbb{A}^1}$ is isomorphic to the scheme $\mathbb{G}_m\times \mathbb{G}_m\ni (x,y) \to y \in {\mathbb{A}^1}$ over ${\mathbb{A}^1}$ and conclude $[ \{xy\not=0 \} \xrightarrow{xy} {\mathbb{A}^1}]_R=-[\mathbb{G}_m]_R=1-\mathbb{L}_R$ in $R^{gm}_{mon}(\Spec\mathbb{C})$.
\item Use this and any of the two previous theorems to show $\phi^R_{f,0}=\int_{\AA^2}\phi^R_f =\mathbb{L}_R$ for the function $f:\AA^2\ni (x,y)\mapsto xy\in {\mathbb{A}^1}$. Note that $\Crit(f)\cong\Spec\mathbb{C}$ is the origin $0\in \AA^2$. Hence, $\phi^R_f$ is located at the origin.
\item Use the result of the second part and the product formula for vanishing cycles to prove that $\phi_{z^2}^R$ for the function ${\mathbb{A}^1}\ni z\mapsto z^2\in {\mathbb{A}^1}$ is a square root $\mathbb{L}^{1/2}_R$ of $\mathbb{L}_R$.
\item Show that $\sigma^n(\mathbb{L}^{1/2}_R)=0$ for all $n\ge 2$. Hint: It is a well-known fact that $\AA^n\ni (z_1,\ldots,z_n)\longmapsto \big(\sum_{i=1}^n z_i^k\big)_{k=1}^n \in \AA^n$ has a factorization $\AA^n \twoheadrightarrow \Sym^n({\mathbb{A}^1}) \xrightarrow{\sim} \AA^n$ into the quotient map for the natural $S_n$-action on $\AA^n$ and an isomorphism. Remember that for every scheme $X$ the equation $[X\times {\mathbb{A}^1} \xrightarrow{\pr_{{\mathbb{A}^1}}} {\mathbb{A}^1}]=0$ holds in $R(\Spec\mathbb{C})_{mon}^{gm}$ by construction.
\end{enumerate}
\end{exercise}
\subsection{Vanishing cycles for quotient stacks}
The theory of vanishing cycles for regular functions $f:X\to {\mathbb{A}^1}$ on smooth schemes generalizes in a straightforward way to functions $\mathfrak{f}:\mathfrak{X}\to {\mathbb{A}^1}$ on (disjoint unions of) smooth quotient stacks. Note that a quotient stack $X/G$ is called smooth if $X$ is smooth. A closed substack $\mathfrak{P}\subset \mathfrak{X}$ is given by $Y/G$ for a $G$-invariant closed subset $Y\subset X$. The blow-up of $\mathfrak{X}$ in $\mathfrak{P}$ is then simply given by the quotient stack $\Bl_\mathfrak{P} \mathfrak{X}=\Bl_Y X /G$ having exceptional divisor $\mathfrak{E}=E/G$. Given a quotient stack $X/G$ with smooth $X$ and a regular function $\mathfrak{f}:X/G\to {\mathbb{A}^1}$, we denote with $\Crit(\mathfrak{f})$ the quotient stack $\Crit(\mathfrak{f}\rho)/G$ with $\rho:X\to X/G$. The generalization to disjoint unions of quotient stacks is at hand.
\begin{definition}
Given a stacky motivic theory $R$, a stacky vanishing cycle (with values in $R$) is a rule associating to every regular function $\mathfrak{f}:\mathfrak{X}\to {\mathbb{A}^1}$ in a disjoint union of smooth quotient stacks an element $\phi_{\mathfrak{f}}\in R(\mathfrak{X})$ such that the following holds.
\begin{enumerate}
\item If $u:\mathfrak{P}\to \mathfrak{X}$ is a smooth morphism, then $\phi_{\mathfrak{f}\circ u}=u^\ast(\phi_{\mathfrak{f}})$.
\item Let $\mathfrak{X}$ be a disjoint union of smooth quotients containing a smooth closed substack $i:\mathfrak{P}\hookrightarrow \mathfrak{X}$. Denote by $j:\mathfrak{E}\hookrightarrow \Bl_{\mathfrak{P}} \mathfrak{X}$ the exceptional divisor of the blow-up $\pi:\Bl_\mathfrak{P} \mathfrak{X} \to \mathfrak{X}$ of $\mathfrak{X}$ in $\mathfrak{P}$. Then the formula \[ \pi_!\big( \phi_{\mathfrak{f}\circ \pi} - j_! \phi_{\mathfrak{f}\circ \pi\circ j}\big) = \phi_{\mathfrak{f}} - i_!\phi_{\mathfrak{f}\circ i}\] holds for every $\mathfrak{f}:\mathfrak{X}\to {\mathbb{A}^1}$.
\item Given two morphisms $\mathfrak{f}:\mathfrak{X}\to {\mathbb{A}^1}$ and $\mathfrak{g}:\mathfrak{Y}\to {\mathbb{A}^1}$ with smooth $\mathfrak{X}$ and $\mathfrak{Y}$, we introduce the notation $\mathfrak{f}\boxtimes \mathfrak{g}:\mathfrak{X}\times \mathfrak{Y}\xrightarrow{\mathfrak{f}\times \mathfrak{g}} {\mathbb{A}^1}\times {\mathbb{A}^1}\xrightarrow{+} {\mathbb{A}^1}$. Then $\phi_{\mathfrak{f}\boxtimes \mathfrak{g}}=\phi_{\mathfrak{f}}\boxtimes \phi_{\mathfrak{g}}$ in $R(\mathfrak{X}\times \mathfrak{Y})$. Moreover, $\phi_{\Spec\CC\xrightarrow{0}{\mathbb{A}^1}}(1)=1$.
\end{enumerate}
\end{definition}
Recall that we constructed a correspondence between motivic theories $R$ with $\mathbb{L}_R^{-1},(\mathbb{L}^n_R-1)^{-1}\in R(\Spec\CC)$ for all $0\not= n\in \mathbb{N}$ and stacky motivic theories satisfying $\rho^\ast:R(X/G)\xrightarrow{\sim} R(X)^G$ for every special group $G$.
\begin{lemma} \label{stacky_vanishing_cycles}
Let $R$ be a motivic theory such that $\mathbb{L}^{-1}_R, (\mathbb{L}^n_R-1)^{-1}\in R(\Spec\CC)$ for all $0\not= n\in \mathbb{N}$. The restriction to schemes provides a bijection between stacky vanishing cycles with values in $R^{st}$ and vanishing cycles with values in $R$.
\end{lemma}
\begin{proof}
By applying the first property of a stacky vanishing cycle to the smooth map $\rho:X\to X/G$, we see that $\phi_{\mathfrak{f}}$ is uniquely determined by $f=\mathfrak{f}\circ\rho:X\to {\mathbb{A}^1}$ as $\rho^\ast:R(X/G)\to R(X)$ is injective for special $G$.
Moreover, applying the first property once more to
\[ \xymatrix { X\times G \ar[r]^m \ar[d]^{\pr_X} & X \\ X} \]
we observe that the vanishing cycle $\phi_f$ of a $G$-invariant function $f:X\to {\mathbb{A}^1}$ is $G$-invariant. Hence, given a vanishing cycle with values in $R$, we can define $\phi^{st}_{\mathfrak{f}}$ to be the unique element in $R^{st}(X/G)$ mapping to $\phi_f=\phi_{\mathfrak{f}\circ \rho}$ under the isomorphism $\rho^\ast:R^{st}(X/G)\cong R(X)^G$. Alternatively, we can write $\phi^{st}_\mathfrak{f}=\rho_!(\phi_f)/[G]_R$ since $[G]_R^{-1}\rho_!$ is the inverse of $\rho^\ast$ on $R(X)^G$.
If $\mathfrak{f}:\mathfrak{X}\to {\mathbb{A}^1}$ is a function on a disjoint union of quotient stacks $\mathfrak{X}_i=X_i/G_i$, we may assume that $G_i$ is special for all $i$ (see Exercise \ref{special}) and define $\phi^{st}_\mathfrak{f}$ by means of the family $\phi^{st}_{\mathfrak{f}|_{\mathfrak{X}_i}}$ using the first property of a stacky motivic theory.
\end{proof}
\begin{exercise}
Complete the proof of the previous lemma by checking that $\phi^{st}_\mathfrak{f}$ on a quotient stack $\mathfrak{X}=X/G$ is independent of the choice of a presentation, i.e.\ if $X/G\cong Y/H$ for special groups $G$ and $H$, then $X$ is smooth if and only if $Y$ is smooth, and $\phi_{\mathfrak{f}\circ\rho_X}$ corresponds to $\phi_{\mathfrak{f}\circ\rho_Y}$ under the isomorphism $R(X)^G\cong R(Y)^H$ in this case. Moreover, show that $\phi^{st}$ satisfies the properties of a stacky vanishing cycle.
\end{exercise}
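As a trivial illustration of the formula $\phi^{st}_\mathfrak{f}=\rho_!(\phi_f)/[G]_R$, take $\mathfrak{X}=\Spec\CC/\mathbb{G}_m$ and $\mathfrak{f}=0$. Then $f=\mathfrak{f}\circ\rho$ is the zero function on $\Spec\CC$, $\phi_f=1$ by the normalization $\phi_{\Spec\CC\xrightarrow{0}{\mathbb{A}^1}}(1)=1$, and we obtain $\phi^{st}_\mathfrak{f}=\rho_!(1)/[\mathbb{G}_m]_R=\rho_!(1)/(\mathbb{L}_R-1)$.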
\begin{example} \rm
Given a motivic theory $R$ satisfying equation (\ref{eq7}) for all $a\in R(X), n\in \mathbb{N}$ and a vanishing cycle $\phi$ with values in $R$, we can adjoin inverses of $\mathbb{L}_R$ and $\mathbb{L}^n_R-1$ for all $n>0$. By applying the previous lemma to the new motivic theory and the induced vanishing cycle, we get a motivic theory $R^{st}$ and a vanishing cycle $\phi^{st}$ such that $\eta_X(\phi_f)=\phi^{st}_f$ for every $f:X\to {\mathbb{A}^1}$, where $\eta_X:R(X)\to R^{st}(X)$ is the adjunction morphism $R\to R^{st}|_{\Sch_\CC}$ from the previous section. In particular, we can apply this to $\phi^{R^{gm}}_{can}$ with values in $R^{gm}$ and also to $\phi^R$ with values in $R^{gm}_{mon}$ as equation (\ref{eq7}) holds in both cases (cf.\ Exercise \ref{geometric_part}). If $R$ satisfies equation (\ref{eq7}), we can also extend $\phi^R_{can}$ with values in $R$. Note that $\phi^{R,st}_{can, \mathfrak{f}} =\mathbbm{1}_{\mathfrak{X}}\in R^{st}(\mathfrak{X})$ for all $\mathfrak{f}:\mathfrak{X}\to {\mathbb{A}^1}$.
\end{example}
\section{Donaldson--Thomas theory}
After introducing a lot of technical notation, we are now in a position to provide the definition of Donaldson--Thomas functions and to state a couple of results in Donaldson--Thomas theory. We close this section by giving a list of examples. There are basically three approaches to define Donaldson--Thomas functions (see \cite{DavisonMeinhardt3}). The one given here is due to Kontsevich and Soibelman.
\subsection{Definition and main results}
We start by fixing a stacky motivic theory $R$ satisfying $R(X/G)\cong R(X)^G$ for every quotient stack $X/G$ with special group $G$. Moreover, let $\phi$ be a stacky vanishing cycle with values in $R$ which is completely determined by its restriction to functions $f:X\to {\mathbb{A}^1}$ on smooth schemes $X$ (cf.\ Lemma \ref{stacky_vanishing_cycles}). As shown before, we could start with any vanishing cycle with values in a motivic theory and pass to the ``stackification''. Let us also assume that a square root $\mathbb{L}_R^{1/2}$ of $\mathbb{L}_R$ is contained in $R(\Spec\CC)$ such that $\sigma^n(\mathbb{L}_R^{1/2})=0$ for all $n\ge 2$.
\begin{example}\rm
Assume that $\sigma^n(a\mathbb{L}_R)=\sigma^n(a)\mathbb{L}^n_R$ holds for all $a\in R(X)$, all $n \in \mathbb{N}$ and all $X$. As shown in \cite{DavisonMeinhardt3}, Appendix B, one can extend the $\sigma^n$-operations to $R(X)[\mathbb{L}^{1/2}_R]$ such that $\sigma^n(-\mathbb{L}_R^{1/2})=(-\mathbb{L}_R^{1/2})^n$ for all $n\in \mathbb{N}$ or equivalently $\sigma^n(\mathbb{L}^{1/2}_R)=0$ for all $n\ge 2$. Thus, $R[\mathbb{L}^{1/2}_R]$ is a new motivic theory having the required square root of $\mathbb{L}_R$. We can apply the stackification to the canonical vanishing cycle $\phi^{R[\mathbb{L}_R^{1/2}]}_{can}$ of $R[\mathbb{L}_R^{1/2}]$ and obtain a pair $(R[\mathbb{L}^{1/2}_R]^{st}, \phi_{can}^{R[\mathbb{L}^{1/2}_R], st})$ satisfying our requirements. Note that the assumption on $R$ is always fulfilled if we replace $R$ with $R^{gm}$ (cf.\ Exercise \ref{geometric_part}).
\end{example}
\begin{example} \rm
For every motivic theory $R$ the element $[{\mathbb{A}^1}\ni z \mapsto z^2\in {\mathbb{A}^1}]_R\in R^{gm}_{mon}(\Spec \mathbb{C})$ is the required square root of $\mathbb{L}_{R^{gm}_{mon}}$ as shown in Exercise \ref{square_root}. The stackification of $\phi^R$ with values in $R^{gm}_{mon}$ will match our requirements. This applies in particular to $\phi^{mot}$ and $\phi^{mhm}$. Note that the stackification of $\Con=\Con^{gm}_{mon}$ and of $\underline{\Ka}_0(D^b_{con}(-,\mathbb{Q}))^{gm}_{mon}$ is zero as $[\Gl(n)]_R=0$ in both cases.
\end{example}
We fix a quiver $Q$ with potential $W$ and a geometric stability condition $\zeta$. Recall that $\mathfrak{M}^{\zeta-ss}$ was the stack of $\zeta$-semistable quiver representations with coarse moduli space $\mathcal{M}^{\zeta-ss}$ parameterizing polystable representations. Similarly $\mathfrak{M}$ was the stack of all quiver representations and $\mathcal{M}^{ssimp}$ its coarse moduli space parameterizing semisimple representations. There are various maps between these spaces as shown in the following diagram
\[ \xymatrix @C=2cm { \mathfrak{M}^{\zeta-ss} \ar@{^{(}->}[r] \ar[d]^{p^\zeta} & \mathfrak{M} \ar[d]^p & \\ \mathcal{M}^{\zeta-ss} \ar[r]^{q^\zeta} & \mathcal{M}^{ssimp} \ar[r]^{\dim\times \mathcal{T}r(W)} & \mathbb{N}^{Q_0}\times {\mathbb{A}^1}.}\]
Note that the maps in the lower horizontal row are homomorphisms of monoids with respect to (direct) sums. Moreover, $q^\zeta$ is proper. Denote the composition $\mathfrak{M}^{\zeta-ss} \hookrightarrow \mathfrak{M} \xrightarrow{\mathcal{T}r(W)\circ p} {\mathbb{A}^1}$ with $\mathfrak{Tr}(W)^\zeta$. For a fixed slope $\mu\in \mathbb{R}$, let us introduce the short hand $\phi_{\mathfrak{Tr}(W)^\zeta}(\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu})$ for the object in $R(\mathfrak{M}^{\zeta-ss})\cong \prod_{d\in \mathbb{N}^{Q_0}} R(\mathfrak{M}^{\zeta-ss}_d)$ having components
\[ \mathbb{L}_R^{(d,d)/2}\phi_{\mathfrak{Tr}(W)^\zeta|_{\mathfrak{M}^{\zeta-ss}_d}}=\frac{\mathbb{L}_R^{(d,d)/2}}{[G_d]_R}\rho_{d!}\phi_{\Tr(W)_d|_{X_d^{\zeta-ss}}} \]
if $d$ has slope $\mu$ or $d=0$ and $0$ for the remaining dimension vectors $d$. The idea behind the notation is the following. The vanishing cycle $\phi$ defines a map $\underline{\Ka}_0(\Sch_{\mathfrak{M}^{\zeta-ss}})^{st} \longrightarrow R(\mathfrak{M}^{\zeta-ss})$ mapping $\mathbbm{1}_{\mathfrak{M}^{\zeta-ss}}=(\mathfrak{M}^{\zeta-ss}\to \Spec\CC)^\ast(1)$ to $\phi_{\mathfrak{Tr}(W)^\zeta}$. If we define $\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu}$ to be the object in $\underline{\Ka}_0(\Sch_{\mathfrak{M}^{\zeta-ss}})[\mathbb{L}^{1/2}]^{st}$ whose restriction to $\mathfrak{M}^{\zeta-ss}_d$ is $\mathbb{L}^{(d,d)/2}\mathbbm{1}_{\mathfrak{M}^{\zeta-ss}_d}$ if $d$ has slope $\mu$ or $d=0$ and zero else, then $\phi_{\mathfrak{Tr}(W)^\zeta}(\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu})$ is just the image of $\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu}$ under the induced
map $\underline{\Ka}_0(\Sch_{\mathfrak{M}^{\zeta-ss}})[\mathbb{L}^{1/2}]^{st} \longrightarrow R(\mathfrak{M}^{\zeta-ss})$. One should think of $\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu}$ as (the class of) the motivic intersection complex of $\mathfrak{M}^{\zeta-ss}_\mu\subseteq \mathfrak{M}^{\zeta-ss}$.
Let us define the convolution product on $R(\mathcal{M}^{\zeta-ss})$ by means of
\[ R(\mathcal{M}^{\zeta-ss})\otimes R(\mathcal{M}^{\zeta-ss}) \xrightarrow{\boxtimes} R(\mathcal{M}^{\zeta-ss}\times \mathcal{M}^{\zeta-ss}) \xrightarrow{\oplus_!} R(\mathcal{M}^{\zeta-ss}) \]
and operations $\Sym^n:R(\mathcal{M}^{\zeta-ss}) \to R(\mathcal{M}^{\zeta-ss})$ for $n\in \mathbb{N}$ via
\begin{equation}\label{lambda_operations} R(\mathcal{M}^{\zeta-ss})\xrightarrow{\sigma^n} R(\Sym^n \mathcal{M}^{\zeta-ss}) \xrightarrow{\oplus_!} R(\mathcal{M}^{\zeta-ss}). \end{equation}
\begin{lemma}
For $a\in R(\mathcal{M}^{\zeta-ss})$ with $a_0:=a|_{\mathcal{M}^{\zeta-ss}_0}=0$ the infinite sum $\Sym(a):=\sum_{n\in \mathbb{N}} \Sym^n(a)$ has only finitely many nonzero summands after restriction to $\mathcal{M}^{\zeta-ss}_d$ and, hence, defines a well-defined element in $R(\mathcal{M}^{\zeta-ss})\cong\prod_{d\in \mathbb{N}^{Q_0}} R(\mathcal{M}^{\zeta-ss}_d)$. Conversely, every element $b\in R(\mathcal{M}^{\zeta-ss})$ with $b_0=1\in R(\mathcal{M}^{\zeta-ss}_0)=R(\Spec\CC)$ can be written uniquely as $\Sym(a)$. The map $\Sym(-)$ is a group homomorphism from the additive group $\{a\in R(\mathcal{M}^{\zeta-ss})\mid a_0=0\}$ to the multiplicative group $\{b\in R(\mathcal{M}^{\zeta-ss})\mid b_0=1\}$. The same holds true if we replace $\mathcal{M}^{\zeta-ss}$ with $\mathcal{M}^{ssimp}$ or with $\mathbb{N}^{Q_0}$.
\end{lemma}
\begin{proof} Fix $d\not= 0$. Since $\oplus$ maps $\Sym^n \mathcal{M}^{\zeta-ss}_e$ to $\mathcal{M}^{\zeta-ss}_{ne}$, we get $\Sym^n(a)|_{\mathcal{M}^{\zeta-ss}_d}=0$ for all $n>|d|=\sum_{i\in Q_0}d_i$, and the infinite sum is finite after restriction to $\mathcal{M}^{\zeta-ss}_d$. Conversely, given $b$ we set $a_0=0$. Suppose $a_e\in R(\mathcal{M}^{\zeta-ss}_e)$ has been constructed for all
dimension vectors\footnote{We write $e<d$ if $d=e+e'$ with $0\not=e'\in \mathbb{N}^{Q_0}$.} $e<d$. We put
\[ a_d:=b_d- \sum_{{n_1e_1+\ldots+n_re_r=d\atop 0\not=n_j\in \mathbb{N}, 0\not=e_i\not=e_j\in \mathbb{N}^{Q_0}}} \prod_{j=1}^r \Sym^{n_j}(a_{e_j}).\]
Then $b=\Sym(a)$ for $a\in R(\mathcal{M}^{\zeta-ss})$ with $a|_{\mathcal{M}^{\zeta-ss}_d}=a_d$. Using the properties of $\sigma^n:R(\mathcal{M}^{\zeta-ss})\longrightarrow R(\Sym^n\mathcal{M}^{\zeta-ss})$ we get $\Sym(0)=1$ and $\Sym(a+b)=\Sym(a)\Sym(b)$. In particular, $\{b\in R(\mathcal{M}^{\zeta-ss})\mid b_0=1\}\cong\{a\in R(\mathcal{M}^{\zeta-ss})\mid a_0=0\}$ as groups. The proof for $\mathcal{M}^{ssimp}$ and $\mathbb{N}^{Q_0}$ is similar.
\end{proof}
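For instance, if $Q$ has only one vertex, so that dimension vectors are simply natural numbers, the recursion reads $a_1=b_1$, $a_2=b_2-\Sym^2(a_1)$, $a_3=b_3-\Sym^3(a_1)-\Sym^1(a_1)\,\Sym^1(a_2)$, and so on, the products being convolution products.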
\begin{exercise} \label{master_equation}
Let $\iota:\mathcal{M}^{\zeta-st}\hookrightarrow \mathcal{M}^{\zeta-ss}$ be the inclusion of the moduli space of $\zeta$-stable representations. Show that $\mathbbm{1}_{\mathcal{M}^{\zeta-ss}}=\Sym\big(\iota_!\mathbbm{1}_{\mathcal{M}^{\zeta-st}}\big)$ holds in $\underline{\Ka}_0(\Sch_{\mathcal{M}^{\zeta-ss}})$. Hint: Prove that the strata of the Luna stratification of $\mathcal{M}^{\zeta-ss}$ and the strata of the natural stratification of $\sqcup_{{n_1, \ldots, n_r\atop d_i\not=d_j \forall i\not= j}}\prod_{i=1}^r(\mathcal{M}_{d_i}^{\zeta-ss})^{n_i}/\!\!/S_{n_i}$ given by the conjugacy types of the $S_{n_i}$-stabilizers coincide. In other words, the canonical map $\oplus:\Sym(\mathcal{M}^{\zeta-st}) \longrightarrow \mathcal{M}^{\zeta-ss}$ is a ``constructible'' isomorphism.
\end{exercise}
\begin{exercise}
Show that the restriction of $p^\zeta_!\big(\phi_{\mathfrak{Tr}(W)^\zeta}(\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu})\big)$ to $\mathcal{M}^{\zeta-ss}_0$ is $1$.
\end{exercise}
Using the last exercise and the previous lemma, the following definition makes sense.
\begin{definition}
The Donaldson--Thomas function $\mathcal{DT}(Q,W)^\zeta\in R(\mathcal{M}^{\zeta-ss})$ is the unique element with $\mathcal{DT}(Q,W)^\zeta|_{\mathcal{M}^{\zeta-ss}_0}=0$ such that $\mathcal{DT}(Q,W)^\zeta_\mu:=\mathcal{DT}(Q,W)^\zeta|_{\mathcal{M}^{\zeta-ss}_\mu}$ solves the equation
\[ p^\zeta_!\big(\phi_{\mathfrak{Tr}(W)^\zeta}(\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu})\big)=\Sym\left( \frac{\mathcal{DT}(Q,W)^\zeta_\mu}{\mathbb{L}^{1/2}_R-\mathbb{L}^{-1/2}_R} \right) \]
in $R(\mathcal{M}^{\zeta-ss})$ for all $\mu\in (-\infty,+\infty]$. We also use the notation $\mathcal{DT}(Q,W)^\zeta_d:=\mathcal{DT}(Q,W)^\zeta|_{\mathcal{M}^{\zeta-ss}_d}$. The element $\int_{\mathcal{M}^{\zeta-ss}_d} \mathcal{DT}(Q,W)^\zeta_d=:\DT(Q,W)^\zeta_d\in R(\Spec \CC)$ is called the Donaldson--Thomas invariant of $(Q,W)$ with respect to $\zeta$ for dimension vector $d$. If $W=0$, we simply write $\mathcal{DT}(Q)^\zeta_d$ and $\DT(Q)^\zeta_d$.
\end{definition}
In view of Exercise \ref{master_equation}, one might hope that $\mathcal{DT}(Q,W)^\zeta_d/(\mathbb{L}_R^{1/2}-\mathbb{L}_R^{-1/2})$ is something like $p^\zeta_! \phi_{\mathfrak{Tr}(W)^\zeta}(j_!\mathcal{IC}_{\mathfrak{M}^{\zeta-st}_\mu})$, where $j:\mathfrak{M}^{\zeta-st}_\mu\hookrightarrow \mathfrak{M}^{\zeta-ss}_\mu$ denotes the inclusion, and $\mathcal{IC}_{\mathfrak{M}^{\zeta-st}_\mu}$ is defined similarly to $\mathcal{IC}_{\mathfrak{M}^{\zeta-ss}_\mu}$. Let us assume that we were allowed to commute $p^\zeta_!$ with $\phi_{\mathfrak{Tr}(W)^\zeta}$, which is a priori not clear as $p^\zeta$ is not proper. Then
\[ p^\zeta_! \phi_{\mathfrak{Tr}(W)^\zeta}(j_!\mathcal{IC}_{\mathfrak{M}^{\zeta-st}_\mu})= \phi_{\mathcal{T}r(W)\circ q^\zeta}\left(\frac{\iota_! \mathcal{IC}_{\mathcal{M}^{\zeta-st}_\mu}}{\mathbb{L}^{1/2}_R-\mathbb{L}_R^{-1/2}}\right). \]
for $\mathcal{IC}_{\mathcal{M}^{\zeta-st}_d}=\mathbb{L}_R^{((d,d)-1)/2}\mathbbm{1}_{\mathcal{M}^{\zeta-st}_d}$, and $\mathcal{DT}(Q,W)^\zeta_d=\phi_{\mathcal{T}r(W)\circ q^\zeta_d} (\iota_!\mathcal{IC}_{\mathcal{M}^{\zeta-st}_d})$ follows. This is not quite true. It turns out that the extension $\iota_!\mathcal{IC}_{\mathcal{M}^{\zeta-st}_\mu}$ of $\mathcal{IC}_{\mathcal{M}^{\zeta-st}_d}$ by zero has to be replaced with the ``correct'' extension $\mathcal{IC}_{\overline{\mathcal{M}^{\zeta-st}_d}}$ which restricts to $\mathcal{IC}_{\mathcal{M}^{\zeta-st}_d}$, but might also be nonzero on the boundary of $\mathcal{M}^{\zeta-st}_d$ inside $\mathcal{M}^{\zeta-ss}_d$. However, we have not defined $\mathcal{IC}_{\overline{\mathcal{M}^{\zeta-st}_d}}$ yet.
\begin{definition}
We denote with $\mathcal{IC}^{mot}_{\overline{\mathcal{M}^{\zeta-st}}}\in \underline{\Ka}_0(\Sch_{\mathcal{M}^{\zeta-ss}})[\mathbb{L}^{1/2}]^{st}$ the Donaldson--Thomas function $\mathcal{DT}(Q)^\zeta$ computed with respect to the stackification of the canonical vanishing cycle $\phi^{\underline{\Ka}_0(\Sch)[\mathbb{L}^{1/2}]}_{can}$. Note that $\overline{\mathcal{M}^{\zeta-st}_d}=\mathcal{M}^{\zeta-ss}_d$ if $\mathcal{M}^{\zeta-st}_d\not=\emptyset$ and $\overline{\mathcal{M}^{\zeta-st}_d}=\emptyset$ else.
\end{definition}
The following result justifies the definition.
\begin{theorem}[\cite{MeinhardtReineke}] \label{main_result_1}
If $\zeta$ is generic (see Definition \ref{generic_stability}), the element $\mathcal{IC}^{mot}_{\overline{\mathcal{M}^{\zeta-st}}}$ maps to the classical intersection complex\footnote{Strictly speaking one has to normalize the class of the classical (shifted) intersection complex of $\mathcal{M}^{\zeta-ss}_d$ by multiplication with $(-\mathbb{L}^{1/2})^{(d,d)-1}$ which does not change the underlying perverse sheaf.} $\mathcal{IC}_{\overline{\mathcal{M}^{\zeta-st}}}$ of the closure of $\mathcal{M}^{\zeta-st}$ inside $\mathcal{M}^{\zeta-ss}$ under the map
$\underline{\Ka}_0(\Sch_{\mathcal{M}^{\zeta-ss}})[\mathbb{L}^{1/2}]^{st}\longrightarrow \underline{\Ka}_0(\MHM(\mathcal{M}^{\zeta-ss}))[\mathbb{L}^{1/2}]^{st}$ constructed in Lemma \ref{initial_object}.
\end{theorem}
\begin{example} \rm
Assume that equation (\ref{eq7}) holds for all $a\in R(X)$ and all $n\in \mathbb{N}$, so that $\phi^R_{can}$ has a stacky extension $\phi^{R[\mathbb{L}^{1/2}_R]}_{can}$. In this case, $\mathcal{DT}(Q,W)^\zeta=\mathcal{DT}(Q)^\zeta$ is just the image of $\mathcal{IC}^{mot}_{\overline{\mathcal{M}^{\zeta-st}}}$ under the canonical map $\underline{\Ka}_0(\Sch_{\mathcal{M}^{\zeta-ss}})[\mathbb{L}^{1/2}]^{st} \longrightarrow R(\mathcal{M}^{\zeta-ss})[\mathbb{L}^{1/2}]^{st}$ of Lemma \ref{initial_object}, and we may define $\mathcal{IC}_{\overline{\mathcal{M}^{\zeta-st}}}:=\mathcal{DT}(Q)^\zeta\in R(\mathcal{M}^{\zeta-ss})[\mathbb{L}^{1/2}]^{st}$. As for mixed Hodge modules, one can show that $\mathcal{IC}_{\overline{\mathcal{M}^{\zeta-st}}}$ has a lift in $R(\mathcal{M}^{\zeta-ss})[\mathbb{L}^{-1/2}]$ if the canonical map $\underline{\Ka}_0(\Sch)\to R$ factorizes through $\underline{\Ka}_0(\MHM)$ or if $R$ is a sheaf in the \'{e}tale topology with $[\mathbb{P}^r]$ acting as a nonzero divisor in each group $R(X)$ for all $r\in \mathbb{N}$. Note that we can replace $R$ with $R^{gm}$ to ensure equation (\ref{eq7}).
\end{example}
One can also prove the following result which should be seen as the analogue of Exercise \ref{master_equation}.
\begin{theorem}[\cite{DavisonMeinhardt3}] \label{main_result_2}
Recall that every stacky vanishing cycle with values in $R$ satisfying our assumptions defines a map $\phi_{\mathcal{T}r(W)\circ q^\zeta}:\underline{\Ka}_0(\Sch_{\mathcal{M}^{\zeta-ss}})^{st}\longrightarrow R(\mathcal{M}^{\zeta-ss})$. If this map commutes with the $\Sym^n$-operations of equation (\ref{lambda_operations}) for every $n\in \mathbb{N}$, and if $\zeta$ is generic, then $\mathcal{DT}(Q,W)^\zeta=\phi_{\mathcal{T}r(W)\circ q^\zeta}\big(\mathcal{IC}^{mot}_{\overline{\mathcal{M}^{\zeta-st}}}\big)$.
\end{theorem}
\begin{example} \rm The assumption on $\phi_{\mathcal{T}r(W)\circ q^\zeta}$ is true for $\phi=\phi^{mhm}$. Hence, if $\zeta$ is generic, $\mathcal{DT}(Q,W)^\zeta_d=\phi^{mhm}_{\mathcal{T}r(W)\circ q^\zeta_d}\big(\mathcal{IC}_{\mathcal{M}^{\zeta-st}_d}\big)$ if $\mathcal{M}^{\zeta-st}_d\not=\emptyset$ and zero else. It is not known yet whether or not $\phi^{mot}_{\mathcal{T}r(W)\circ q^\zeta}$ commutes with the $\Sym^n$-operations, and we cannot apply the theorem. However, a counterexample is also not known, and conjecturally the theorem also holds for $\phi^{mot}$.
\end{example}
\begin{exercise} Use the support property of $\phi^R$ (see Corollary \ref{support_property}) to show that $\mathcal{DT}(Q,W)^\zeta_d$ is supported on $\mathcal{M}^{W,\zeta-ss}_d$, i.e.\ is an element in $R(\mathcal{M}^{W,\zeta-ss}_d)$, where $\mathcal{M}^{W,\zeta-ss}_d$ is the moduli space parameterizing $\zeta$-polystable $\mathbb{C} Q$-representations $V$ of dimension $d$ such that $\partial W/\partial \alpha=0$ on $V$ for all $\alpha\in Q_1$.
\end{exercise}
\subsection{Examples}
The aim of this section is to provide some examples of (motivic) Donaldson--Thomas invariants.
\subsubsection{\rm \textbf{The $m$-loop quiver}}
Let us consider the quiver $Q^{(m)}$ with one vertex and $m$ loops.
The choice of a stability condition is irrelevant as $\mathfrak{M}^{\zeta-ss}=\mathfrak{M}$. We take the canonical vanishing cycle of $\underline{\Ka}_0(\Sch)[\mathbb{L}^{1/2}]$ and are only interested in Donaldson--Thomas invariants. As $\underline{\Ka}(\Sch_\mathbb{N})[\mathbb{L}^{1/2}]^{st}\cong \Ka(\Sch_\CC)[\mathbb{L}^{-1/2},(\mathbb{L}^n-1)^{-1}:n\in \mathbb{N}_\ast][[t]]$, we end up with the following power series
\[ A^{(m)}(t):=(\dim \circ p)_!\big(\phi_{\mathfrak{Tr}(W)}(\mathcal{IC}_{\mathfrak{M}})\big)=\sum_{d\ge 0}\frac{\mathbb{L}^{(m+1)d^2/2}}{[\Gl(d)]}t^d=\sum_{d\ge 0} \frac{\mathbb{L}^{(md^2+d)/2}}{\prod_{i=1}^d(\mathbb{L}^i-1)}t^d.\]
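Here we used that $\mathfrak{M}_d=\AA^{md^2}/\Gl(d)$ for the $m$-loop quiver, that $(d,d)=(1-m)d^2$, and that $[\Gl(d)]=\mathbb{L}^{d(d-1)/2}\prod_{i=1}^d(\mathbb{L}^i-1)$, so that the pushforward of $\mathbbm{1}_{\mathfrak{M}_d}$ to a point contributes the factor $\mathbb{L}^{md^2}/[\Gl(d)]$ in each degree.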
Note that the series is also well-defined for $m\in \mathbb{Z}$.
\begin{exercise}
Prove the identity $A^{(m)}(\mathbb{L} t)-A^{(m)}(t)=\mathbb{L}^{(m+1)/2}\,t\, A^{(m)}(\mathbb{L}^mt)$ for all $m\in \mathbb{Z}$.
\end{exercise}
For $m\in \mathbb{N}$ we introduce the series
\[ B^{(m)}(t):=A^{(m)}(\mathbb{L} t)/A^{(m)}(t)=\Sym\big( \sum_{d\ge 1} \mathbb{L}^{1/2}[\mathbb{P}^{d-1}]\DT(Q^{(m)})_d\,t^d\big), \]
where we used the properties of $\Sym$ and the fact that $\dim_!$ commutes with $\Sym$. Moreover, due to the previous exercise
\[ B^{(m)}(t)=1+\mathbb{L}^{(m+1)/2}t \prod_{i=0}^{m-1}B^{(m)}(\mathbb{L}^it). \]
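Indeed, dividing the identity of the previous exercise by $A^{(m)}(t)$ and writing $A^{(m)}(\mathbb{L}^mt)/A^{(m)}(t)=\prod_{i=0}^{m-1}B^{(m)}(\mathbb{L}^it)$ as a telescoping product yields this functional equation.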
For $m=0$ the empty product on the right hand side is $1$, and we obtain $B^{(0)}(t)=1+\mathbb{L}^{1/2}t$ as well as
\[ \DT(Q^{(0)})_d=\begin{cases} 1 & \mbox{for }d=1, \\ 0&\mbox{else.} \end{cases} \]
This is in full agreement with Theorem \ref{main_result_1} as $\mathcal{M}^{simp}_d=\Spec\CC$ for $d=1$ and $\mathcal{M}^{simp}_d=\emptyset$ else.\\
For $m=1$, we get $B^{(1)}(t)=1+\mathbb{L} tB^{(1)}(t)$, and $B^{(1)}(t)=1/(1-\mathbb{L} t)=\sum_{d\in \mathbb{N}} \mathbb{L}^dt^d$ follows. Hence,
\[ \DT(Q^{(1)})_d=\begin{cases} \mathbb{L}^{1/2} & \mbox{for }d=1, \\ 0&\mbox{else.} \end{cases} \]
Again, this is in full agreement with Theorem \ref{main_result_1} as $\mathcal{M}^{simp}_d={\mathbb{A}^1}$ for $d=1$ and $\mathcal{M}^{simp}_d=\emptyset$ else.\\
Solving the pseudo-algebraic equation for $m\ge 2$ is much more complicated, but the answer is given as follows. Note that $\mathbb{Z}/(d)=:C_d$ acts on the set $U_d:=\{ (a_1,\ldots,a_d)\in \mathbb{N}^d\mid a_1+\ldots+a_d=(m-1)d\}$ by cyclic permutation. We call $a=(a_1,\ldots,a_d)$ primitive if $\Stab_{C_d}(a)=\{0\}$, and almost primitive if $a$ is primitive or $m\equiv 0 (2), d\equiv 2 (4)$ and $a=(b_1,\ldots,b_{d/2},b_1,\ldots,b_{d/2})$ for some primitive $(b_1,\ldots,b_{d/2})$. The subset $U^{ap}_d:=\{ a\in U_d\mid a\mbox{ is almost primitive}\}$ is obviously stable under the $C_d$-action. Define $\deg(a)=\sum_{i=1}^d(d-i)a_i$ and $\deg(C_d\cdot a)=\min\{ \deg(a')\mid a'\in C_d\cdot a\}$.
\begin{theorem}[\cite{Reineke4}]
Let $d\ge 1$ and $m\ge 2$. Then $\dim(\mathcal{M}^{simp}_d)=(m-1)d^2+1$ and
\[\DT(Q^{(m)})_d=\mathbb{L}^{\frac{(m-1)d^2+1}{2}}\frac{1-\mathbb{L}^{-1}}{1-\mathbb{L}^{-d}}\sum_{C_d\cdot a\in U^{ap}_d/C_d} \mathbb{L}^{-\deg(C_d\cdot a)}. \]
In particular, $\chi_c(\DT(Q^{(m)})_d)=\frac{(-1)^{(m-1)d^2+1}}{d}|U^{ap}_d/C_d|$.
\end{theorem}
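For instance, for $m=d=2$ we have $U_2=\{(2,0),(1,1),(0,2)\}$ with $\deg(2,0)=2$, $\deg(1,1)=1$ and $\deg(0,2)=0$; all three tuples are almost primitive, and the two $C_2$-orbits satisfy $\deg(C_2\cdot(0,2))=0$ and $\deg(C_2\cdot(1,1))=1$. The theorem then gives $\DT(Q^{(2)})_2=\mathbb{L}^{5/2}\,\frac{1-\mathbb{L}^{-1}}{1-\mathbb{L}^{-2}}\,(1+\mathbb{L}^{-1})=\mathbb{L}^{5/2}$ and $\chi_c(\DT(Q^{(2)})_2)=-1$.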
\begin{exercise}
By taking the Euler characteristic of every coefficient of $B^{(m)}(t)$, we get a series $\beta^{(m)}(t)\in \mathbb{Z}[[t]]$ satisfying $\beta^{(m)}(t)=1+(-1)^{m+1}t\,\beta^{(m)}(t)^m$, but also $\beta^{(m)}(t)=\prod_{d\ge 1}(1-t^d)^{d\Omega^{(m)}_d}$ using the shorthand $\Omega^{(m)}_d:=\chi_c(\DT(Q^{(m)})_d)$. Prove that
\[ \Omega^{(2)}_d=\chi_c(\DT(Q^{(2)})_d)=\frac{1}{2d^2}\sum_{n|d} (-1)^{n+1}\mu(d/n){2n \choose n} \]
for $d\ge 1$, where $\mu(-)$ denotes the M\"obius function. Hint: Solve the quadratic equation for $\beta^{(2)}(t)$ and use the logarithmic derivative to prove the formula
\[ \sum_{d\ge 1}2d^2\Omega^{(2)}_d \frac{t^d}{1-t^d}=\sum_{n\ge 1} \Big(\sum_{d|n} 2d^2\Omega^{(2)}_d\Big)t^n= 1-\frac{1}{\sqrt{1+4t}}. \]
The Taylor expansion of $1/\sqrt{1+4t}$ is $\sum_{n\in \mathbb{N}}(-1)^n{2n\choose n}t^n$. Finally, use the M\"obius function to solve for $\Omega^{(2)}_d$. \\
One can show $\Omega^{(2)}_d=F(d)$ for $F(d)$ being the coefficients introduced in \cite{JoyceMF}, equation (53).
\end{exercise}
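For instance, the formula of the exercise gives $\Omega^{(2)}_1=1$, $\Omega^{(2)}_2=-1$, $\Omega^{(2)}_3=1$ and $\Omega^{(2)}_4=-2$.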
\subsubsection{\rm\textbf{Dimension reduction}}
Let $Q$ be a quiver with potential $W$. Let $\zeta$ be a stability condition and $\mu\in [-\infty,\infty)$. Assume $\mathfrak{M}^{\zeta-ss}_d=\mathfrak{M}_d$ for all $d\in \Lambda_\mu$. As an example, we may take the King stability condition $\theta=0$ and $\mu=0$. Given a motivic theory $R$, the following arguments apply to $\phi^{R,st}$ with values in $R^{gm,st}_{mon}$. Without loss of generality, we consider the case $R=\underline{\Ka}_0(\Sch)$, i.e.\ $\phi=\phi^{mot,st}$. As in the previous example, we are only interested in Donaldson--Thomas invariants. A subset $C\subset Q_1$ such that every cycle in $W$ contains exactly one arrow in $C$ is called a cut of $W$. Let $\mathbb{G}_m$ act on $X_d=\prod_{\alpha:i\to j} \Hom(\CC^{d_i},\CC^{d_j})$ by multiplying a linear map corresponding to $\alpha\in C$ with $g\in \mathbb{G}_m$. By assumption, $\Tr(W)_d$ is homogeneous of degree one, and $\int_{X_d}\phi_{\Tr(W)_d}^{mot}$ is the residue class of $[X_d\xrightarrow{\Tr(W)_d}{\mathbb{A}^1}] $ in $\Ka_0(\Sch_\CC)_{mon}$ according to Theorem \ref{equivariant_case}. Consider the projection $\tau_d:X_d\to Y_d$ with $Y_d=\prod_{C\not\ni \alpha:i\to j}\Hom(\CC^{d_i},\CC^{d_j})$ which is a trivial vector bundle with fiber $F=\prod_{C\ni \alpha:i\to j}\Hom(\CC^{d_i},\CC^{d_j})$. As $\Tr(W)_d$ is linear along the fibers, we can think of it as being a section $\sigma^W_d$ of the dual bundle with fiber $F^\vee=\prod_{C\ni \alpha:i\to j}\Hom(\CC^{d_j},\CC^{d_i})$ using the trace pairing. Indeed, it maps a point $M=(M_\alpha)_{\alpha\not\in C}$ to $(\frac{\partial W}{\partial \alpha}(M))_{\alpha\in C}\in F^\vee$.
\begin{exercise}
Show that $[\tau_d^{-1}\{\sigma^W_d\not=0\} \xrightarrow{\Tr(W)_d}{\mathbb{A}^1}]$ is in $\pr_\CC^\ast \Ka_0(\Sch_\CC)\subset \Ka_{0,\mathbb{G}_m}(\Sch_{\mathbb{A}^1})$. Hint: Choose an open cover $\cup_{i\in I} U_i=\{\sigma^W_d\not= 0\}\subseteq Y_d$ such that $\tau_d: \tau_d^{-1}U_ i \to U_i$ splits into the kernel of $\Tr(W)_d$ and a complement of rank one.
\end{exercise}
Using the exercise, we obtain
\begin{eqnarray*} [X_d\xrightarrow{\Tr(W)_d}{\mathbb{A}^1}]&=&[\tau_d^{-1}\{\sigma^W_d=0\}\xrightarrow{0}{\mathbb{A}^1}]\;=\;[F][\{\sigma^W_d=0\}]\\ &=&\mathbb{L}^{\sum_{C\ni \alpha:i\to j}d_id_j}\left[\left\{\ M\in Y_d\mid \frac{\partial W}{\partial \alpha}(M)=0 \,\forall\, \alpha\in C\right\}\right]. \end{eqnarray*}
This identity holds in $\Ka_0(\Sch_\mathbb{C})_{mon}$. Therefore,
\begin{eqnarray*} \lefteqn{ \Sym\left( \frac{\DT(Q,W)^\zeta_\mu}{\mathbb{L}^{1/2}-\mathbb{L}^{-1/2}} \right)\; =\; (\dim\circ p)_!\big(\phi_{\mathfrak{Tr}(W)}(\mathcal{IC}_{\mathfrak{M}_\mu})\big)} \\&=&\sum_{d\in \Lambda_\mu} \mathbb{L}^{(d,d)/2+\sum_{C\ni \alpha:i\to j}d_id_j}\frac{\left[\left\{M\in Y_d\mid \frac{\partial W}{\partial \alpha}(M)=0\, \forall \,\alpha\in C\right\}\right]}{[G_d]} \, t^d, \end{eqnarray*}
is actually in $\Ka_0(\Sch_\CC)[\mathbb{L}^{-1/2},(\mathbb{L}^n-1)^{-1}:n\in \mathbb{N}_\ast][[t_i\mid i\in Q_0]]$, where we used
\[\Ka_0(\Sch_\CC)[\mathbb{L}^{-1/2},(\mathbb{L}^n-1)^{-1}:n\in \mathbb{N}_\ast][[t_i: i\in Q_0]]=\underline{\Ka}(\Sch_{\mathbb{N}^{Q_0}})[\mathbb{L}^{1/2}]^{st} \subset \underline{\Ka}(\Sch_{\mathbb{N}^{Q_0}})^{st}_{mon}.\]
This reduction process is usually called dimension reduction, and \\
$\left\{M\in Y_d\mid \frac{\partial W}{\partial \alpha}(M)=0\, \forall \,\alpha\in C\right\}/G_d $
is the stack of $d$-dimensional representations of the algebra $\CC Q/(\alpha,\partial W/\partial \alpha\mid \alpha\in C)$.
\subsubsection{\rm\textbf{0-dimensional sheaves on a Calabi--Yau 3-fold}}
Let us illustrate the concept of dimension reduction using the quiver $Q^{(3)}$ with one vertex and three loops $x,y,z$. The choice of the stability condition does not matter. We take the potential $W=[x,y]z=xyz-yxz$, and $\CC Q/(\partial W/\partial\alpha\mid \alpha\in Q^{(3)}_1)=\CC[x,y,z]$ follows. Hence, representations of this algebra are just 0-dimensional sheaves of finite length $d$ on the Calabi--Yau 3-fold $\AA^3$. We can take $C=\{z\}$ and obtain $\CC Q/(z,\partial W/\partial z)=\CC[x,y]$, and representations of this algebra are 0-dimensional sheaves of finite length $d$ on the Calabi--Yau 2-fold $\AA^2$. Using $(d,d)=-2d^2$, we have to compute
\[ \sum_{d\in \mathbb{N}} \frac{\left[\left\{M\in Y_d\mid \frac{\partial W}{\partial \alpha}(M)=0\, \forall \,\alpha\in C\right\}\right]}{[\Gl(d)]}\,t^d \]
which has already been done by Feit and Fine half a century ago in \cite{FeitFine}. The answer is
\[ \Sym\Big( \frac{1}{\mathbb{L}-1}\sum_{d\ge 1} [\AA^2] t^d\Big), \]
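(note that $\frac{[\AA^2]}{\mathbb{L}-1}=\frac{\mathbb{L}^2}{\mathbb{L}-1}=\frac{\mathbb{L}^{3/2}}{\mathbb{L}^{1/2}-\mathbb{L}^{-1/2}}$)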
and $DT(Q^{(3)},W)_d=\mathbb{L}^{3/2}=\int_{\AA^3} \mathcal{IC}^{mot}_{\AA^3}$ follows for all $d\ge 1$ if we define $\mathcal{IC}^{mot}_{X}=\mathbb{L}^{-\dim(X)/2}\mathbbm{1}_X\in \underline{\Ka}_0(\Sch_X)^{st}_{mon}$ for every smooth equidimensional variety $X$. This example has been generalized to arbitrary Calabi--Yau 3-folds by Behrend, Bryan, Szend
B &= \biggl\{ T_+-T \geq - 2 g(\gamma),T-T_- \geq - 2 g(\gamma) \; \biggr\}.
\end{align*}
Then we have
\begin{align}
\P (A^c) \sim e^{- \gamma \infty}, \notag \\
\P (A, B^c) \sim e^{- \gamma \infty}. \notag
\end{align}
\end{lem}
\begin{proof}
In the special case of $g(t) = e^t,$ we obtain the solution of $T_+$:
\begin{equation}
T_+
= \frac 1 {\sqrt \gamma}W_+ + \frac 1 \gamma \log\biggl\{ 1+ \int_0^t \gamma e^{\gamma(T_0 - \frac 1 {\sqrt \gamma} W_+) } ds \biggr\}. \notag
\end{equation}
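The displayed formula follows from the equation $dT_+ = \frac{1}{\sqrt \gamma} dW_+ + e^{\gamma(T_0 - T_+)} dt$ if we assume $T_+(0)=0$: setting $Y=T_+-\frac 1 {\sqrt \gamma}W_+$ one finds $d\bigl(e^{\gamma Y}\bigr)=\gamma e^{\gamma(T_0-\frac 1 {\sqrt \gamma}W_+)}dt$, which integrates to the expression above. Heuristically, $\frac 1 \gamma \log\bigl(1+\int_0^t \gamma e^{\gamma h(s)}ds\bigr)\to \sup_{[0,t]}h_+$ as $\gamma\to\infty$ for continuous $h$, which motivates the following definition of $\widetilde T_+$.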
\begin{equation}
\widetilde T_+
= \frac 1 {\sqrt \gamma}W_+ + \sup_{[0,t]} \biggl(T_0 - \frac{1}{\sqrt{\gamma}}W_+ \biggr)_+. \notag
\end{equation}
In the general case, notice that $T_+$ and $\widetilde T_+$ are exponentially equivalent. We can replace $T_+$ by $\widetilde T_+.$
Therefore, we define
\begin{equation}
A_\epsilon = \biggl\{s \in [0,t]: T_0(s) -\frac{1}{\sqrt \gamma}W_+(s) - \sup_{[0,t]} (T_0 -\frac{1}{\sqrt \gamma}W_+)_+ \geq - \epsilon \biggr\},\notag
\end{equation}
and let $|A_\epsilon|$ denote the Lebesgue measure of $A_\epsilon$.
Setting $\eta = e^{\gamma (- f + \epsilon - \frac{\log \gamma}{\gamma})}$ and $\delta = \epsilon,$ we have
\begin{align}
&\biggl\{T_+(t) - T_0(t) \leq - f \biggr\}
\subset \biggl\{ \log |A_\epsilon| \leq \gamma(-f + \epsilon) - \log \gamma \biggr\} \notag \\
&\subset \biggl\{\sup_{t,s \in [0,1], |t-s| < \eta } |(T_0 - \frac 1 {\sqrt \gamma}W_+)(t) -(T_0 - \frac 1 {\sqrt \gamma}W_+)(s) | > \epsilon \biggr\}. \nonumber
\end{align}
Since $\frac {\sqrt \gamma} {\sqrt 2}(T_0 - \frac 1 {\sqrt \gamma}W_+)$ is a standard Brownian motion,
we have
\begin{align}
& \frac 1 \gamma \log \P\{T_+(t) - T_0(t) \leq - f \}
\leq \frac 1 \gamma \log\{\frac{1}{\epsilon \sqrt {\gamma \eta}}\} - \frac{\delta^{2}}{8 \eta} \notag
\end{align}
Setting $\epsilon = f$,
the right-hand side is $ \frac {\log {\gamma }} {3\gamma} - \frac{ \gamma ^ {\frac 1 3}} 8, $
which converges to $- \infty$ as $\gamma \to \infty$. Therefore, we have
\begin{equation}
\P \{\inf_{[0,1]}(T_+ - T_0) \leq -f(\gamma) = - \frac 1 {\sqrt \gamma}\} \sim e^{-\gamma \infty}.\nonumber
\end{equation}
Similarly, we can prove
\begin{equation}
\P \{\inf_{[0,1]}(T_0 - T_-) \leq -f(\gamma) = - \frac 1 {\sqrt \gamma}\} \sim e^{- \gamma \infty}.\nonumber
\end{equation}
Therefore, we conclude that
$\P \{A^c\} \sim e^{-\gamma \infty}. $
For those sample paths which satisfy $\sup_{[0,1]}(T - T_+) \geq 2g(\gamma)$, there is a stopping time $s$ such that
$$(T-T_+)(s) = g(\gamma). $$
Recall that for $t>s,$
\begin{align}
&T_+ (t) -T_+(s)= \frac 1 {\sqrt \gamma}( W_+ (t) - W_+(s))+ \int_s^t e^{ \gamma(T_0 - T_+) }. \nonumber
\end{align}
Then we have the lower bound:
$$T_+ (t) -T_+(s) \geq \frac 1 {\sqrt \gamma}( W_+ (t) - W_+(s)) . $$
Note that in $A$,
$$T_+ - T_- \geq -2f(\gamma).$$
For $T - T_+ \geq g(\gamma)$, we have
\begin{equation*}
T_- - T \leq 2f(\gamma) - g(\gamma)=0 .
\end{equation*}
Hence,
\begin{align*}
&\P \biggl\{A, \sup_{[0,1]}\{T-T_+\} \geq 2 g(\gamma) \biggr\} \nonumber \\
&\leq \P \biggl\{ \sup_{t \in [s,1]} \biggl\{ \frac 1 {\sqrt \gamma} (W(t)-W(s)) + \sup_{[s,1]} \{(-e^{\gamma g(\gamma) }+1)(t-s) - T_+\Big|^t_s \biggr\} \geq g(\gamma) \biggr\}.
\end{align*}
We define
\begin{align*}
A_1(t) &= \frac 1 {\sqrt \gamma} (W(t)- W(s))+ \frac 1 2 (-e^{\gamma g(\gamma)}+ 1)(t-s), \\
A_2(t) &= -\frac 1 {\sqrt \gamma} (W_+(t)- W_+(s))+ \frac 1 2 (-e^{\gamma g(\gamma)}+ 1)(t-s) .
\end{align*}
It follows from the above arguments that
\begin{align*}
&\P\biggl\{ \sup_{[s,1]}\{T-T_+\} \geq 2 g(\gamma)\biggr\} \\
&\leq \P\biggl\{ \sup_{[s,1]}A_1 \geq \frac 1 2 g(\gamma) \biggr\} + \P\biggl\{ \sup_{[s,1]}A_2 \geq \frac 1 2 g(\gamma)\biggr\}.
\end{align*}
By the strong Markov property,
\begin{align}
&\P\biggl\{ \sup_{[s,1]} A_1 \geq \frac 1 2 g(\gamma) \biggr\}
\leq \P\biggl\{\sup_{[0,1]} A_1 |_{s=0} \geq \frac 1 2 g(\gamma) \biggr\} \nonumber \\
\leq &\P\biggl\{ \sup_{[0, \gamma ^{-k}]} A_1 |_{s=0} \geq \frac 1 2 g(\gamma) \biggr\} + \P\biggl\{ \sup_{[ \gamma ^{-k},1]} A_1|_{s=0} \geq \frac 1 2 g(\gamma) \biggr\} . \nonumber
\end{align}
For the first term,
\begin{align}
\P\biggl\{ \sup_{[0, \gamma ^{-k}]} A_1 \geq \frac 1 2 g(\gamma) \biggr\}
\leq &\P\biggl\{ \sup_{[0, \gamma ^{-k}]} \frac 1 {\sqrt \gamma} W(t) \geq \frac 1 2 g(\gamma) \biggr\} \notag \\
\leq &\frac{4 \gamma^{\frac {1+k} 2}}{\sqrt {2 \pi }g(\gamma)} e^{-\frac 1 2 g(\gamma) \gamma^{1+k}}. \nonumber
\end{align}
For the second term,
\begin{align*}
&\P\biggl\{ \sup_{[\gamma^{-k},1]}A_1 \geq \frac 1 2 g(\gamma) \biggr\} \\
&\leq \frac 4 { (e^{\gamma g(\gamma) }-1) \gamma^{\frac 1 2 -k}} \exp\biggl\{ - \frac 1 8 \gamma^{1-2k} (e^{\gamma g(\gamma) }-1)^2 \biggr\}.
\end{align*}
Setting $ k = \frac 1 2,$ by our choice of $f,g,$ we have:
$$\P\biggl\{\sup_{[s,1]} A_1 \geq \frac 1 2 g(\gamma)\biggr\}\sim e^{-\gamma \infty}.$$
Similar arguments show that $\P\bigl\{\sup_{[s,1]}A_2 \geq \frac 1 2 g(\gamma)\bigr\} \sim e^{-\gamma \infty}$, and
$$\P\{A, \sup_{[0,1]}\{T-T_+\} \geq 2 g(\gamma)\}$$
has an infinite convergence rate. Also,
$$\P\biggl\{A, \inf_{[0,1]}\{T-T_-\}\leq -2g(\gamma) \biggr\}$$
has an infinite convergence rate.
Therefore, we conclude that
$$ \P\{A,B^c\} \sim e^{-\gamma \infty}. $$
\end{proof}
\subsubsection{General Case}
We showed that the particles are almost interlaced by the bounds $f(\gamma), g(\gamma).$
We will prove a similar result for all $\{T_{n,k}; 1\leq k \leq n \leq N\}. $
We write $T_0, T_+, T_-, T$ as $T_{1,1}, T_{2,1}, T_{2,2}, T_{3,2}$,
and let $f_1(\gamma)= f(\gamma), g_1(\gamma)= g(\gamma).$
We let
\begin{align*}
A_1 &= \biggl\{T_{2,1} - T_{1,1} \geq -f_1(\gamma),T_{1,1} - T_{2,2} \geq -f_1(\gamma) \biggr\}, \\
B_1 &= \biggl\{ T_{2,1}-T_{3,2} \geq -2 g(\gamma),T_{3,2}-T_{2,2} \geq -2 g(\gamma) \biggr\}, \\
A_1 &\subset C_1 = \{T_{2,1} - T_{2,2} \geq -g_1(\gamma)\}.
\end{align*}
Then,
\begin{align*}
\mathbb{P}\{ A_1^c \}&\sim e^{-\gamma \infty}, \\
\mathbb{P}\{C_1^c\}&\sim e^{-\gamma \infty}, \\
\mathbb{P}\{C_1 \cap B_1^c\}&\sim e^{-\gamma \infty}.
\end{align*}
For
\begin{align*}
A_n &= \{T_{n,k} - T_{n+1,k} \geq -f_n(\gamma), T_{n+1,k+1} - T_{n,k} \geq -f_n(\gamma), 1 \leq k \leq n\}, \\
C_n &= \{T_{n+1,k} - T_{n+1,k+1} \geq -g_n(\gamma), 1 \leq k \leq n\},
\end{align*}
we find $A_n \subset C_n$ if we set $2f_n(\gamma) = g_n(\gamma).$
Let $g_{n}(\gamma) = 4^{n-1}g(\gamma)$, and
\begin{equation}
B_n = \{ T_{n+1,k}-T_{n,k}\geq -g_n(\gamma), T_{n,k} - T_{n+1, k+1}\geq - g_n(\gamma), 1 \leq k \leq n \}. \nonumber
\end{equation}
By a similar argument as the previous subsection, we can show
\begin{equation}
\mathbb{P}\{C_n \cap B_n^c\} \sim e^{-\gamma \infty}. \nonumber
\end{equation}
Thus, the claim holds for $A_{n+1}.$ By induction, we obtain all the relations.
Moreover, we can replace $f_n, g_n$ by some positive bound $\delta,$ and introduce the events
$$A_n(\delta), B_n(\delta), C_n(\delta).$$
Since $f_n(\gamma), g_n(\gamma) \to 0$ as $\gamma \to \infty,$
the events $A_n(\delta), B_n(\delta), C_n(\delta)$ are still dominant, meaning that
\begin{align*}
&\P \{A_n(\delta)^c\} \sim e^{-\gamma \infty}, \\
&\P \{C_n(\delta)^c\} \sim e^{-\gamma \infty}, \\
&\P \{C_n(\delta) \cap B_n(\delta)^c\} \sim e^{-\gamma \infty}.
\end{align*}
\subsection{Proof for the Main Theorem}
\label{main_theorem}
Similarly to the arguments in the previous subsections, we first show the result for the four-particle case.
\subsubsection{The Case for Four Particles}
Recall the four particles introduced in the previous section.
\begin{align*}
dT_0 &= \frac{1}{\sqrt \gamma} dW_0, \\
dT_+ &= \frac{1}{\sqrt \gamma} dW_+ + e^{\gamma(T_0 - T_+)} dt, \\
dT_- &= \frac{1}{\sqrt \gamma} dW_- - e^{\gamma(T_- - T_0)}dt,\\
dT &= \frac{1} {\sqrt \gamma} dW + (e^{\gamma(T_- - T)} - e^{\gamma(T- T_+)})dt.
\end{align*}
It is even easier to take the first three particles into account, the rate function of which is easily computed.
By Lemmas \ref{lem3.1} and \ref{lem3.2}, $T_{\pm}$ is exponentially equivalent to
\begin{equation}
\label{3.1}
\widetilde T_{\pm}(t) = T_{\pm}(0) + \frac 1 {\sqrt \gamma}W_{\pm}(t) \pm \sup_{[0,t]} (\pm(T_0(s) - \frac 1 {\sqrt \gamma}W_{\pm}(s) - T_{\pm}(0)))_+.\nonumber
\end{equation}
It is easy to show that the map from $ W_0,W_{\pm} $ to $T_0,\widetilde T_{\pm}$ is continuous from $(C_0([0,1]), \mathcal{L}_\infty)$ to itself.
Therefore, we can apply the contraction principle.
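Recall that the contraction principle states that if a family of random variables satisfies a large deviation principle with a good rate function $I$ and $F$ is a continuous map, then the image family satisfies a large deviation principle with the good rate function $I'(y)=\inf\{I(x): F(x)=y\}$. Here it is combined with Schilder's theorem, which provides the large deviation principle for the scaled Brownian motions $\frac{1}{\sqrt \gamma}W_0, \frac{1}{\sqrt \gamma}W_{\pm}$.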
The difficulty lies in $T$. We will work on the limit of the probability
\begin{equation}
\P\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta, \|T- \phi\|\leq \delta \},\nonumber
\end{equation}
as $\gamma \to \infty$ and $\delta \to 0.$
While the corresponding upper bound for the probability above is similar and even easier to obtain, we adopt the following strategy for the lower bound.
First, we note that the above probability is no smaller than
\begin{equation}
\lim_{\delta \to 0} \lim_{\delta_1 \to 0} \lim_{\gamma \to \infty} \P\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_1, \|T- \phi\|\leq \delta \}.\nonumber
\end{equation}
We consider $$\eta = \delta_2 - \delta_1 - \epsilon >0.$$
\subsubsection{Step 1: Fixing Intervals:}
Recall the events
\begin{align*}
A &= \biggl\{T_+ - T_0 \geq -\epsilon,T_0 - T_- \geq -\epsilon \; \biggr\}, \\
B &= \biggl\{ T_+ - T \geq -\epsilon,T - T_- \geq -\epsilon \; \biggr \}, \\
A &\subset C = \biggl\{T_+ - T_- \geq -2\epsilon \; \biggr\}.
\end{align*}
We have previously proved that
$
\P (A^c) \sim e^{- \gamma \infty},
\P (C^c) \sim e^{- \gamma \infty},
\P (C, B^c) \sim e^{-\gamma \infty}.
$
Hence these events $A,B,C$ are `dominant' meaning that the probabilities of $A,B,C$ will converge to one as $\epsilon \to 0.$
Consider the time interval
\begin{equation}
\label{3.3.3}
P\doteq \{t: \phi_+ - \eta \leq \phi \leq \phi_- + \eta\},
\end{equation}
which is a closed set.
Its complement in $(0,1)$ is an open set, and thus a countable union of intervals: $P^c \doteq \cup_{n \in J} I_n.$
Denote $I_n = (a_n, b_n).$
Meanwhile, for $t \in P,$ we have
\begin{align*}
T(t) &> T_-(t) - \epsilon \geq \phi_-(t) -\delta_1 - \epsilon \geq \phi(t) -\delta_1 - \epsilon -\eta =\phi(t) - \delta_2,\\
T(t) &< T_+(t) + \epsilon \leq \phi_+(t) +\delta_1 + \epsilon \leq \phi_+(t) +\delta_1 + \epsilon +\eta = \phi(t) + \delta_2.
\end{align*}
In other words, $|T- \phi|\leq \delta_2 < \delta$ holds on this part, and therefore we only need to consider $I_n$ for each $n$. We obtain the lower bound for the probability
\begin{align}
&\P\biggl\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_1, \|T- \phi\|_{\cup I_n(\eta)} \leq \delta \biggr\} \notag \\
&\geq \P\biggl\{A_\epsilon, \|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_1, \|T- \phi\|_{\cup I_n(\eta)}\leq \delta \biggr\} \nonumber \\
\label{3.3.1}
&\geq \P\biggl\{A_\epsilon, \|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_1, \|T- \phi - (T-\phi)(a_n)\|_{ I_n(\eta)}\leq \delta- \delta_2, n \in J \biggr\} .
\end{align}
Then we replace $T - T(a_n)$ by $\widetilde T _n$.
Next, we replace $\widetilde T _n$ by the process $\widetilde T _n(\phi)$ defined by
\begin{equation}
d\widetilde T _n(\phi) = \frac{1}{\sqrt \gamma} dW_n + (e^{\gamma(\phi_- - T_n)} - e^{\gamma(T_n - \phi_+)})dt. \nonumber
\end{equation}
Note that
\begin{equation}
|\widetilde T _n - \widetilde T _n (\phi)| \leq 2\delta_1.\nonumber
\end{equation}
We then have
\begin{align}
&\P\biggl\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_1, \|T- \phi\|_{\cup I_n(\eta)} \leq \delta \biggr\} \notag \\
&\geq \P\biggl\{A_\epsilon, \|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_1, \|\widetilde T _n (\phi_{\pm}) - (\phi-\phi(a_n))\|_{ I_n(\eta)}\leq \delta - 2 \delta_1 - \delta_2, n \in J \biggr\} \notag \\
&\geq \P\biggl\{A_\epsilon, \|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_1\}
\P\{\|\widetilde T _n (\phi_{\pm}) - (\phi-\phi(a_n))\|_{ I_n(\eta)}\leq \delta - 2 \delta_1 - \delta_2, n \in J \biggr\} \notag \\
&\geq \P\biggl\{A_\epsilon, \|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta_3 \}
\P\{\|\widetilde T _n (\phi_{\pm}) - (\phi-\phi(a_n))\|_{ I_n(\eta)}\leq \delta_3, n \in J \} \nonumber .
\end{align}
We choose $\delta_3 < \delta - 2 \delta_1 - \delta_2$ and $\delta_3 < \delta_1$, independent of $\eta,$ so that we can set $\delta_3 \to 0 $ while retaining $\eta.$
For the upper bound, we have
\begin{align}
&\P\biggl\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta, \|T- \phi\|\leq \delta \biggr\} \notag \\
\label{3.3.2}
&\leq \P\biggl\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq 2\delta, \|T- \phi - (T- \phi)(a_n)\|_{\cup I_n(\zeta)}\leq 2\delta\biggr\}.
\end{align}
Similarly,
\begin{align}
&\P\biggl\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq 2\delta, \|\widetilde T _n(\phi_{\pm})- (\phi - \phi(a_n))\|_{\cup I_n(\zeta)}\leq 4\delta \biggr\} \notag \\
&= \P\biggl\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq 2\delta\} \P\{\|\widetilde T _n(\phi_{\pm})- (\phi - \phi(a_n))\|_{\cup I_n(\zeta)}\leq 4\delta \biggr\}.\nonumber
\end{align}
Note that for the upper and lower bounds, (\ref{3.3.1}) and (\ref{3.3.2}) are similar in the sense that $\eta$ and $\delta_1$ are independent and $\zeta$ and $2\delta$ are independent. Thus, in the following subsections, we can first set $\delta_1,2\delta \to 0$ and then set $\eta, \zeta \to 0$.
\subsubsection{Step 2: Exponential equivalence and contraction principle}
By Lemmas \ref{lem3.1} and \ref{lem3.2}, we can replace $T_{\pm}$ by $\widetilde T_{\pm}$ due to the exponential equivalence.
Thus, we can replace $T$ by some simpler $T_n$ on each $I_n \subset P^c.$
Recall the definition of $P$ in $(\ref{3.3.3})$: on $I_n,$ we have
$\phi_- - \eta > \phi$ or $\phi > \phi_+ + \eta.$
Therefore, guaranteed by the previous lemmas,
we construct the exponentially equivalent $\widetilde T_{n,0},$ which is
\begin{equation}
\widetilde T_{n,0} (t) - \widetilde T_n(a_n) = \frac 1 {\sqrt \gamma}W (a_n) + \sup_{[a_n,t]} (T_- - \frac 1 {\sqrt \gamma}W )_+; \nonumber
\end{equation}
if $\phi > \phi_+ + \eta,$ on $[a_n, b_n].$
\begin{equation}
\widetilde T_{n,0} (t) - \widetilde T_n(a_n) = \frac 1 {\sqrt \gamma}W (a_n) - \sup_{[a_n,t]} (- T_+ + \frac 1 {\sqrt \gamma}W )_+; \nonumber
\end{equation}
if $\phi < \phi_- - \eta,$ on $[a_n, b_n]$.
Suppose that we replace $T_-$ and $T_+$ by $\phi_-$ and $\phi_+$. We define $\widetilde T_n$ as
\begin{equation}
\widetilde T_n (t) - \widetilde T_n(a_n) = \frac 1 {\sqrt \gamma}W (a_n) + \sup_{[a_n,t]} (\phi_- - \frac 1 {\sqrt \gamma}W )_+; \nonumber
\end{equation}
if $\phi > \phi_+ + \eta,$ on $[a_n, b_n],$ and
\begin{equation}
\widetilde T_n (t) - \widetilde T_n(a_n) = \frac 1 {\sqrt \gamma}W (a_n) - \sup_{[a_n,t]} (- \phi_+ + \frac 1 {\sqrt \gamma}W )_+; \nonumber
\end{equation}
if $\phi < \phi_- - \eta,$ on $[a_n, b_n]$.
Next, we will show that
\begin{equation}
\label{norm_comparison}
\| \widetilde T_{n,0} - \widetilde T_n\|_{[a_n,b_n],\infty}
\leq \| \sup_{[a_n,t]} (- T_\pm + \frac 1 {\sqrt \gamma}W )_+ - \sup_{[a_n,t]} (- \phi_\pm + \frac 1 {\sqrt \gamma}W )_+ \|_{\infty}
\leq \sup_{[a_n,b_n]} \| T_\pm - \phi_\pm \|_{\infty} .\nonumber
\end{equation}
There exist $t_1, t_2 \leq t$ such that
\begin{align*}
&\sup_{[a_n,t]} (- T_\pm + \frac 1 {\sqrt \gamma}W )_+ = (- T_\pm + \frac 1 {\sqrt \gamma}W )_+(t_1), \\
&\sup_{[a_n,t]} (- \phi_\pm + \frac 1 {\sqrt \gamma}W )_+ = (- \phi_\pm + \frac 1 {\sqrt \gamma}W )_+(t_2) .
\end{align*}
By symmetry, we assume $t_1 \geq t_2$. If the first term is larger, i.e.,
$$
\sup_{[a_n,t]} (- T_\pm + \frac 1 {\sqrt \gamma}W )_+ \geq \sup_{[a_n,t]} (- \phi_\pm + \frac 1 {\sqrt \gamma}W )_+,
$$
then
\begin{align*}
\biggl\| \sup_{[a_n,t]} (- T_\pm + \frac 1 {\sqrt \gamma}W )_+ - \sup_{[a_n,t]} (- \phi_\pm + \frac 1 {\sqrt \gamma}W )_+ \biggr\|_{\infty}
&= T_\pm(t_1) - \phi_\pm(t_2)
\leq T_\pm(t_1) - \phi_\pm(t_1) \\
&\leq \sup_{[a_n,t]} \biggl\| T_\pm - \phi_\pm \biggr\|_{\infty}.
\end{align*}
If the second term is larger, i.e.,
$$
\sup_{[a_n,t]} (- T_\pm + \frac 1 {\sqrt \gamma}W )_+ \leq \sup_{[a_n,t]} (- \phi_\pm + \frac 1 {\sqrt \gamma}W )_+,
$$
then
\begin{align*}
\biggl\| \sup_{[a_n,t]} (- T_\pm + \frac 1 {\sqrt \gamma}W )_+ - \sup_{[a_n,t]} (- \phi_\pm + \frac 1 {\sqrt \gamma}W )_+ \biggr\|_{\infty}
&= \phi_\pm(t_2) - T_\pm(t_1) \\
&\leq \phi_\pm(t_2) - T_\pm(t_2) \\
&\leq \sup_{[a_n,t]} \biggl\| T_\pm - \phi_\pm \biggr\|_{\infty}.
\end{align*}
Thus we have proved (\ref{norm_comparison}).
Based on the above, for $\delta_1 < \delta_2 < \delta_3 \leq \delta,$ we have:
\begin{align*}
& \P\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta, \|\widetilde T- \phi\|_{\cup I_n}\leq \delta \} \notag \\
\geq &
\P\{\|T_0 - \phi_0\|\leq \delta_1, \|\widetilde T_{\pm} - \phi_{\pm}\|\leq \delta_2,\|\widetilde T_{n,0}- \phi\|\leq \delta_3, n \in J\} \notag \\
\geq &
\P\{\|T_0 - \phi_0\|\leq \delta_1, \|\widetilde T_{\pm} - \phi_{\pm}\|\leq \delta_2 - \delta_1,\|\widetilde T_n- \phi\|\leq \delta - \delta_2, n \in J\} \notag .
\end{align*}
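Let us recall the form of the contraction principle used below (a standard statement, included here only for the reader's convenience): if a family $\{X_\gamma\}$ satisfies a large deviation principle with good rate function $I_X$ and $\Phi$ is a continuous map, then $\{\Phi(X_\gamma)\}$ satisfies a large deviation principle with the good rate function
\begin{equation}
I_{\Phi}(y)=\inf\bigl\{I_X(x) \,:\, \Phi(x)=y\bigr\}. \nonumber
\end{equation}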
Therefore, we can apply the contraction principle for the mapping:
$$\{W_0, W_{\pm}, W\} \to \{T_0, \widetilde T_{\pm}, \widetilde T_n , n\in J\}$$
and obtain
\begin{align}
&\lim_{\delta' \to 0} \lim_{\gamma \to \infty} \frac 1 \gamma \log
\P\biggl\{\|T_{0, \pm} - \phi_{0,\pm}\|\leq \delta', \|T- \phi\|_{\cup I_n}\leq \delta' \biggr\} \notag \\
= &\lim_{\delta' \to 0} \lim_{\gamma \to \infty} \frac 1 \gamma \log
\P\biggl\{\|T_0 - \phi_0\|\leq \delta', \|\widetilde T_{\pm} - \phi_{\pm}\|\leq \delta',\|\widetilde T_n- \phi\|\leq \delta', n \in J\biggr\} \notag \\
= &- \frac 1 2 \biggl(\int_0^1 \dot{\phi_0}^2
+ \int_{\phi_+ > \phi_0} \dot{\phi_+}^2 + \int_{\phi_+ = \phi_0} \bigl(\dot{\phi_+}\bigr)_+^2
+ \int_{\phi_- > \phi_0} \dot{\phi_-}^2 + \int_{\phi_- = \phi_0} \bigl(\dot{\phi_-}\bigr)_-^2 \notag \\
&+ \int_{\{\phi_- < \phi < \phi_+\} \cap \cup I_n(\eta)} \dot{\phi}^2
+ \int_{\{\phi_- = \phi < \phi_+\} \cap \cup I_n(\eta)} \dot{\phi_-}^2
+ \int_{\{\phi_- < \phi = \phi_+\} \cap \cup I_n(\eta)} \dot{\phi_+}^2\biggr) \notag \\
= &- \frac 1 2 \bigl(I(\phi_0) + I(\phi_+) + I(\phi_-)+ I(\phi,\eta)\bigr).
\end{align}
Since the above equation holds for all $\eta >0,$ and $\cup I_n(\eta)$
increases to the complement of $\{\phi_- = \phi =\phi_+\}$ as $\eta \to 0,$
letting $\eta \to 0$
we obtain
\begin{equation}
\lim_{\eta \to 0}I(\phi,\eta) = I(\phi) =
\int_{\{\phi_- < \phi < \phi_+\} } \dot{\phi}^2
+ \int_{\{\phi_- = \phi < \phi_+\}} \dot{\phi_-}^2
+ \int_{\{\phi_- < \phi = \phi_+\}} \dot{\phi_+}^2.
\end{equation}
Therefore,
\begin{align}
\liminf_{\delta \to 0} \lim_{\gamma \to \infty} \frac 1 \gamma \log \P
\geq - \frac 1 2 \bigl(I(\phi_0) + I(\phi_+) + I(\phi_-)+ I(\phi)\bigr).
\end{align}
Thus, the lower bound is proved.
The upper bound is obtained similarly and coincides with the lower bound; therefore this expression is the rate function. It is a good rate function, since it is obtained from Schilder's theorem via the contraction principle. This proves our result for the four-particle case.
\subsubsection{General Case}
When $n = 3$, the proof in the four-particle case still applies. In general, we can prove the results recursively.
\section{Conclusion and Future Remarks} \label{sec_con}
In this work, we presented a large deviation principle for the Whittaker 2d growth model.
An interesting future problem is to adapt the analysis to the Whittaker 2d growth model with more general functions in place of the current exponential function.
In contrast to the Freidlin--Gartner formula, our rate function depends on the position of the sample paths. This is due to the interactions of the interlaced particles.
In Appendix~\ref{sec_example}, we elaborate on two special cases, namely the Crossing and Strict Interlacing of sample paths, where the proof can be simplified.
In Appendix~\ref{sec_existence}, we show the existence and uniqueness of the strong solution for technical completeness.
\section*{Acknowledgment}
The authors sincerely thank Amir Dembo for his advice on the topic. The authors are also grateful to Tianyi Zheng, Mykhaylo Shkolnikov, Vadim Gorin, and Ivan Corwin for helpful discussions.
\section{Introduction.} Let $\gg$ be a semisimple complex Lie algebra.
The universal enveloping algebra $U(\gg)$ bears a natural
filtration by the degree with respect to the generators. The
associated graded algebra $\gr U(\gg)$ is naturally isomorphic to
the symmetric algebra $S(\gg)=\mathbb{C}[\gg^*]$ by the
Poincar\'e--Birkhoff--Witt theorem. The commutator operation on
$U(\gg)$ defines a Poisson bracket on $S(\gg)$, which we call the
\emph{Poisson--Lie bracket}.
The \emph{argument shift method} gives a way to construct
Poisson-commutative subalgebras in $S(\gg)$. The method is as
follows. Let $ZS(\gg)=S(\gg)^{\gg}$ be the center of $S(\gg)$ with
respect to the Poisson bracket, and let $\mu\in\gg^*$ be a regular
semisimple element. Then the algebra $A_{\mu}\subset S(\gg)$
generated by the elements $\partial_{\mu}^n\Phi$, where $\Phi\in
ZS(\gg)$, (or, equivalently, generated by central elements of
$S(\gg)=\mathbb{C}[\gg^*]$ shifted by $t\mu$ for all $t\in\mathbb{C}$) is
Poisson-commutative and has maximal possible transcendence degree
equal to $\frac{1}{2}(\dim\gg+\rk\gg)$ (see \cite{MF}). Moreover,
the subalgebras $A_{\mu}$ are maximal Poisson-commutative
subalgebras in $S(\gg)$ (see \cite{T}). In \cite{V}, the
subalgebras $A_{\mu}\subset S(\gg)$ are named the {\em
Mischenko--Fomenko subalgebras}.
Let ${\mathfrak{h}}\subset\gg$ be a Cartan subalgebra of the Lie algebra
$\gg$. We denote by $\Delta$ and $\Delta_+$ the root system of
$\gg$ and the set of positive roots, respectively. Let
$\alpha_1,\dots,\alpha_l$ be the simple roots. Fix a
non-degenerate invariant scalar product $(\cdot,\cdot)$ on $\gg$
and choose from each root space $\gg_{\alpha},\ \alpha\in\Delta,$
a nonzero element $e_{\alpha}$ such that
$(e_{\alpha},e_{-\alpha})=1$. Set
$h_{\alpha}:=[e_{\alpha},e_{-\alpha}]$, then for any $h\in{\mathfrak{h}}$ we
have $(h_{\alpha},h)=\langle\alpha,h\rangle$. The elements
$e_{\alpha}\ (\alpha\in\Delta)$ together with
$h_{\alpha_1},\dots,h_{\alpha_l}\in{\mathfrak{h}}$ form a basis of $\gg$.
We identify $\gg$ with $\gg^*$ via the scalar product
$(\cdot,\cdot)$ and assume that $\mu$ is a regular semisimple
element of the fixed Cartan subalgebra ${\mathfrak{h}}\subset\gg=\gg^*$. The
linear and quadratic part of the Mischenko--Fomenko subalgebras
can be described as follows \cite{F}:
\begin{gather*}A_\mu\cap\gg={\mathfrak{h}},\\ A_\mu\cap S^2(\gg)=S^2({\mathfrak{h}})\oplus
Q_\mu,\ \text{where}\
Q_\mu=\{\sum\limits_{\alpha\in\Delta_+}\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle}e_{\alpha}e_{-\alpha}|h\in{\mathfrak{h}}\}.\end{gather*}
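As a quick illustration (our addition, for orientation only and not used in the sequel), consider $\gg={\mathfrak{s}}\ll_2$: there is a single positive root $\alpha$, so
\begin{equation*}
Q_\mu=\Bigl\{\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle}\,e_{\alpha}e_{-\alpha}\ \Big|\ h\in{\mathfrak{h}}\Bigr\}=\mathbb{C}\,e_{\alpha}e_{-\alpha},
\qquad
A_\mu\cap S^2(\gg)=S^2({\mathfrak{h}})\oplus\mathbb{C}\,e_{\alpha}e_{-\alpha}.
\end{equation*}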
The main result of the present paper is the following
\begin{Theorem}\label{main} For generic $\mu\in{\mathfrak{h}}$
(i.e. for $\mu$ in the complement to a certain countable union of
Zariski-closed subsets in ${\mathfrak{h}}$), the algebra $A_\mu$ is the
Poisson centralizer of the subspace $Q_\mu$ in $S(\gg)$.
\end{Theorem}
In \cite{ChT,NO,T2} the Mischenko--Fomenko subalgebras were lifted
(quantized) to the universal enveloping algebra, i.e. the family
of commutative subalgebras $\mathcal{A}_\mu\subset U(\gg)$ such that
$\gr\mathcal{A}_\mu=A_\mu$ was constructed for any classical Lie algebra
$\gg$ (i.e. ${\mathfrak{s}}\ll_r$, ${\mathfrak{s}}{\mathfrak{o}}_r$, ${\mathfrak{s}}{\mathfrak{p}}_{2r}$). In \cite{R} we do
this (by different methods) for any semisimple $\gg$.
We deduce the following assertion from Theorem~\ref{main}.
\begin{Theorem}\label{lift} For generic $\mu\in{\mathfrak{h}}$
there exists at most one commutative subalgebra $\mathcal{A}_\mu\subset
U(\gg)$ satisfying $\gr\mathcal{A}_\mu=A_\mu$.
\end{Theorem}
This means that there is a \emph{unique} quantization of
Mischenko--Fomenko subalgebras. In particular, the methods of
\cite{ChT,NO,T2} and \cite{R} give the same subalgebras for classical Lie
algebras. In the case $\gg=\gg\ll_n$, the assertion of
Theorem~\ref{lift} was proved by A.~Tarasov \cite{T3} for any
regular $\mu\in{\mathfrak{h}}$.
I thank E.~B.~Vinberg for attention to my work and useful
discussions.
\section{Proof of Theorem~\ref{main}} Note that the set $E_n\subset{\mathfrak{h}}$ of those $\mu\in{\mathfrak{h}}$
for which the Poisson centralizer of the space $Q_\mu$ in $S^n(\gg)$
has dimension greater than $\dim A_\mu\cap S^n(\gg)$ is
Zariski-closed in ${\mathfrak{h}}$ for any $n$. Therefore it suffices to
prove that $E_n\ne{\mathfrak{h}}$ for any $n$; thus, it suffices to prove the
existence of a single $\mu\in{\mathfrak{h}}$ satisfying the conditions of the Theorem.
\begin{Lemma}\label{irrational}
There exist $\mu,h\in{\mathfrak{h}}$ such that the numbers
$\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle}\
(\alpha\in\Delta_+)$ are linearly independent over $\mathbb{Q}$.
\end{Lemma}
\begin{proof}
Choose $\mu$ such that the values $\alpha_i(\mu)$ are
algebraically independent over $\mathbb{Q}$ for simple roots $\alpha_i$.
Since there are no proportional positive roots,
the numbers
$\frac{1}{\langle\alpha,\mu\rangle},\ \alpha\in\Delta_+,$ are
linearly independent over $\mathbb{Q}$. Choose $h$ such that the values
$\langle\alpha,h\rangle$ are nonzero rational numbers. Then the
numbers $\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle},\
\alpha\in\Delta_+,$ are linearly independent over $\mathbb{Q}$.
\end{proof}
Choose $\gamma\in\gg^*$ such that $\gamma(h_{\alpha_i})=1$ for any
simple root $\alpha_i$ and $\gamma(e_{\alpha})=0$ for
$\alpha\in\Delta$. We define a new Poisson bracket
$\{\cdot,\cdot\}_{\gamma}$ on $S(\gg)$ by setting
$\{x,y\}_{\gamma}=\gamma([x,y])$ for $x,y\in\gg$. This bracket is
compatible with the Poisson--Lie bracket, i.e. the linear
combination $t\{\cdot,\cdot\}+(1-t)\{\cdot,\cdot\}_{\gamma}$ is a
Poisson bracket on $S(\gg)$ (i.e. satisfies the Jacobi identity)
for any $t\in\mathbb{C}$. Moreover, for $t\ne0$, the corresponding
Poisson algebras are isomorphic. Namely, denote by $S(\gg)_t$ the
algebra $S(\gg)$ equipped with the Poisson bracket
$t\{\cdot,\cdot\}+(1-t)\{\cdot,\cdot\}_{\gamma}$; then for $t\ne0$
the Poisson algebra isomorphism $\psi_t:S(\gg)_1\to S(\gg)_t$ is
defined on the generators $x\in\gg$ as follows:
$\psi_t(x)=t^{-1}x+t^{-2}(1-t)\gamma(x)$. Clearly, we have
$\psi_t(Q_\mu)=Q_\mu$.
\begin{Lemma}\label{lem_tr_deg}
The transcendence degree of the Poisson centralizer of the
subspace $Q_\mu$ in $S(\gg)_0$ is not greater than
$\frac{1}{2}(\dim\gg+\rk\gg)$ for some $\mu\in{\mathfrak{h}}$.
\end{Lemma}
\begin{proof}
Choose $\mu$ and $h$ as in Lemma~\ref{irrational} and set
$q=\sum\limits_{\alpha\in\Delta_+}\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle}e_{\alpha}e_{-\alpha}\in
Q_\mu$. For any $f\in S(\gg)$, we have
$\{q,f\}_{\gamma}=\sum\limits_{\alpha\in\Delta_+}\gamma(h_{\alpha})\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle}(e_{-\alpha}\frac{\partial
f}{\partial e_{-\alpha}}-e_{\alpha}\frac{\partial f}{\partial
e_{\alpha}})$. In particular,
\begin{multline}\label{monom}
\{q,\prod\limits_{i=1}^{l}h_{\alpha_i}^{m_i}\prod\limits_{\alpha\in\Delta_+}
e_{\alpha}^{n_{\alpha}}e_{-\alpha}^{n_{-\alpha}}\}_{\gamma}=\\=
\sum\limits_{\alpha\in\Delta_+}\gamma(h_{\alpha})\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle}(n_{-\alpha}-n_{\alpha})\prod\limits_{i=1}^{l}h_{\alpha_i}^{m_i}\prod\limits_{\alpha\in\Delta_+}
e_{\alpha}^{n_{\alpha}}e_{-\alpha}^{n_{-\alpha}}.
\end{multline}
For any $\alpha=\sum\limits_{i=1}^{l}k_i\alpha_i\in\Delta_+$, we
have
$\gamma(h_{\alpha})=\sum\limits_{i=1}^{l}k_i\in\mathbb{Q}\backslash\{0\}$.
Since the numbers
$\frac{\langle\alpha,h\rangle}{\langle\alpha,\mu\rangle}$ are
linearly independent over $\mathbb{Q}$, the right-hand side
of~(\ref{monom}) is zero iff $n_{\alpha}-n_{-\alpha}=0$ for any
$\alpha\in\Delta_+$. This means that the Poisson centralizer of
$q$ in $S(\gg)_0$ is linearly generated by monomials having equal
degrees in $e_{\alpha}$ and $e_{-\alpha}$ for any
$\alpha\in\Delta_+$, i.e. the Poisson centralizer of $q$ in
$S(\gg)_0$ is generated (as a commutative algebra) by the elements
$h_{\alpha_i}\ (i=1,\dots, l)$ and $e_{\alpha}e_{-\alpha}\
(\alpha\in\Delta_+)$. Therefore, the transcendence degree of the
Poisson centralizer of $q$ in $S(\gg)_0$ is equal to
$\frac{1}{2}(\dim\gg+\rk\gg)$.
\end{proof}
By Lemma~\ref{lem_tr_deg}, the transcendence degree of the Poisson
centralizer of the subspace $Q_\mu$ in $S(\gg)_t$ is not greater
than $\frac{1}{2}(\dim\gg+\rk\gg)$ for generic $t$. Since the
Poisson algebras $S(\gg)_t$ are isomorphic to each other for
$t\ne0$, this upper bound on the transcendence degree holds for
any $t\in\mathbb{C}$. Let $Z\subset S(\gg)$ be the Poisson centralizer of
$Q_\mu$ in $S(\gg)_1$. Since $\tr\ \deg(Z)\le\tr\ \deg(A_\mu)$ and
$A_\mu\subset Z$, we see that each element of $Z$ is algebraic
over $A_\mu$. By Tarasov's results \cite{T}, the subalgebra
$A_\mu$ is algebraically closed in $S(\gg)_1$, hence, $Z=A_\mu$.
Theorem~\ref{main} is proved.
\section{Proof of Theorem~\ref{lift}} By \cite{V}, the subspace $A_\mu^{(2)}=\mathbb{C}+{\mathfrak{h}}+S^2({\mathfrak{h}})+Q_\mu\subset S(\gg)^{(2)}$
can be uniquely lifted to a commutative subspace
$\mathcal{A}_\mu^{(2)}\subset U(\gg)^{(2)}$ (this subspace is the image of
$A_\mu^{(2)}$ under the symmetrization map). By
Theorem~\ref{main}, any lifting $\mathcal{A}_\mu\subset U(\gg)$ of $A_\mu$
is the centralizer of the subspace $\mathcal{A}_\mu^{(2)}$ in $U(\gg)$ in
$U(\gg)$ for generic $\mu$. Theorem~\ref{lift} is proved.
\section{Main theorem}
The group $S^1$ of scalars $\lambda \in \C^1$, $|\lambda|=1$, acts by multiplications on the unit spheres in the complex vector spaces. Denote by $E_{S^1}(k,n)$ the space of continuous maps $f: S^{2k-1} \to S^{2n-1}$ equivariant under these actions, i.e. of maps such that $f(\lambda x) =\lambda f(x)$ for any $\lambda \in S^1 \subset \C^1$, $x \in S^{2k-1} \subset \C^k$. The Stiefel manifold $V_k(\C^n)$ of orthonormal $k$-frames in $\C^n$ can be identified with the space of isometric complex linear maps $\C^k \to \C^n$ of Hermitian spaces, and hence be considered as a subset of $E_{S^1}(k,n)$. In a similar way, the quaternionic Stiefel manifold $V_k(\HH^n)$ can be considered as a subset of the space $E_{S^3}(k,n)$ of continuous maps $S^{4k-1} \to S^{4n-1}$ equivariant under the standard action (by left multiplications) of the group $S^3$ of quaternions of unit norm.
\begin{theorem}
\label{mainth}
For any natural $k \leq n$, the maps \begin{equation} \label{mt1} H^*( E_{S^1}(k,n), \Q) \to H^*(V_k(\C^n), \Q) \ , \end{equation} \begin{equation} \label{mtm2} H^*( E_{S^3}(k,n), \Q) \to H^*(V_k(\HH^n), \Q) \end{equation} induced by inclusions are isomorphisms.
\end{theorem}
\section{Preliminary remarks}
Recall (see \cite{borel}) that there are ring isomorphisms
\begin{equation}
\label{borf2}
H^*(V_k(\C^n),\Z) \simeq H^*(S^{2n-1} \times S^{2(n-1)-1} \times \dots \times S^{2(n-k+1)-1}, \Z) \ ,
\end{equation}
\begin{equation}
H^*(V_k(\HH^n),\Z) \simeq H^*(S^{4n-1} \times S^{4(n-1)-1} \times \dots \times S^{4(n-k+1)-1}, \Z).
\end{equation}
We replace the space $E_{S^1}(k,n)$ by its subspace $\tilde E_{S^1}(k,n)$ of {\em $C^\infty$-smooth} equivariant maps (which is homology equivalent to it by a kind of Weierstrass approximation theorem, cf. \cite{JTA}). Using the standard techniques of finite-dimensional approximations (see e.g. \cite{book}, \cite{JTA}) we will work with these spaces of maps as with manifolds of very large but finite dimensions.
A {\em singular $m$-dimensional simplex} in $\tilde E_{S^1}(k,n)$ is a continuous map $F: S^{2k-1} \times \Delta^m \to S^{2n-1}$, where $\Delta^m$ is the standard $m$-dimensional simplex and all maps $F(\cdot, \varkappa): S^{2k-1} \to S^{2n-1},$ $\varkappa \in \Delta^m$, are of class $\tilde E_{S^1}(k,n)$. Again by the approximation theorem, the chain complex of these simplices has the same homology as its subcomplex of $C^\infty$-smooth maps $F$.
\section{Surjectivity of the map (\ref{mt1})}
Fix an orthonormal frame $\{e_1, \dots, e_k\}$ in $\C^k$ and the corresponding complete flag $\C^1 \subset \dots \subset \C^{k-1} \subset \C^k$ where any $\C^j$ is spanned by $e_1, \dots, e_j$. Consider $\C^k$ as a subspace in $\C^n$, so that the source $S^{2k-1} \subset \C^k$ of our equivariant maps is a subset of their target $S^{2n-1} \subset \C^n$. For any $j=1, 2, \dots, k,$ denote by $\Xi_j$ the subset in $\tilde E_{S^1}(k,n)$ consisting of all $S^1$-equivariant smooth maps $S^{2k-1} \to S^{2n-1}$ which are identical on some big circle in $S^{2k-1}$ cut by a complex line through the origin in $\C^j \subset \C^k$.
\begin{definition}
\label{defreg}
\rm
An $S^1$-equivariant smooth map $f \in \Xi_j$ is {\em generic} if it is identical only on finitely many big circles in $S^{2k-1} \cap \C^j$, and for any such circle, any point $\divideontimes$ of this circle and any $(2k-2)$-dimensional transversal slice of this circle in $S^{2k-1}$, the differential of the map \ $f - \mbox{\rm Id}$ \ from this slice to $\C^n$ is injective (i.e. has rank $2k-2$) at this point $\divideontimes$. (A map $f$ can be generic in $\Xi_j$ and not generic in some $\Xi_l, l>j$.)
\end{definition}
\begin{lemma} 1. For any $j=1, \dots, k$, the set of generic maps $f \in \Xi_j$ is a smooth immersed submanifold of codimension $2n-2j+1$ in the space $\tilde E_{S^1}(k,n)$ in the following exact sense: for any natural $m$ and open domain $U \subset \R^m$ there is a residual subset in the space of $m$-parametric smooth families $F: S^{2k-1} \times U \to S^{2n-1}$ of maps of class $\tilde E_{S^1}(k,n)$, such that for any family from this subset the points $\varkappa \in U$ such that $F(\cdot, \varkappa) \in \Xi_j$ form a smooth immersed submanifold of codimension $2n-2j+1$ in $U$.
2. For any generic map $f \in \Xi_j$, the local irreducible branches of this immersed manifold at the point $f$ in $\tilde E_{S^1}(k,n)$ are in a natural one-to-one correspondence with big circles in $\C^j \cap S^{2k-1}$ on which this map is identical.
3. All these local branches have
a natural coorientation $($i.e. orientation of the normal bundle$)$ in $\tilde E_{S^1}(k,n)$.
4. The set of non-generic maps $f \in \Xi_j$ has codimension at least $(2n-2j+1) +2+2(n-k)$ in $\tilde E_{S^1}(k,n)$.
\end{lemma}
\noindent
{\it Proof.} Items 1 and 4 follow from the Thom transversality theorem. The local branch of $\Xi_j$ corresponding to a big circle fixed by $f$, which is assumed in item 2, consists of maps which are identical on certain big circles neighboring this one.
The coorientation assumed in item 3 is induced from the standard orientations of the target space $S^{2n-1}$ and the space $\CP^{j-1}$ of big circles in $\C^j \cap S^{2k-1}$. Namely, let $f$ be a generic point of $\Xi_j$ and $\divideontimes$ a point of $\C^j \cap S^{2k-1}$ such that $f(\divideontimes)=\divideontimes$. Denote by $\Xi(\divideontimes)$ the local branch of the variety $\Xi_j \subset \tilde E_{S^1}(k,n) $ at $f $ which corresponds to the big circle $\{e^{it}\divideontimes\}$ containing $\divideontimes$. Choose an arbitrary transversal slice $L$ of this big circle in $\C^j \cap S^{2k-1}$. The canonical orientation of $L$ is induced from that of $\CP^{j-1}$ by the Hopf projection $\C^j \cap S^{2k-1} \to \CP^{j-1}$. By Definition \ref{defreg} the differential of the map \ $f-\mbox{\rm Id}: L \to \C^n$ \ is injective at the point $\divideontimes$; obviously its image lies in the tangent space of $S^{2n-1}$ at this point.
The normal space in $\tilde E_{S^1}(k,n) $ of the local branch $\Xi(\divideontimes)$ at the point $f $ can be identified with the quotient space of $T_{\divideontimes}(S^{2n-1})$ by the image of this differential. Indeed, let us realize some tangent vector of the space $\tilde E_{S^1}(k,n) $ at $f$ by a one-parameter family of maps $f_\tau$, $\tau \in (-\varepsilon, \varepsilon)$, $f_0\equiv f$. This vector is tangent to the branch $\Xi(\divideontimes)$ if and only if there is a family of points $\divideontimes(\tau) \in L$, $\divideontimes(0)=\divideontimes$, such that $|f_\tau(\divideontimes(\tau)) - \divideontimes(\tau)| =_{\tau \to 0} o(|\tau|)$ or, which is the same, the vector $\frac{d f_\tau(\divideontimes)}{d \tau}(0) \in T_\divideontimes(S^{2n-1})$ belongs to the image of the tangent space $T_\divideontimes(L)$ under the differential of the map\ $f - \mbox{\rm Id}$.
A canonical orientation of the latter image is defined by that of $L$, and the desired orientation of the quotient space (i.e. of the normal space of $\Xi(\divideontimes)$ at $f$) is specified as the factor of the standard orientation of $S^{2n-1}$ by this one. \hfill $\Box$
\begin{corollary}
\label{mlem}
The intersection index with the set $\Xi_j$ defines an element $[\Xi_j]$ of the group $H^{2n-2j+1}(\tilde E_{S^1}(k,n))$ for any $j=1, \dots, k$. \hfill $\Box$
\end{corollary}
\begin{proposition}
\label{mlem2}
The restrictions of $k$ cohomology classes $[\Xi_j] \in H^{2(n-j)+1}(\tilde E_{S^1}(k,n))$, $j=1, \dots, k$, to the subspace $V_k(\C^n) \subset \tilde E_{S^1}(k,n)$ generate multiplicatively the ring $H^*(V_k(\C^n), \Z)$.
\end{proposition}
\noindent
{\it Proof.} For any $j=0, 1, \dots, k$ define the subset $A_{k-j} \subset V_k(\C^n)$ as the set of isometric linear maps $\C^k \to \C^n$ sending the first $j$ vectors $e_1, \dots, e_j$ of the basic frame to $ie_1$, \dots, $ie_{j}$ respectively. Obviously, any $A_j$ is diffeomorphic to $V_j(\C^{n-k+j}),$ and we have inclusions $\{i \cdot \mbox{\rm Id}(\C^k)\} \equiv A_0 \subset A_1 \subset A_2 \subset \dots \subset A_{k-1} \subset A_k \equiv V_k(\C^n), $ where any $A_{k-j}$ is embedded into $A_{k-j+1}$ as a fiber of the fiber bundle $A_{k-j+1} \to S^{2(n-j)+1}$ taking a map $\C^k \to \C^n$ into the image of the basic vector $e_{j}$. The manifolds $ A_{k-j+1}$ and $\Xi_{j} $ meet transversally, and their intersection is equal to a different fiber of the same fiber bundle; hence the restriction of the cohomology class $[\Xi_{j}]$ to $A_{k-j}$ (respectively, to $A_{k-j+1}$) is equal to zero (respectively, coincides with the class induced from the basic cohomology class of $S^{2(n-j)+1}$ by the projection map of this fiber bundle). Therefore our proposition follows by induction over the spectral sequences of these fiber bundles (and the multiplicativity of these spectral sequences). \hfill $\Box$
\begin{corollary}
\label{mcor}
The map $($\ref{mt1}$)$ is epimorphic. \hfill $\Box$
\end{corollary}
\section{Injectivity of (\ref{mt1})}
For any topological space $X$ and natural number $s$, denote by $B(X,s)$ the $s$-configuration space of $X$ (i.e., the space of $s$-element subsets of $X$). Denote by $\pm \Z$ the {\em sign local system} of groups on $B(X,s)$: it is locally isomorphic to $\Z$, but loops in $B(X,s)$ act on its fibers by multiplications by $1$ or $-1$ depending on the parity of the permutations of $s$ points defined by these loops. $\bar H_*$ denotes the Borel--Moore homology group (i.e. the homology group of the complex of locally finite singular chains).
\begin{proposition}
There is a spectral sequence $E_r^{p,q}$ converging to the group $\tilde H^*( E_{S^1}(k,n),\Z)$, whose term $E_1^{p,q}$ is trivial if $p\geq 0$, and is defined by the formula
\begin{equation}
\label{mfo}
E_1^{p,q} \simeq \bar H_{-2pn-q}(B(\CP^{k-1},-p), \pm \Z)
\end{equation}
for $p<0$.
\end{proposition}
The construction of this spectral sequence essentially repeats that of the spectral sequence assumed in Theorem 3 of \cite{JTA}, see also \cite{book}. Namely, we consider the vector space of all $S^1$-equivariant smooth maps $S^{2k-1} \to \C^n$, define the discriminant set in it as the set of maps whose images contain the origin, and calculate its ``homology of finite codimension'' (which is Alexander dual to the usual cohomology group of the space of equivariant maps to $\C^n \setminus 0 \sim S^{2n-1}$) by using a natural simplicial resolution. \hfill $\Box$
\begin{proposition}[see \cite{how}] For any $j \geq 0$ and $s \geq 1$ there is a group isomorphism
\begin{equation}
\label{cpnk} \bar H_j(B(\CP^{k-1},s), \pm \Z \otimes \Q) \simeq H_{j-s(s-1)}(G_s(\C^{k}),\Q),
\end{equation}
where
$G_s(\C^{k})$ is the Grassmann manifold of $s$-dimensional complex subspaces in
$\C^{k}$.
In particular, the left-hand group $($\ref{cpnk}$)$ is trivial if $s>k$. \hfill $\Box$
\end{proposition}
Therefore the total dimension of the group $\tilde H^*( E_{S^1}(k,n),\Q)$ does not exceed $$\sum_{s=1}^k \dim H^*(G_s(\C^k), \Q) = \sum_{s=1}^k\binom{k}{s} = 2^k-1, $$ which by (\ref{borf2}) is equal to the total dimension of the reduced homology group of the space $V_k(\C^n)$. So (\ref{mt1}) is an epimorphism (by Corollary \ref{mcor}) whose source has total dimension not exceeding that of the target; hence it is an isomorphism.
\medskip
The bijectivity of the map (\ref{mtm2}) can be proved in exactly the same way. In particular, the first term of
the spectral sequence calculating the group $\tilde H^*( E_{S^3}(k,n),\Z)$ and analogous to (\ref{mfo}) is given by \begin{equation} E_1^{p,q} \simeq \bar H_{-4pn-q}(B(\HH P^{k-1},-p), \pm \Z) \end{equation}
for $p< 0$ and is trivial for $p \geq 0$;
the quaternionic version of the equality (\ref{cpnk}) is
\begin{equation}
\bar H_j(B(\HH P^{k-1},s), \pm \Z \otimes \Q) \simeq H_{j-2s(s-1)}(G_s(\HH^{k}),\Q) \ .
\end{equation}
\section{Introduction}
\begin{figure}
\includegraphics[width=0.5\columnwidth]{figures/phoning_008}\includegraphics[width=0.5\columnwidth]{figures/blowing_bubbles_082_1}
\protect\caption{\label{fig:Actions-with-small}Actions where objects are barely visible.
Accurately localizing the action-object is important for succeeding
in classifying such images.}
\end{figure}
Recognizing actions in still images has been an active field of research
in recent years, using multiple benchmarks \cite{delaitre2010recognizing,pascal-voc-2012,sadeghi2011recognition,ICCV11_0744}.
It is a challenging problem since it combines different aspects of
recognition, including object recognition, human pose estimation,
and human-object interaction. Action recognition schemes can be divided
into transitive vs. intransitive. Our focus is on the recognition
of transitive actions from still images. Such actions are typically
based on an interaction between a body part and an action-object (the
object involved in the action). In a cognitive study of 101 frequent
actions and 15 body parts, Maouene, Hidaka and Smith \cite{maouene2008body}
have shown that hands and faces (in particular mouth and eyes) were
by far the most frequent body parts involved in actions. It is interesting
to note that in the human brain, actions related to hands and faces
appear to play a special role: neurons have been identified in both
physiological and fMRI studies \cite{brozzoli2009peripersonal,ladavas1998visual}
which appear to represent 'hand space' and 'face space', the regions
of space surrounding the hands and face respectively. In this work
we focus on, but are not limited to, such actions, in particular interactions
between the mouth and small objects, including drinking, smoking,
brushing teeth and others. We refer to them as face-related actions.
For such actions, the ability to detect the action-object involved
has a crucial effect on the attainable classification accuracy. A
simple demonstration is summarized in Table \ref{tab:Classification-performance-impro}
(Section \ref{sec:The-Importance-of} below). The action objects involved in
such classes are often barely visible, making them hard to detect
(see Figure \ref{fig:Actions-with-small}). Furthermore, the spatial
configuration of the action object with respect to the relevant body
parts must be considered to avoid the chance of misclassification
due to the detection of an unrelated action object in the image. In
other words, the right action-object should be detected in a particular
surrounding context.
To achieve these goals, our algorithm includes a method for the detection
of the relevant action object and related body parts as means to aid
the classification process. We seek the relevant body parts and the
action object \textit{jointly} using a cascade of two fully convolutional
networks: the first operates at the entire image level, highlighting
regions of high probability for the location of the action object.
The second network, guided by the result of the first, refines the
probability estimates within these regions. The results of both networks
are combined to generate a set of action-object candidates which are
pruned using contextual features. In the final stage, each candidate,
together with features extracted from the entire image and from the
two networks, is used to score the image as depicting a target action
class. We validate our approach on a subset of the Stanford-40 Actions
\cite{ICCV11_0744} dataset, achieving a 35\% relative improvement over
a baseline classifier.
\subsection{Related Work\label{sub:Related-Work}}
Recent work on action recognition in still images can be categorized
into several approaches. One group of methods attempts to produce accurate
human pose estimation and then utilizes it to extract features in relevant
locations, such as near the head or hands, or below the torso \cite{ICCV11_0744,desai2012detecting}.
Others attempt to find relevant image regions in a semi-supervised
manner, given the image label. Prest et al.\ \cite{eth_biwi_00828}
find candidate regions for action-objects and optimize a cost function
which seeks agreement between the appearance of the action objects
for each class as well as their location relative to the person. In
\cite{sener2012recognizing} the objectness \cite{alexe2010object}
measure is applied to detect many candidate regions in each image,
after which multiple-instance-learning is utilized to give more weight
to the informative ones. Their method does not explicitly find regions
containing action objects, but any region which is informative with
respect to the target action. In \cite{yao2011combining} a random
forest is trained by choosing the most discriminative rectangular
image region (from a large set of randomly generated candidates) at
each node, where the images are aligned so the face is in a known
location. This has the advantage of spatially interpretable results.
Some methods seek the action objects more explicitly: \cite{ICCV11_0744}
apply object detectors from Object-Bank \cite{li2010object} and use
their output among other cues to classify the images. In \cite{DBLP:conf/icmcs/LiangWHL14},
outputs of stronger object detectors are combined with a pose estimation
of the upper body in a neural net-setting and show improved results,
where the object detectors are the main cause for the improvement
in performance. Very recently \cite{gkioxari2015contextual} introduced
R{*}CNN which is a semi-supervised approach to find the most informative
image region constrained to overlap with the person bounding box.
Also relevant to our work are some recent attention-based methods:
\cite{xu2015show} uses a recurrent neural network which restricts
its attention to a relevant subset of features from the image at a
given timestep. We do not use an RNN, but two different networks.
Instead of restricting the view of the second network to a subset
of the original features, we allow the network to extract new ones,
allowing it to focus on more subtle details.
Unlike \cite{eth_biwi_00828,sener2012recognizing,yao2011combining,gkioxari2015contextual},
our approach is fully supervised. Unlike \cite{DBLP:conf/icmcs/LiangWHL14},
we seek the relevant body parts and action objects jointly instead
of using separate stages. In contrast with \cite{gkioxari2015contextual},
we employ a coarse-to-fine approach which improves on the ability
of a single stage to localize the relevant image region accurately.
The rest of the paper is organized as follows: in the next section,
we demonstrate our claim that exact detection of the action object
can be crucial for the task of classification. In Section \ref{sec:Approach},
we describe our approach in detail. In Section \ref{sec:Experiments}
we report on experiments used to validate our approach on a subset
of the Stanford-40 Actions \cite{ICCV11_0744} dataset. Section \ref{sec:Conclusions-=000026-Future}
contains a discussion \& concluding remarks.
\section{The Importance of Action objects in Action Classification\label{sec:The-Importance-of}}
\begin{figure}
\includegraphics[width=1\columnwidth]{figures/s40}\protect\caption{\label{fig:Distribution-of-baseline}Distribution of baseline performance
for various action classes. We note the typical size of the action-objects
for the transitive actions (those involving objects), and classify
them as small(\textcolor{red}{red}), large(\textcolor{green}{green}),
and leave the rest of the classes undecided (orange). Mean performance
is significantly worse on actions involving small objects (37.5 \%)
vs. large objects (87 \%).}
\end{figure}
In this section, we demonstrate our claim that exact localization
of action objects plays a critical role in classifying actions, in
particular when they are hard to detect, \eg, due to their size or
occlusion. We begin with a simple baseline experiment on the Stanford-40
Actions \cite{ICCV11_0744} dataset. It contains 9532 images of 40
action classes split into 4000 for training and the rest for testing.
We train a linear SVM to discriminate between each of the 40 action
classes in a one-vs-all manner, using features from the penultimate
layer of vgg-16 \cite{Simonyan2014}, \ie, fc6, and report performance
in terms of average precision. Figure \ref{fig:Distribution-of-baseline}
shows the resulting per-class performance. We have marked classes
where the action object tends to be very small or very large relative
to the image. The difference in mean performance is striking:
for the small objects the mean AP is \textbf{37.5\%}, while for the
large ones it is \textbf{87\%}. The mean performance on the entire dataset
is 67\%. Hence, we further restrict our analysis to a subset of Stanford-40
Actions \cite{ICCV11_0744} containing 5 classes: drinking, smoking,
blowing bubbles, brushing teeth and phoning - where the action objects
involved tend to be quite small. We name this the FRA dataset, standing
for Face Related Actions. We augment the annotation of the images
in FRA by adding the exact delineation of the action objects and faces
and the bounding boxes of the hands. Next, we test the ability of
a classifier to tell apart these 5 action classes when it is given
various cues. We do this by extracting features for each image from
the following regions:
\begin{itemize}
\item Entire image (\textbf{G}lobal appearance)
\item Extended face bounding box scaled to 2x the original size (\textbf{F}ace
appearance)
\item Bounding box of action object (\textbf{O}bject appearance)
\item Exact region of action object ( \textbf{MO} : \textbf{M}asked \textbf{O}bject
appearance)
\end{itemize}
We extract fc6 features from each of these regions. For MO, the masked
object region, the image is cropped around the region bounding box
and all pixels not belonging to the object mask are set to the value
of the mean image learned by vgg-16. A linear SVM is trained on the
feature representation of each feature alone and on concatenations
of various combinations. We allow access to both face bounding boxes
and object bounding boxes/masks at test time, as if having access
to an Oracle which reveals them. The performance of the classifier
with different feature combinations is summarized in Table \ref{tab:Classification-performance-impro}.
Clearly, while the global (G) and face features (F) hold some information,
they are significantly outperformed when the classifier is given
the bounding box of the action object (O). This is further enhanced
by masking the surroundings of the action object (MO), which performs
best; the masking holds some information about the shape of the object
in addition to its appearance. Combining all features provides a further
boost, owing to the combination of local and contextual cues.
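For concreteness, the following minimal sketch (our own illustration rather than the exact implementation; \texttt{extract\_fc6} is a placeholder for any routine returning fc6 activations, and the mean-pixel value is assumed to be available) computes the MO representation described above:
\begin{verbatim}
import numpy as np

def masked_object_features(image, mask, mean_pixel, extract_fc6):
    # image: HxWx3 array; mask: HxW boolean array delineating the object;
    # mean_pixel: length-3 array (the CNN's mean image value);
    # extract_fc6: callable mapping an image crop to an fc6 feature vector.
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1].astype(np.float32).copy()
    crop_mask = mask[y0:y1, x0:x1]
    # pixels outside the object mask are replaced by the mean image value
    crop[~crop_mask] = mean_pixel
    return extract_fc6(crop)
\end{verbatim}
The O representation corresponds to the same crop without the masking step.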
\noindent In the next section, we show our approach to detecting the
action object automatically.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Region & \multirow{1}{*}{Mean AP} & \includegraphics[width=0.08\columnwidth]{figures/icons/drink} & \includegraphics[width=0.08\columnwidth]{figures/icons/cigarette} & \includegraphics[width=0.08\columnwidth]{figures/icons/bubbles} & \includegraphics[width=0.08\columnwidth]{figures/icons/toothbrush} & \includegraphics[width=0.08\columnwidth]{figures/icons/phone}\tabularnewline
\hline
\hline
G & .58 & .54 & .58 & .74 & .57 & .45\tabularnewline
\hline
Face & .69 & .61 & .57 & .82 & .72 & .76\tabularnewline
\hline
O & .76 & .87 & .80 & .79 & .55 & .81\tabularnewline
\hline
MO & .82 & .91 & .78 & .85 & .66 & .88\tabularnewline
\hline
G+Face+O & .88 & .90 & .92 & .91 & .80 & .90\tabularnewline
\hline
All & .91 & .94 & .93 & .93 & .85 & .90\tabularnewline
\hline
\end{tabular}
\protect\caption{\label{tab:Classification-performance-impro}Classification performance
improves drastically as classifier is given access to the action objects'
exact locations : \textbf{G}lobal, Face, action \textbf{O}bject bounding
box, \textbf{MO}: \textbf{M}asked Object. This demonstrates the need
for accurately detecting the action-object.}
\end{table}
\section{Approach\label{sec:Approach}}
\begin{figure*}
\includegraphics[width=1\textwidth]{figures/flow}
\protect\caption{\label{fig:Flow-of-proposed}Flow of proposed method. A fully convolutional
network is applied to predict body parts and action objects. This
guides a secondary network to refine the predictions on a few selected
image regions. Using contextual features, the regions are ranked \&
pruned. The remaining candidates are scored using both global and
local features to produce a final classification, along with a visualization
of the detected action object. }
\end{figure*}
Our goal is to classify the action being performed in an image. To
that end, we perform the following stages:
\begin{itemize}
\item Produce a probability estimate for the locations of
hands, head and action objects in the entire image
\item Refine the probability estimates at a set of selected locations
\item Using the above probabilities, generate a set of candidate action
objects
\item Rank and prune the set of candidate objects by using contextual features
\item Extract features from the probabilistic estimates and appearances
of the image and predicted action object locations to produce a final
classification.
\end{itemize}
We now elaborate on each of these stages.
\subsection{Coarse to Fine Semantic Segmentation \label{sub:Coarse-to-Fine_3.1}}
We begin by producing a semantic segmentation of the entire image,
then refining it: we train a fully convolutional network as in \cite{long2014fully}
to predict a pixel-wise segmentation of the image. The network is
trained to predict \textbf{jointly} the location of faces, hands,
and $k$ different object categories relating to the different action
classes. Using the framework of \cite{vedaldi15matconvnet}, we do
so by fine-tuning the vgg-16 network of \cite{Simonyan2014}. We denote
the set of learned labels as $\mathcal{L}$:
\begin{align}
\mbox{\ensuremath{\mathcal{L}}} & =\{bg,face,hand\}\cup Obj
\end{align}
where $Obj=\{obj_{1}\text{\dots}obj_{k}\}$ is the set of objects relating
to $k$ action classes. We name this first network $Net_{coarse}$.
For a network $N$ and image $I$, we denote by $P^{N}$ the probability
maps resulting from applying $N$ to $I$:
\begin{equation}
N(I)=P^{N}(I,x,y,c),c\in\mathcal{L}
\end{equation}
\noindent where the superscript $\cdot^{N}$ is used to indicate the
network which operated on the image. In other words, $N$ assigns
to each pixel in $I$ a probability for each of the classes. For brevity,
we might drop the parameters where it is appropriate, writing only
$P^{N}$. For clarity, we may also write $P^{coarse}$ instead of
$P^{\ensuremath{Net_{coarse}}}$.
The predictions of this $Net_{coarse}$, albeit quite informative,
can often miss the action object or misclassify it as belonging to
another category (see Section \ref{sec:Experiments}). To improve
the localization and accuracy of the estimate it produces, we train
a secondary network; we use the same base network (vgg-16) but apply it at
a different extent, with the purpose of refining the predictions of the
first network. We name the second network $Net_{fine}$. It is trained
on sub-windows of the original images which are upscaled during the
training and testing process. Hence it is trained on data of a different
scale than $Net_{coarse}$. Moreover, since it operates on enlarged
sub-windows of each image, its output is eventually transformed to a
finer scale in the original image's coordinate system. We note that
in our experiments, attempting to train the 32-pixel stride version
of Long et al.\ \cite{long2014fully} worked significantly less well
than the full 8-pixel stride version, which uses intermediate features
from the net's hierarchy to refine the final segmentation. Both networks
are trained using the full 8-pixel stride method.
For each training image $I_{j},\ j\in\{1,\ldots,N\}$, we define $B_{j}$
to be a sub-window of the entire image whose extent provides a good
tradeoff: the region to be inspected should be small (so that fine
details are captured), yet contain the desired objects or
body parts. The details of how this bounding box is selected depend
on the nature of the dataset and are described in Section \ref{sub:Training}.
The training set for $Net_{fine}$ is obtained by cropping each training
image $I_{j}$ around $B_{j}$ and cropping the ground-truth label-map
accordingly.
$Net_{fine}$ is used to refine the output of $Net_{coarse}$ as follows:
after applying $Net_{coarse}$ to an image $I$ we seek the $m$ top
local maxima of $P^{coarse}$ which are at least $p$ pixels apart.
A window is cropped around each local maximum with a fixed size relative
to the size of the original image and $Net_{fine}$ is applied to
that sub-window after proper resizing. The probabilities predicted
for overlapping sub-windows are averaged. We chose $m=5$, $p=20$
by validating on a small subset of the training set. Denote the resulting
refined probability map by $P{}^{fine}$. See Figure \ref{fig:Coarse_And_Fine}
for an example of the resultant coarse and fine probability maps and
the resulting predictions.
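The following sketch summarizes this refinement step under our own simplifying assumptions (\texttt{net\_coarse} and \texttt{net\_fine} are placeholders returning per-pixel probability maps; the window size and channel layout are assumptions rather than the exact implementation):
\begin{verbatim}
import numpy as np
from scipy.ndimage import maximum_filter

def refine_predictions(image, net_coarse, net_fine, m=5, p=20, win_frac=0.25):
    # Coarse pass on the whole image, fine pass on windows around the
    # top-m local maxima that are at least p pixels apart.
    P_coarse = net_coarse(image)               # H x W x |L|
    H, W, _ = P_coarse.shape
    score = P_coarse[..., 3:].max(axis=-1)     # object channels (assumed layout)
    peaks = (score == maximum_filter(score, size=2 * p + 1)) & (score > 0)
    ys, xs = np.where(peaks)
    order = np.argsort(score[ys, xs])[::-1][:m]
    P_sum = np.zeros_like(P_coarse)
    counts = np.zeros((H, W, 1))
    half = int(win_frac * max(H, W)) // 2
    for y, x in zip(ys[order], xs[order]):
        y0, y1 = max(0, y - half), min(H, y + half)
        x0, x1 = max(0, x - half), min(W, x + half)
        # net_fine is assumed to handle the upscaling of the sub-window
        P_sum[y0:y1, x0:x1] += net_fine(image[y0:y1, x0:x1])
        counts[y0:y1, x0:x1] += 1
    P_fine = np.where(counts > 0, P_sum / np.maximum(counts, 1), P_coarse)
    return P_coarse, P_fine
\end{verbatim}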
\begin{center}
\begin{figure*}
\begin{centering}
\subfloat[Coarse prediction]{\begin{centering}
\includegraphics[height=0.22\textwidth]{figures/blowing_bubbles_027_prob_coarse. compressed}\includegraphics[bb=0bp 0bp 325bp 273bp,height=0.2\textwidth]{figures/blowing_bubbles_027_pred_coarse_and_fine}
\par\end{centering}
}\subfloat[Fine prediction]{\includegraphics[height=0.22\textwidth]{figures/blowing_bubbles_027_prob_fine. compressed}
}
\par\end{centering}
\protect\caption{\label{fig:Coarse_And_Fine}Coarse to fine predictions. (a) $Net_{coarse}$
is applied to the entire image, producing (left) a pixelwise per-class
probability map and (center-top) prediction; (b) guided by local maxima
of $Net_{coarse}$, \textbf{$Net_{fine}$} is applied to a selected
set of subwindows. A missed bubble wand is detected by the refinement
process, as well as other fine details. Probability of false classes
(\eg,brushing) are suppressed, see differences in predicted probability
maps. Best viewed in color online.}
\end{figure*}
\par\end{center}
\subsection{Contextual Features\label{sub:Contextual-Features}}
We now describe how we use the outputs for both networks to generate
candidate regions for action objects. Identifying the region corresponding
to the object depends on its appearance as well as the context,
e.g. of the face and hand. The outputs of the coarse and fine networks
often produce good candidate regions, along with many false ones:
typically tens of regions per image. In addition, the location of
the candidates may be correct but their predicted identity (\ie, the
maximal probability) is wrong. Therefore we produce candidate regions
from the resultant probability maps in two complementary ways: Define
\begin{equation}
Pred^{N}(x,y)=argmax_{c\in Obj}P^{N}(x,y,c)
\end{equation}
$Pred^{N}(x,y)$ is the pixelwise prediction of net $N\in\{Net_{coarse},Net_{fine}\}$.
We denote by $R_{pred}$ the set of connected components in $Pred^{N}$.
In addition, let $M_{p}$ be the set of $l$ local maxima, computed
separately for each channel $c\in Obj$ of $P^{N}$. For each channel
$c\in Obj$ we apply the adaptive thresholding technique of Otsu \cite{otsu1975threshold}.
We denote by $R_{m}$ the union of the regions obtained by the thresholding
process. We remove from $R_{m}$ regions which do not contain any
element in $M_{p}.$ Finally, we denote
\begin{equation}
R=R_{pred}\cup R_{m}
\end{equation}
\noindent as the set of candidate regions for the image. $R_{m}$
and $R_{pred}$ are complementary since the maximal prediction of
the network may not necessarily contain a local maximum in any of the
probability channels.
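A compact sketch of this construction (our reading of it, using off-the-shelf connected components and Otsu thresholding; the channel layout and window size for local maxima are assumptions):
\begin{verbatim}
import numpy as np
from scipy.ndimage import label, maximum_filter
from skimage.filters import threshold_otsu

def candidate_regions(P, n_obj):
    # P: H x W x C probability maps; the last n_obj channels are assumed
    # to be the object classes.  Returns boolean masks R = R_pred U R_m.
    obj = P[..., -n_obj:]
    pred = obj.argmax(axis=-1)
    regions = []
    # R_pred: connected components of the pixelwise object prediction
    for c in range(n_obj):
        lab, n = label(pred == c)
        regions += [lab == i for i in range(1, n + 1)]
    # R_m: per-channel Otsu threshold, kept only if the component
    # contains a local maximum of that channel
    for c in range(n_obj):
        ch = obj[..., c]
        peaks = (ch == maximum_filter(ch, size=21)) & (ch > 0)
        lab, n = label(ch > threshold_otsu(ch))
        for i in range(1, n + 1):
            comp = lab == i
            if (comp & peaks).any():
                regions.append(comp)
    return regions
\end{verbatim}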
We train a regressor to score each candidate region: Let $R_{i}$, $i\in\{1,\dots,|R|\}$,
be a candidate region and $B(R_{i})$ its bounding box. We extract
short and long range contextual features for $R_{i}$: for the \textit{\textcolor{black}{short}}
range features we scale $B(R_{i})$ by a factor of 3 while retaining
the box center. Next, we split the enlarged bounding box into a $5\times5$
grid, where we assign to each grid cell the mean value of each channel
of the probability maps inside that cell. Formally, let $w$ be the
enlarged window obtained from $B(R_{i})$ and $w_{r,c}$, $r,c\in\{1\ldots5\}$,
the subwindow for row $r$ and column $c$. For each channel $\ell\in\mathcal{L}$ we define
\begin{equation}
F_{context}^{s}(R_{i},r,c,\ell)=\sum_{x,y\in w_{r,c}}\frac{P(x,y,\ell)}{|w_{r,c}|}
\end{equation}
where $|w_{r,c}|$ is the area of each grid cell in pixels. This yields
a feature vector of length $5\times5\times|\mathcal{L}|$,
representing the values of the network\textquoteright s predictions
both inside the predicted region and in the immediate surroundings.
Similarly, define a second bounding box, $B^{l}(R_{i})$ to be a bounding
box 1/3 times the size of the image with the same center as $B(R_{i})$.
We define a $5\times5$ grid inside $B^{l}(R_{i})$ and use this to
extract \textit{long} range contextual features $F_{context}^{L}$
in the same manner as for the short range $F_{context}^{s}.$ We compute
these features and concatenate them for all candidate regions from
the training set (including the ground truth ones). We train a regressor
whose input is these features and whose target output is the overlap
with the bounding boxes of ground-truth regions.
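A sketch of the short-range grid features follows (again our own illustration; the long-range features $F_{context}^{L}$ are obtained in the same way, with a box of one third the image size centered on the candidate):
\begin{verbatim}
import numpy as np

def grid_context_features(P, box, scale=3.0, grid=5):
    # P: H x W x C probability maps (outputs of both networks can be stacked);
    # box: (y0, x0, y1, x1) bounding box of the candidate region.
    # The box is enlarged by `scale` around its center and split into a
    # grid x grid partition; each cell stores the mean of every channel.
    H, W, C = P.shape
    y0, x0, y1, x1 = box
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    h, w = (y1 - y0) * scale, (x1 - x0) * scale
    ys = np.clip(np.linspace(cy - h / 2, cy + h / 2, grid + 1), 0, H).astype(int)
    xs = np.clip(np.linspace(cx - w / 2, cx + w / 2, grid + 1), 0, W).astype(int)
    feats = np.zeros((grid, grid, C))
    for r in range(grid):
        for c in range(grid):
            cell = P[ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
            if cell.size:
                feats[r, c] = cell.reshape(-1, C).mean(axis=0)
    return feats.ravel()    # length grid * grid * C
\end{verbatim}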
It is worth noting that we have attempted to incorporate these contextual
features as another layer in the network, as they effectively serve
as another pooling layer over the probability maps. Perhaps due to
the small training set we have used, this network did not converge
to results as good as those obtained by our context features. At least
for a dataset of this size, constraining the context features to be
of this form performed better than letting the network learn how to
perform the pooling on its own. It remains an open question if a larger
dataset would facilitate end-to-end learning, effectively learning
the contextual features as an integral part of the network.
\subsection{Classification}
We now show how all the features are combined into a final classification.
We extract the features as follows: for a network $N_{t}\in\{Net_{coarse},Net_{fine}\}$
we denote
\begin{equation}
F^{t}(I)=[S_{1}^{N_{t}},S_{2}^{N_{t}},\dots,S_{L}^{N_{t}}]^{T}\label{eq:global_score}
\end{equation}
\noindent where $S_{c}^{N_{t}}=\max_{x,y}\{P^{N_{t}}(I,x,y,c)\}$. $F^{t}(I)$
is a concatenation of the maximal values of each channel, including
those predicting the body parts; it serves us as a global representation
of the image. Note that while features extracted this way are significantly
more discriminative when $N_{t}=Net_{fine}$, we found that the networks
are complementary; combining $F^{t}(I)$ from both networks works
better than each on its own. Next we add features from the action
object and face regions. The face is detected using the face detector
of \cite{mathias2014face}. We increase the size of each image by
a factor of 2 before running the face detector and retain the single highest scoring
face per image. Using the regressor learned in Section \ref{sub:Contextual-Features},
we rank the candidate regions $R{}_{i}$ and retain the top $q$ regions
per image. For each region, we extract appearance features in the
form of fc6 features using the vgg-16 \cite{Simonyan2014} network. In training,
we use the ground-truth regions. Let $F_{f}(I)$, $F_{obj,i}(I)$
be the feature representations for the face and $q=3$ candidate object
features respectively, where $i=1\dots q$ are the indices of the
top-ranked regions. Let $F_{g}(I)$ be the fc6 representation of the
entire image. We extract features from the candidate regions by using
both bounding boxes and masked versions as in section \ref{sec:The-Importance-of},
with candidate regions $R_{i}$ as masks. The final scoring of image
$I$ for class $c$ is defined as:
\begin{equation}
S(I)_{c}=max_{i}W_{c}^{T}\cdot F_{i}(I)\label{eq:final_score}
\end{equation}
where $F_{i}(I)=[F_{g},F^{coarse},F^{fine},F_{f},F_{obj,i}]^{T}$
(dropping the $I$ argument for brevity) and $W_{c}^{T}$ is the set
of weights learned by an SVM classifier in a one-versus all manner
per action class. Please refer to section \ref{sec:Experiments} for
experiments validating our approach.
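The final scoring of Eq.~(\ref{eq:final_score}) then amounts to the following (a sketch; the feature extractors and trained per-class weights are assumed to be available, and the SVM bias term is omitted for brevity):
\begin{verbatim}
import numpy as np

def score_image(F_g, F_coarse, F_fine, F_face, F_obj_candidates, svm_weights):
    # F_obj_candidates: list of feature vectors, one per top-ranked region;
    # svm_weights: dict mapping action class -> weight vector w_c.
    base = np.concatenate([F_g, F_coarse, F_fine, F_face])
    scores = {}
    for cls, w in svm_weights.items():
        scores[cls] = max(w @ np.concatenate([base, F_obj])
                          for F_obj in F_obj_candidates)
    return scores
\end{verbatim}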
\subsection{Training\label{sub:Training}}
To train $Net_{coarse}$, we construct a fully-convolutional neural
net with the DAG architecture \cite{long2014fully} to produce a prediction
of 8-pixel stride by fine-tuning the vgg-16 network. We use a learning
rate of .0001 for 100 epochs, as our training set contains only a
few hundred images. A similar procedure is done for $Net_{fine}$,
where the training samples are selected as described in Section \ref{sub:Coarse-to-Fine_3.1}.
For the FRA dataset, the sub-windows are selected to be the ground-truth
bounding box of each face scaled by a factor of 2; this includes most
of the action objects. Note that for FRA we did not use the provided
bounding boxes at test time. The regressor for predicting the locations
of action objects is a support-vector regression trained on the context
features around each candidate region (see Section \ref{sub:Contextual-Features}).
All classifiers are trained using an SVM with regularization parameter $\lambda=.001$.
\section{Experiments\label{sec:Experiments}}
\begin{center}
\begin{table*}[t]
\begin{centering}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\textbf{G} & \textbf{Face} & \textbf{C} & \textbf{F} & \textbf{Obj} & \multicolumn{1}{l}{\textbf{Mean AP}} & \includegraphics[width=0.08\columnwidth]{figures/icons/drink} & \includegraphics[width=0.08\columnwidth]{figures/icons/cigarette} & \includegraphics[width=0.08\columnwidth]{figures/icons/bubbles} & \includegraphics[width=0.08\columnwidth]{figures/icons/toothbrush} & \includegraphics[width=0.08\columnwidth]{figures/icons/phone}\tabularnewline
\hline
& + & + & + & + & 0.845 & 0.840 & 0.889 & 0.913 & 0.743 & 0.839\tabularnewline
\hline
+ & & + & + & + & 0.851 & 0.836 & 0.907 & 0.909 & 0.751 & 0.851\tabularnewline
\hline
+ & + & & + & + & 0.856 & 0.842 & 0.898 & 0.907 & 0.771 & 0.865\tabularnewline
\hline
+ & + & + & & + & 0.830 & 0.818 & 0.841 & 0.905 & 0.753 & 0.835\tabularnewline
\hline
+ & + & + & + & & 0.848 & 0.801 & 0.895 & 0.905 & 0.780 & 0.860\tabularnewline
\hline
+ & + & + & + & + & \textbf{0.865} & \textbf{0.845} & \textbf{0.910} & \textbf{0.914} & \textbf{0.786} & \textbf{0.868}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\protect\caption{\label{tab:Ablation-Study}Ablation Study of various features combinations.
\textbf{G}lobal, \textbf{Face}, \textbf{F}ine/\textbf{C}oarse segmentation,
action \textbf{Obj}ect features). Removing the fine phase has the
worst effect on performance. }
\end{table*}
\par\end{center}
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
& \textbf{MAP} & \includegraphics[width=0.08\columnwidth]{figures/icons/drink} & \includegraphics[width=0.08\columnwidth]{figures/icons/cigarette} & \includegraphics[width=0.08\columnwidth]{figures/icons/bubbles} & \includegraphics[width=0.08\columnwidth]{figures/icons/toothbrush} & \includegraphics[width=0.08\columnwidth]{figures/icons/phone}\tabularnewline
\hline
C & 0.636 & 0.593 & 0.692 & 0.720 & 0.551 & 0.623\tabularnewline
\hline
G & 0.642 & 0.601 & 0.623 & 0.789 & 0.660 & 0.537\tabularnewline
\hline
F & 0.668 & 0.565 & \textbf{0.816} & 0.697 & 0.513 & \textbf{0.749}\tabularnewline
\hline
Face & 0.699 & 0.658 & 0.578 & 0.829 & \textbf{0.686} & 0.742\tabularnewline
\hline
Obj & \textbf{0.743} & \textbf{0.776} & 0.744 & \textbf{0.851} & 0.606 & 0.738\tabularnewline
\hline
\end{tabular}
\protect\caption{\label{tab:Single-feature-performance}Single feature performance
for classifying actions. (\textbf{G}lobal, \textbf{Face}, \textbf{F}ine/\textbf{C}oarse
segmentation,action\textbf{ Obj}ect features) On average, features
extracted from the automatically detected action-object outperform
others.}
\end{table}
We now show some experimental results. We begin with FRA, a subset
of Stanford-40 Actions dataset, containing 5 action classes: drinking,
smoking, blowing bubbles, brushing teeth and phoning. To show the
contribution of each feature, we perform the following ablation study.
First, we test the performance of each feature on its own. In Table
\ref{tab:Single-feature-performance}, we can see that while the coarse
level information extracted by $Net_{coarse}$ provides moderate
results, it is outperformed by other feature types, namely the global
image representation and that of the face. The fine representation
performs on average slightly less well than the extended face area,
with the biggest exception being the smoking category, where it improves
from an AP of .578 to .816 (in many cases the cigarettes are held
far away from the face). As can be seen in the images of Figure \ref{fig:Highly-scoring-detected},
the semantic segmentation of \textbf{$Net_{fine}$ }is able to capture
well the cigarettes in the image. The \textbf{Obj} score was obtained
by assigning to each image that of the highest scoring candidate region.
We can see that its performance nears that of the ``Oracle'' classifier,
which is given the bounding box at test time. See Figure \ref{fig:Highly-scoring-detected}
for an example of top-ranked detected action objects weighted by
the predicted object probabilities.
\begin{figure}
\subfloat[drinking]{\includegraphics[width=1\columnwidth]{figures/top_masked_patches_drink}
}
\subfloat[smoking]{\includegraphics[width=1\columnwidth]{figures/top_masked_patches_smoke}}
\subfloat[blowing bubbles]{\includegraphics[width=1\columnwidth]{figures/top_masked_patches_bubbles}}
\subfloat[brushing teeth]{\includegraphics[width=1\columnwidth]{figures/top_masked_patches_brush}
}
\subfloat[phoning]{\includegraphics[width=1\columnwidth]{figures/top_masked_patches_phone}}\protect\caption{\label{fig:Highly-scoring-detected}Highly scoring action-objects
detected by our method, weighted by the action-object per pixel probability
for the respective class.}
\end{figure}
Next, we performed an ablation study showing how performance changes
when we use all of the sources of information except one. Table \ref{tab:Ablation-Study}
summarizes this. We can see that performance is worst when excluding
the $Net_{fine}$ predictions. Also, as expected by our motivation
of the problem in Section \ref{sec:The-Importance-of}, we see that
the best performance is gained when including the predicted object
locations. Overall, the mean average precision increases from 0.642,
obtained with the baseline global features produced by the vgg-16 network,
to 0.865, a 35\% relative increase. Note that this is also
quite near the results of the ``Oracle'' classifier (Section \ref{sec:The-Importance-of}).
\subsection{Joint training for Pose and Objects}
It is noteworthy that the joint training of the networks to detect
the hands and faces along with the action objects performed dramatically
better than attempting to train a network to predict the location
of the action objects only; we have attempted to train a network when
the ground-truth masks contained only the locations and identities
of action objects without body parts. This worked very poorly; we
conjecture that while the action objects may be difficult to detect
on their own, contextual cues are implicitly learned by the networks.
Such cues likely include proximity to the face or hand.
\section{Conclusions \& Future Work\label{sec:Conclusions-=000026-Future}}
We have demonstrated that exact localization of action objects and
their relation to the body parts is important for action classes where
the action-objects are small and often barely visible. We have done
so using two main elements. First, using a coarse-to-fine approach
which focuses in a second stage on relevant image regions. Second,
using contextual cues which aid the detection of the small objects.
This happens both during the network's operation, as it seeks the
objects jointly with the relevant body parts, and when pruning false
object candidates generated by the networks by considering their
context explicitly. The coarse-to-fine approach, whose networks are
based on the full 8-pixel stride model of \cite{long2014fully}, utilizes
features from intermediate levels of the network and not only from
the top level. It outperforms a purely feed-forward method such as the one
obtained from the 32-pixel stride version. Together, these elements
aid in good localizations of action objects, leading to a significant
improvement over baseline methods. Our method uses two networks and
a specific form of contextual features. Our comparisons showed that
the results are better than incorporating the entire process in an
end-to-end pipeline; it remains an open question if this is due to
the relatively small size of the training set. A current drawback
of the method is that it requires annotations of both body parts and
action objects in the dataset; in the future we intend to alleviate
this constraint by being able to combine information from existing
datasets, which typically contain annotations of objects or poses,
but not both.
\bibliographystyle{plain}
\section{Introduction: basic definitions and statement of the results}
\begin{subsection}{The symplectomorphisms framework}
Denote by $M$ a $2d$-dimensional manifold with Riemannian structure and endowed with a closed and nondegenerate 2-form $\omega$ called the \emph{symplectic form}. Let $\mu$ stand for the volume measure associated to the volume form obtained by wedging $\omega$ with itself $d$ times, i.e., $\omega^d=\omega\land\dots\land\omega$. The Riemannian structure induces a norm $\|\cdot\|$ on the tangent bundle $T_x M$. Denote the Riemannian distance by $d(\cdot,\cdot)$. We will use the canonical norm of a bounded linear map $A$ given by $\|A\|=\sup_{\|v\|=1}\|A\cdot v\|$. By the theorem of Darboux (see e.g.~\cite[Theorem 1.18]{MZ}) there exists an atlas $\{\varphi_j\colon U_j\to\mathbb{R}^{2d}\}$, where $U_j$ is an open subset of $M$, satisfying $\varphi_j^*\omega_0=\omega$ with $\omega_0=\sum_{i=1}^d dy_i\land dy_{d+i}$. A diffeomorphism $f\colon(M,\omega)\to(M,\omega)$ is called a \emph{symplectomorphism} if it leaves invariant the symplectic structure, say $f^*\omega=\omega$. Observe that, since $f^*\omega^d=\omega^d$, a symplectomorphism $f\colon M\to M$ preserves the volume measure $\mu$.
Symplectomorphisms arise naturally in the rational mechanics formalism as the first return Poincar\'e maps of hamiltonian flows. For this reason, it has long been one of the most interesting research fields in mathematical physics. We suggest the reference \cite{MZ} for more details on general hamiltonian and symplectic theories. Let $(\text{Symp}_{\omega}^1(M), C^1)$ denote the set of all symplectomorphisms of class $C^1$ defined on $M$, endowed with the usual $C^1$ Whitney topology.
\end{subsection}
\begin{subsection}{The shadowing property}
The concept of shadowing in dynamical systems is inspired by the numerical idea of estimating the differences between exact and approximate solutions along orbits, and of understanding the influence of the errors that we commit and allow at each iterate. We may ask if it is possible to shadow approximate trajectories of a given dynamical system by exact ones. We refer to Pilyugin's book~\cite{P} for a complete description of the subject.
There are, of course, considerable limitations to the amount of information we can extract from a specific system exhibiting the shadowing property, since a $C^1$-close system may lack that property. For this reason it is both natural and useful to require that a selected model keeps the property under small perturbations; this leads to the stably shadowable dynamical systems.
For $\delta>0$ and $a,b\in [-\infty,+\infty]$ such that $a<b$, the sequence of points $\{x_i\}_{i=a}^b$ in $M$ is called a \emph{$\delta$-pseudo orbit} for $f$ if $d(f(x_i),x_{i+1})<\delta$ for all $a\leq i \leq b-1$.
Let $\Lambda\subseteq M$ be a closed and $f$-invariant set. The symplectomorphism $f\colon M\rightarrow M$ is said to have \emph{the shadowing property} in $\Lambda$ if for all $\epsilon>0$, there exists $\delta>0$, such that for any $\delta$-pseudo orbit $(x_n)_{n\in\mathbb{Z}}$ in $\Lambda$, there is a point $x\in M$ which $\epsilon$-shadows $(x_n)_{n\in\mathbb{Z}}$, i.e. $d(f^i(x),x_i)<\epsilon$ for all $i\in\mathbb{Z}$.
Let $f \in \text{Symp}^{1}_\omega(M)$. We say that $f$ is $C^1$-\emph{stably (or robustly) shadowable} if there exists a neighborhood $\mathcal{V}$ of $f$ in $\text{Symp}^{1}_\omega(M)$ such that any $g\in\mathcal{V}$ has the shadowing property.
We point out that $f$ has the shadowing property if and only if $f^n$ has the shadowing property for every $n\in\mathbb{Z}$.
\end{subsection}
\begin{subsection}{Hyperbolicity and statement of the results}
We say that an element $f$ of the set $\text{Symp}_{\omega}^1(M)$ is \emph{Anosov} if there exists $\lambda\in(0,1)$ such that the tangent bundle over $M$ splits into two $Df$-invariant subbundles $TM=E^u\oplus E^s$, with $\|Df^n|_{E^s}\|\leq \lambda^n$ and $\|Df^{-n}|_{E^u}\|\leq \lambda^n$. We observe that there are plenty of Anosov diffeomorphisms which are not symplectic.
Let us recall that a periodic point $p$ of period $\pi$ is said to be \emph{hyperbolic} if the tangent map $Df^\pi(p)$ has no norm one eigenvalues.
Let $f \in \text{Symp}^{1}_\omega(M)$. We say that $f$ is in $\mathcal{F}^1_{\omega}(M)$ if there exists a neighborhood $\mathcal{V}$ of $f$ in $\text{Symp}^{1}_\omega(M)$ such that every $g\in\mathcal{V}$ has all its periodic orbits hyperbolic. We define analogously the set $\mathcal{F}^1_\mu(M)$ in the broader set of volume-preserving diffeomorphisms $\text{Symp}^{1}_\mu(M)$.
The results in this note can be seen as a generalization, to symplectomorphisms and to volume-preserving diffeomorphisms, of the result in \cite{S}. Let us state our first result.
\begin{maintheorem}\label{teo1}
If a symplectomorphism $f$ defined on a symplectic manifold $M$ is $C^1$-stably shadowable, then $f$ is Anosov.
\end{maintheorem}
It is well known that Anosov diffeomorphisms impose severe topological restrictions on the manifolds that support them. Thus, we present a simple but startling consequence of Theorem~\ref{teo1} that shows how topological conditions on the phase space impose numerical restrictions on a given dynamical system.
\begin{corollary}
If $M$ does not support an Anosov diffeomorphism, then there are no $C^1$-stably shadowable symplectomorphisms on $M$.
\end{corollary}
We end this introduction by recalling a result in the same vein as ours, proved in~\cite{BR}: \emph{$C^1$-robustly topologically stable symplectomorphisms are Anosov}.
We point out that, in the broader setting of dissipative diffeomorphisms, it was proved in \cite[Theorem 4]{W} that \emph{expansiveness} together with the (robust) shadowing property implies (robust) topological stability. In \cite[Theorem 11]{W} it is also proved that topological stability implies the shadowing property. Another result which relates $C^1$-robust properties with hyperbolicity is the Horita--Tahzibi theorem (see~\cite{HT}), which states that $C^1$-robustly transitive symplectomorphisms are partially hyperbolic.
\bigskip
\end{subsection}
\section{Proof of Theorem~\ref{teo1}}\label{diff}
Theorem~\ref{teo1} is a direct consequence of the following two propositions. The following result can be found in~\cite{Ne}.
\begin{proposition}\label{Newhouse} (Newhouse)
If $f\in\mathcal{F}^1_\omega$, then $f$ is Anosov.
\end{proposition}
The following result is a symplectic version of~\cite[Proposition 1]{M}. Actually, Moriyasu, while working in the dissipative context, considered the shadowing property in the non-wandering set, which, in the symplectic setting, and due to Poincar\'e recurrence, is the whole manifold $M$.
\begin{proposition}\label{Moriyasu}
If $f$ is a $C^1$-stably shadowable symplectomorphism, then $f\in\mathcal{F}^1_\omega$.
\end{proposition}
\begin{proof}
The proof is by contradiction; let us assume that there exists a $C^1$-stably shadowable symplectomorphism $f$ having a non-hyperbolic closed orbit $p$ of period $\pi$.
In order to proceed with the argument we need to $C^1$-approximate the symplectomorphism $f$ by a new one, $f_1$, which, in the local coordinates given by Darboux's theorem, is \emph{linear} in a neighborhood of the periodic orbit $p$. To perform this task in the symplectic setting, and taking into account~\cite[Lemma 3.9]{AM}, higher smoothness of the symplectomorphism is required.
Thus, if $f$ is of class $C^\infty$, take $h=f$ (and $q=p$ below); otherwise we use \cite{Z} in order to obtain a $C^\infty$, $C^1$-stably shadowable symplectomorphism $h$, arbitrarily $C^1$-close to $f$, and such that $h$ has a periodic orbit $q$, close to $p$, with period $\pi$. We observe that $q$ may not be the analytic continuation of $p$, and this is precisely the case when $1$ is an eigenvalue of the tangent map $Df^{\pi}(p)$.
If $q$ is not hyperbolic take $g=h$. If $q$ is hyperbolic for $Dh^{\pi}(q)$, then, since $h$ is $C^1$-arbitrarily close to $f$, the distance between the spectrum of $Dh^{\pi}(q)$ and the unit circle can be taken arbitrarily close to zero. This means that we are in the presence of ``poor'' hyperbolicity, and thus in a position to apply~\cite[Lemma 5.1]{HT} to obtain a new $C^1$-stably shadowable symplectomorphism $g\in\text{Symp}_{\omega}^\infty(M)$, $C^1$-close to $h$ and such that $q$ is a non-hyperbolic periodic orbit.
Now, we use the \emph{weak pasting lemma} (\cite[Lemma 3.9]{AM}) in order to obtain a $C^1$-stably shadowable symplectomorphism $f_1$ such that, in local canonical coordinates, $f_1$ is linear and equal to $Dg$ in a neighborhood of the periodic non-hyperbolic orbit $q$. Moreover, the existence of an eigenvalue, $\lambda$, with modulus equal to one is associated to a symplectic invariant two-dimensional subspace contained in the subspace $E^c_q\subseteq T_q M$ associated to the norm-one eigenvalues. Furthermore, up to a perturbation using again~\cite[Lemma 5.1]{HT}, the argument of $\lambda$ can be taken to be a rational multiple of $2\pi$. This fact assures the existence of periodic orbits arbitrarily close to the $f_1$-orbit of $q$. Thus, there exists $m\in\mathbb{N}$ such that $f_1^{m\pi}|_{E^c_q}=(Dg^{m\pi})_q|_{E^c_q}=id$ holds in a $\xi$-neighborhood of $q$. Recall that, since $f_1$ has the shadowing property, $f_1^{m\pi}$ also has it. Therefore, fixing $\epsilon\in(0,\frac{\xi}{4})$, there exists $\delta\in(0,\epsilon)$ such that every $\delta$-pseudo $f_1^{m\pi}$-orbit $(x_n)_n$ is $\epsilon$-traced by some point in $M$. Take $y\in E^c_q$ such that $d(y,q)=\frac{3\xi}{4}$ and a closed $\delta$-pseudo $f_1^{m\pi}$-orbit $(x_n)_n$ contained in $E^c_q$ (such a pseudo orbit exists because $f_1^{m\pi}$ fixes every point of $E^c_q$ in this neighborhood) such that any ball centered at $x_i$ with radius $\epsilon$ is still contained in the $\xi$-neighborhood of $q$; moreover, take $x_0=q$ and $x_s=y$.
By the shadowing property there exists $z\in M$ such that $d(f_1^{m\pi i}(z),x_i)<\epsilon$ for every $i\in\mathbb{Z}$. Moreover, we observe that $d(f_1^{m\pi i}(z),q)<\xi$ for every $i\in\mathbb{Z}$; since $f_1$ is linear in this neighborhood and the eigenvalues outside $E^c_q$ have modulus different from one, a point whose whole orbit stays in the $\xi$-neighborhood of $q$ must belong to $E^c_q$. Therefore, $z\in E^c_q$ and, in particular, $f_1^{m\pi s}(z)=z$. Finally, recalling that $x_0=q$ and that $z$ $\epsilon$-shadows the pseudo orbit, we reach a contradiction by noting that
$$d(q,z)\geq d(q,x_s)-d(x_s,z)=d(q,y)-d(x_s,f_1^{m\pi s}(z))\geq \frac{3\xi}{4}-\epsilon> \frac{\xi}{2}>\epsilon.$$
\end{proof}
\section{Volume-preserving diffeomorphisms}
Theorem~\ref{teo1} also holds in the broader context of volume-preserving diffeomorphisms.
\begin{maintheorem}\label{teo2}
If a volume-preserving diffeomorphism $f$ defined on a manifold $M$ is $C^1$-stably shadowable, then $f$ is Anosov.
\end{maintheorem}
\begin{proof}
The proof follows the same steps as that of Theorem~\ref{teo1}. Next we outline the strategy, indicating the fundamental pieces that replace the ones used in the proof of Theorem~\ref{teo1}.
\begin{itemize}
\item The version of Proposition~\ref{Newhouse} for volume-preserving diffeomorphisms is proved in ~\cite[Theorem 1.1]{AC};
\item Proposition~\ref{Moriyasu} can be obtained by noting that:
\begin{enumerate}
\item the Darboux coordinates are replaced by the Moser volume-preserving coordinates (cf.~\cite{Mo});
\item the result in~\cite{Z} is replaced by the smoothness result in~\cite{A};
\item the perturbation lemma in~\cite{HT} is replaced by its counterpart in the volume-preserving case, proved in \cite[Proposition 7.4]{BDP};
\item and, finally, we should use~\cite[Lemma 3.6]{AM} instead of~\cite[Lemma 3.9]{AM}.
\end{enumerate}
\end{itemize}
We leave the details to the reader.
\end{proof}
\section*{Acknowledgements}
The author was partially supported by the FCT-Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, project PTDC/MAT/099493/2008.
\section{Introduction}
Machine learning is often used to estimate classification models for a set of predefined categories. Most of the time, these categories are assumed to be independent.
When independence cannot be assumed we may either construct artificial hierarchies (hierarchical clustering), or classify new instances onto a hierarchy
that is given, typically representing is-a relations.
In this paper we study cases where the hierarchy is already provided. Furthermore, the hierarchy is a tree and the classification nodes are always the
leaves of the hierarchy. We also assume that each instance belongs to only one category (single-label classification).
Many researchers approach hierarchical classification problems \cite{Brouard12,TsoumakasLMV13} using flat classification, i.e. ignoring the hierarchy.
Hierarchical classification approaches typically divide the problem into smaller ones, usually one classification problem for each node of the hierarchy.
For each of these problems fewer features and instances are required to train a good classifier, compared to the respective flat approaches.
This can be very important, especially in cases of large scale classification, where the number of categories and instances can increase to thousands
and millions respectively. In such cases, a hierarchical approach would require much fewer resources than a flat one.
The main issue in hierarchical classification is to combine the decisions of the node-specific classifiers appropriately, in order to predict a category for an instance.
The most common approach is that of cascade classification. In this case, we start at the root of the hierarchy and greedily select the most probable
descendant. This continues until we reach a leaf, which is chosen as the predicted node.
The main disadvantage of this approach is that any mistake made during the descent deterministically leads to a wrong final decision.
Therefore the cascade is very sensitive to the quality of the inner node classifiers.
In this paper we propose a new approach, which is as fast as cascade regarding training but leads to better results compared to cascade and flat classification,
using the same classification algorithms.
In the next Section we present the related work, while in Section 3 we introduce our approach. Section 4 discusses our experimental results.
Finally, Section 5 concludes and points to future work.
\section{Related Work}
Although hierarchical classification has many advantages, typically researchers resort to mildly hierarchical or even flat approaches \cite{kosmopoulos2010}.
One reason for this is that flat classification is well studied, so it is easier to transfer methods from this field.
On the other hand on large scale problems, the flat use of traditional classifiers, such as SVMs, is often prohibitively expensive computationally \cite{Liu:2005}.
Early work in hierarchical classification focused on approaches such as shrinkage \cite{McCallumRMN98} and hierarchical mixture models \cite{ToutanovaCPH01}.
Unfortunately most of these approaches cannot be applied to large scale problems, at least in the form described in the original papers.
New methods based on similar ideas, such as that of latent concepts \cite{QiuHLZ11}, continue to appear in the literature, taking also into account
scalability issues. But still most of the proposed methods are tested on rather small datasets with small hierarchies.
Mildly hierarchical approaches typically make limited use of the hierarchy. Methods such as \cite{XueXYY08} use only some levels
of the hierarchy, flattening the rest. Other approaches such as \cite{BabbarPGA13}, alter the initial hierarchy before performing cascading in order
to minimize errors at the upper levels of the hierarchy.
\section{Probabilistic Cascading}
In our method, following the cascading approach, we train one binary classifier for each node of the hierarchy.
For example, using the hierarchy of Figure \ref{treeExample}
we would train one classifier for each of the nodes Arts, Health, Music, Dance, Fitness and Medicine.
The binary classifier of a node N is trained using as positive examples the instances belonging to the leaf descendants of N and as negative examples the instances belonging to the leaf descendants of its siblings. For example, the binary classifier of node Music would use all instances belonging to Music as positive
examples and all instances belonging to Dance as negative examples. Similarly for the binary classifier of node Arts, all instances belonging to Music and Dance would be positive examples,
while all instances belonging to Fitness and Medicine would be negative.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto, node distance=3cm,
thick,main node/.style={circle,fill=blue!20,draw,font=\sffamily\normalsize\bfseries, minimum size=14mm, scale=0.6}]
\node[main node] (1) {Root};
\node[main node] (2) [below left of=1] {Arts};
\node[main node] (3) [below right of=1] {Health};
\node[main node] (4) [below left of=2] {Music};
\node[main node] (5) [below right of=2] {Dance};
\node[main node] (6) [below right of=3] {Medicine};
\node[main node] (7) [below of=3] {Fitness};
\path[every node/.style={font=\sffamily\small}]
(1) edge node [left] {} (2)
edge node [right] {} (3)
(2) edge node [left] {} (4)
edge node [right] {} (5)
(3) edge node [right] {} (6)
edge node [right] {} (7);
\end{tikzpicture}
\caption{Tree hierarchy example.}
\label{treeExample}
\end{center}
\end{figure}
These binary classifiers require fewer resources to be trained compared to flat ones.
They can also be more accurate, since they aim to distinguish between fewer categories.
For example, if we have 10,000 leaves, each binary classifier would need to separate one class from 9,999 others.
In the case of cascading, it would only need to separate between the sibling categories.
Such classifiers would also require fewer features to train on, an important characteristic if we consider large datasets.
The main disadvantage of cascading is that any mistake is carried over. For example, if an instance belonging to the category Music gets a higher probability from the classifier of Health than from that of Arts, it is classified wrongly, without ever taking into consideration the classifiers of Music and Dance. In contrast, our method computes the probability of each root-to-leaf path for a testing instance and classifies it to the most probable path; we call this method $P_{path}$. As an example, the probability of an instance $d$ belonging to Music is:
\begin{gather}
P(Music|d) = \frac{P(Arts|d) P(Music|Arts,d)}{P(Arts|Music,d)}
\end{gather}
but since $P(Arts|Music,d)=1$:
\begin{gather}
P(Music|d) = P(Arts|d) P(Music|Arts,d)
\end{gather}
Similarly:
\begin{gather}
P(Arts|d) = \frac{P(Root|d) P(Arts|Root,d)}{P(Root|Arts,d)}
\end{gather}
but since $P(Root|Arts,d)=1$ and $P(Root|d)=1$:
\begin{gather}
P(Arts|d) = P(Arts|Root,d)
\end{gather}
By combining (2) and (4) we get:
\begin{gather}
P(Music|d) = P(Arts|Root,d)P(Music|Arts,d)
\end{gather}
These conditional probabilities are in fact the ones computed by the binary classifiers of each node.
So, given a document $d$, a leaf $C$, and the set $S$ of nodes on the root-to-$C$ path excluding the root (i.e., $C$ and its non-root ancestors), where $Ancestor(S_i)$ denotes the parent of $S_i$:
\begin{gather}
P(C|d) = \prod_{i=1}^{|S|}{P(S_{i}|Ancestor(S_{i}),d)}
\end{gather}
and we define $P_{path}$ as:
\begin{gather}
P_{path}(d) = \argmax _{C}P(C|d)
\end{gather}
Let us get back to our initial example, where document $d$ belongs to Music. Let us assume that we have the following probabilities:
\begin{itemize}
\item $P(Arts|Root,d) = 0.2$
\item $P(Health|Root,d) = 0.21$
\item $P(Music|Arts,d) = 0.9$
\item $P(Dance|Arts,d) = 0.6$
\item $P(Fitness|Health,d) = 0.1$
\item $P(Medicine|Health,d) = 0.2$
\end{itemize}
If we used standard cascading, document $d$ would be classified to category Medicine.
Using $P_{path}$ we get:
\begin{itemize}
\item $P(Music|d) = 0.18$
\item $P(Dance|d) = 0.12$
\item $P(Fitness|d) = 0.021$
\item $P(Medicine|d) = 0.042$
\end{itemize}
and $P_{path}$ would assign $d$ to class Music. The cost that we have to pay, compared to standard cascading, is that we have to compute all the
$P(C|d)$, in order to select the one with the highest probability.
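As an illustration, the following Python sketch computes the path probabilities of equations (6) and (7) on the toy hierarchy of Figure \ref{treeExample}; the \texttt{children} and \texttt{prob} dictionaries are hypothetical stand-ins for the trained per-node classifiers and the probabilities listed above, not part of our actual implementation.
\begin{verbatim}
# Minimal sketch of the P_path rule (illustrative data, not the real system).
# children: node -> child nodes; prob[(child, parent)]: the classifier output
# P(child | parent, d) for a fixed test document d.
children = {"Root": ["Arts", "Health"],
            "Arts": ["Music", "Dance"],
            "Health": ["Fitness", "Medicine"]}
prob = {("Arts", "Root"): 0.2, ("Health", "Root"): 0.21,
        ("Music", "Arts"): 0.9, ("Dance", "Arts"): 0.6,
        ("Fitness", "Health"): 0.1, ("Medicine", "Health"): 0.2}

def leaf_probabilities(node="Root", p=1.0):
    """Multiply P(child | parent, d) along every root-to-leaf path."""
    if node not in children:                  # a leaf has been reached
        return {node: p}
    out = {}
    for child in children[node]:
        out.update(leaf_probabilities(child, p * prob[(child, node)]))
    return out

scores = leaf_probabilities()       # {'Music': 0.18, 'Dance': 0.12, ...}
print(max(scores, key=scores.get))  # 'Music', the P_path prediction
\end{verbatim}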
\section{Experimental results}
In order to compare our approach against flat and cascade classification, we used the Task 1 dataset of the first Large Scale Hierarchical Text Classification Challenge (LSHTC1).\footnote {http://lshtc.iit.demokritos.gr/node/1}
This dataset contains 93,505 instances (split into train and validation files), composed of 55,765 distinct features and belonging to 12,294 categories. Classification is only allowed to the leaves of the hierarchy, which is a tree.
Each instance belongs to only one category.
The testing instances are 34,880 and the results are evaluated using the evaluation measures of the challenge (an Oracle is provided by the organizers)
which are the following:
\begin{itemize}
\item Accuracy
\item Macro F-measure
\item Macro Precision
\item Macro Recall
\item Tree Induced Error
\end{itemize}
As a classifier we used a L2 Regularized Logistic Regression with the regularization parameter C set to 1 (usually the default value). We also conducted
experiments with other regularization methods and other values of C, but the results were similar. All the experiments were conducted using TF/IDF instead of TF features, as our experiments indicated better performance with this feature set.
The goal of our experiments was to illustrate that the proposed method can improve the results of flat and cascade classification, using the same algorithm,
L2 Regularized Logistic Regression in this case. Further experimentation and engineering could make the method competitive with the best-performing systems in the challenge. However, we consider this exercise beyond the scope of the paper.
For flat classification, we trained one binary classifier (one versus all) for each leaf. We then assigned each testing document to the class with the highest probability.
For cascade classification we trained a binary classifier for each node of the hierarchy. We used as positive examples all the instances belonging to the descendant leaves of the node and as negative examples all the instances belonging to the descendant leaves of its siblings.
This results in more classifiers than for flat classification, but each of these classifiers is much easier to train, since it is trained on fewer instances.
In Table \ref{tbl:results_tf}, we present the results of each approach, for each evaluation measure.
The main observation is that $P_{path}$ outperforms both Flat and Cascade.
Another interesting result is that Flat is the worst approach according to Tree Induced Error.
This is an indication that by ignoring the hierarchy (flat classification), the mistakes tend to be located further from the correct category in the hierarchy.
This is very important in hierarchical classification, since different mistakes carry different weight.
Misclassifying an instance to a sibling of the correct category is a smaller error than if it were classified to a category five nodes away.
Flat evaluation measures generally fail to capture this, so tree induced error, being the only hierarchical measure of the five that we use, is more suitable for comparing the three approaches.
Given that our hierarchy is a tree and each instance belongs to only a single class, there is no need to take into account more complex hierarchical evaluation measures, and tree induced error is sufficient for safe conclusions.
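For clarity, the sketch below computes the tree induced error on the toy hierarchy of Figure \ref{treeExample}, assuming the usual definition as the number of edges on the path between the predicted and the true leaf; the \texttt{parent} dictionary is illustrative only.
\begin{verbatim}
# Sketch of tree induced error (assumed definition: number of edges between
# the predicted and the true leaf in the tree hierarchy).
parent = {"Arts": "Root", "Health": "Root", "Music": "Arts",
          "Dance": "Arts", "Fitness": "Health", "Medicine": "Health"}

def ancestors(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path                                # node, parent, ..., Root

def tree_induced_error(predicted, true):
    pred_path, true_path = ancestors(predicted), ancestors(true)
    lca = next(a for a in pred_path if a in true_path)  # lowest common ancestor
    return pred_path.index(lca) + true_path.index(lca)

print(tree_induced_error("Medicine", "Music"))  # 4: a distant mistake
print(tree_induced_error("Dance", "Music"))     # 2: a sibling mistake
\end{verbatim}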
\begin{table}[h]
\centering
\scalebox {0.8}{
\begin{tabular}{|l | c c c|}
\hline
Evaluation Measure & Flat & Cascade & $P_{path}$\\
\hline
Accuracy & 0.405 & 0.404 & \textbf{0.431} \\
Macro F-measure & 0.256 & 0.278 & \textbf{0.294} \\
Macro Precision & 0.254 & 0.269 & \textbf{0.287} \\
Macro Recall & \textbf{0.302} & 0.289 & \textbf{0.302} \\
Tree Induced Error & 3.874 & 3.609 & \textbf{3.437} \\
\hline
\end{tabular}
}
\caption{Results for each approach per evaluation measure, using TF/IDF features. With bold we mark the best performing approach, given each evaluation measure.}
\label{tbl:results_tf}
\end{table}
Both $P_{path}$ and Flat classification produce a probability for each leaf and the highest one is returned as the predicted category.
But what if we evaluated the list of categories, ranked according to their probability? In order to obtain such an assessment, in Figure \ref{topK} we calculate the recall for the $K$ most-probable categories, with $K$ ranging from 1 to 10.
As expected, the probability of success increases rapidly with $K$.
This is very important for a realistic semi-automated classification scenario, where a human annotator selects the correct label among thousands of categories.
Such a system would allow the annotator to select only between five or ten suggestions.
The second observation is that for all values of $K$, $P_{path}$ performs better than the Flat one.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.76]
\begin{axis}[
legend columns=2,
legend style={ at={(0.05,0.1)},anchor=west,draw=none},
xlabel={\small Top answers returned per instance},
ylabel={\small Recall at the Top},
ymin=0.3, ymax=0.9, xmin=0, xmax=10,
ytick={0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9},
xtick={1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
thick,
axis x line*=bottom,
axis y line*=left]
\pgfplotstableread{
Acc K
0.431565 1
0.551233 2
0.607597 3
0.642775 4
0.668091 5
0.687443 6
0.702523 7
0.715768 8
0.72758 9
0.737328 10
}\Ppath
\pgfplotstableread{
Acc K
0.404633 1
0.514865 2
0.570282 3
0.60715 4
0.633468 5
0.654081 6
0.671139 7
0.685731 8
0.698145 9
0.708839 10
}\Flat
\addplot[color=blue,very thick] table[y=Acc,x=K] {\Ppath};
\addlegendentry{\tiny $P_{path}$}
\addplot[color=red,very thick, densely dashed] table[y=Acc,x=K] {\Flat};
\addlegendentry{\tiny $Flat$}
\end{axis}
\end{tikzpicture}
\caption{Recall at the top $K$ answers of $P_{path}$ and Flat classification for various values of $K$.}
\label{topK}
\end{center}
\end{figure}
Regarding the scalability of the approaches, during training the two cascading approaches (standard and $P_{path}$) require fewer resources than the flat classifiers.
During classification, $P_{path}$ is slower than Cascade, since it takes into account all the root-to-leaf paths; its cost is similar to that of Flat classification.
\section{Conclusions}
In this paper we present the $P_{path}$ method for hierarchical classification. $P_{path}$ addresses the disadvantages of traditional flat and cascade classification.
Flat classification can be very computationally demanding in large scale problems and also completely ignores the hierarchy information, which could be exploited for better results.
Standard cascading, on the other hand, is much more computationally efficient, but suffers from the problem of early misclassification at the top levels of the hierarchy.
Our approach has the same training computational complexity as Cascade, while achieving better scores according to all the tested evaluation measures.
However, it is slower during classification, having a complexity similar to that of flat classification.
The version presented in this paper is designed for tree hierarchies. As a future work, we plan to extend the idea of $P_{path}$ to DAG hierarchies.
Furthermore, in this paper we focused on single-label classification. Although the idea of $P_{path}$ seems
compatible with multi-label approaches, further experiments need to be conducted in this direction.
\bibliographystyle{plain}
\section{Introduction}
Constraint satisfaction problems (CSPs) are widely used in AI, with applications in optimization, control, and planning \cite{russell2003artificial}.
While many classes of CSPs are intractable in the worst case~\cite{cook1971complexity}, many real-world CSP instances are easy to solve in practice~\cite{vardi2014boolean}.
As a result, there has been significant interest in understanding the average-case complexity of CSPs from multiple communities, such as AI, theoretical computer science, physics, and combinatorics~\cite{biere2009handbook}.
Random CSPs are an important model for studying the average-case complexity of CSPs. Past works have proposed several distributional models for random CSPs \cite{molloy2003models,creignou2003generalized}. An interesting feature arising from many models is a phase transition phenomenon that occurs as one changes the ratio of the number of constraints, $m$, to the number of variables, $n$. Empirical results \cite{mitchell1992hard} show that for many classes of CSPs, randomly generated instances are satisfiable with probability near 1 when $m/n$ is below a certain threshold. For $m/n$ larger than a threshold, the probability of satisfiability is close to 0. The statistical physics, computer science, and mathematics communities have focused much attention on identifying these threshold locations \cite{achlioptas2005rigorous,biere2009handbook}.
Phase transitions for common classes of CSPs such as $k$-SAT and $k$-XORSAT are very well-studied. For $k$-SAT, researchers struggled to find tight lower bounds on the satisfiability threshold until the breakthrough work of \citeauthor{achlioptas2002asymptotic}, which provided lower bounds a constant factor away from the upper bounds. Later works closed this gap for $k$-SAT \cite{coja2013going,coja2016asymptotic}. More recently, \citeauthor{Dudek2016CombiningTK} \shortcite{Dudek2016CombiningTK} also studied the satisfiability threshold for a more general CSP class, namely $k$-CNF-XOR, where both $k$-SAT and $k$-XORSAT constraints can be used. The results and analyses from these works, however, are all specific to the constraint classes studied.
In this paper, we provide new lower bounds on the location of the satisfiability threshold that hold for general boolean CSP classes. We focus on the setting where CSPs are generated by a single constraint type, though our analysis can extend to the setting with uniform mixtures of different constraint functions. We extend techniques from \cite{achlioptas2004threshold} and build on \cite{creignou2003generalized}, which proposes a distributional model for generating CSPs and provides lower bounds on the satisfiability threshold for these models. The significance of our work is that \textit{our bounds hold for all functions that could be used to generate random CSP instances}. The lower bounds from \cite{creignou2003generalized} are also broadly applicable, but they are looser than ours because they do not depend on constraint-specific properties. Our lower bounds are often tight (within a constant factor of upper bounds for many CSP classes) because they depend on specific properties of the Fourier spectrum of the function used to generate the random CSPs. Since these properties are simple to compute for any constraint function, our lower bounds are broadly applicable too.
The Fourier analysis of boolean functions \cite{o2014analysis} will be vital for obtaining our main results. Expressing functions in the Fourier basis allows for clean analyses of random constraints \cite{friedgut1999sharp,barak2015beating,achim2016beyond}. Our use of Fourier analysis is inspired by the work of \citeauthor{achim2016beyond} \shortcite{achim2016beyond}, who analyze the Fourier spectra of random hash functions used as constraints in CSP-based model counting. We show that the Fourier spectrum of our constraint-generating function controls the level of spatial correlation in the set of satisfying assignments to the random CSP. If the Fourier spectrum is concentrated on first and second order coefficients (corresponding to ``low frequencies''), this correlation will be very high, roughly increasing the variance of the number of solutions to a random CSP and decreasing the probability of satisfiability. In related work, \citeauthor{montanari2011reconstruction} \shortcite{montanari2011reconstruction} also use Fourier analysis to provide tight thresholds in the case where odd Fourier coefficients are all zero.
\section{Notation and Preliminaries}
\label{sec:notation}
In this section, we will introduce the preliminaries necessary for presenting our main theorem. First, we formally define our distribution for generating random CSPs, inspired from \cite{creignou2003generalized}.
We will use $n$ and $m$ to denote the number of variables and number of constraints in our CSPs, respectively. We will also let $f : \{-1, 1\}^k \rightarrow \{0, 1\}$ denote a binary function and refer to $f$ as our constraint function. Often we use the term ``solution set of $f$'' to refer to the set $\{u : f(u) = 1\}$. Using the constraint function $f$, we create constraints by applying $f$ to a signed subset of $k$ variables.
\begin{definition} [Constraint]
Let $I = (i_1, \ldots, i_k)$ be an ordered tuple of $k$ indices in $[n]$, and let $s$ be a sign vector from $\{-1, 1\}^k$. Given a vector $\sigma \in \{-1, 1\}^n$, we will define the vector $\sigma_{I, s}$ of size $k$ as follows:
\begin{equation}
\label{eq:sigmaIs}
\sigma_{I, s} = (s_1\sigma_{i_1}, \ldots, s_k\sigma_{i_k})
\end{equation}
Now we can denote the application of $f$ to these indices by $f_{I, s}(\sigma) = f(\sigma_{I, s})$. We call $f_{I, s}$ a constraint, and we say that $\sigma \in \{-1, 1\}^n$ satisfies the constraint $f_{I, s}$ if $f_{I, s}(\sigma) = 1$.
\end{definition}
The definition of a CSP generated from $f$ follows.
\begin{definition} [CSP generated from $f$]
We will represent a CSP with $m$ constraints and $n$ variables generated from $f$ as a collection of constraints $C_f(n, m) = \{f_{I_1, s_1}, \ldots, f_{I_m, s_m}\}$. Then $\sigma \in \{-1, 1\}^n$ satisfies $C_f(n, m)$ if $\sigma$ satisfies $f_{I_j, s_j}$ for $j = 1, \ldots, m$.
\end{definition}
\begin{example} [3-SAT]
Let $f : \{-1, 1\}^3 \rightarrow \{0, 1\}$ where $f(u) = 0$ for $u = (-1, -1, -1)$ and $f(u) = 1$ for all other $u$. Then $f$ is the constraint function for the 3-SAT problem.
\end{example}
With these basic definitions in place, we are ready to introduce the model for random CSPs.
\subsection{Random CSPs}
\label{sec:randcsp}
We discuss our model for randomly generating $C_f(n, m)$, and formally define a ``satisfiability threshold.''
To generate instances of $C_f(n, m)$, we simply choose $I_1, \ldots, I_m$ and $s_1, \ldots, s_m$ uniformly at random. For completeness, we sample without replacement, i.e. there are no repeated variables in a constraint, and no duplicate constraints. However, sampling with replacement does not affect final results. For the rest of this paper we abuse notation and let $C_f(n, m)$ denote a randomly generated CSP instance following this model.
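The following Python sketch, with illustrative parameter values, generates an instance of $C_f(n, m)$ under this model (duplicate constraints are not filtered out, for brevity) and checks satisfiability by brute force, which is feasible only for small $n$.
\begin{verbatim}
import itertools, random

def f_3sat(u):                                  # the 3-SAT constraint function
    return 0 if u == (-1, -1, -1) else 1

def sample_csp(n, m, k, seed=0):
    """Each constraint: k distinct indices I and a sign vector s, both uniform."""
    rng = random.Random(seed)
    return [(tuple(rng.sample(range(n), k)),
             tuple(rng.choice((-1, 1)) for _ in range(k)))
            for _ in range(m)]

def satisfies(sigma, csp, f):
    return all(f(tuple(s_j * sigma[i_j] for i_j, s_j in zip(I, s))) == 1
               for I, s in csp)

n, m, k = 12, 24, 3                             # r = m/n = 2, illustrative only
csp = sample_csp(n, m, k)
print(any(satisfies(sigma, csp, f_3sat)         # brute-force satisfiability check
          for sigma in itertools.product((-1, 1), repeat=n)))
\end{verbatim}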
Now we formally discuss satisfiability thresholds. We let $r = m/n$. For many constraint functions $f$, there exist thresholds $r_{f,\text{sat}}$ and $r_{f,\text{unsat}}$ such that
\begin{align*}
\lim_{n \rightarrow \infty} \Pr[C_f(n, rn) \ \text{is satisfiable}] =
\begin{cases}
1 \ \text{if} &\ r < r_{f,\text{sat}}\\
0 \ \text{if} &\ r > r_{f,\text{unsat}}
\end{cases}
\end{align*}
In general, it is unknown whether $r_{f,\text{sat}} = r_{f,\text{unsat}}$, but for some problems such as $k$-SAT (for large $k$) and $k$-XORSAT, affirmative results exist \cite{ding2015proof,pittel2016satisfiability}. If $r_{f,\text{sat}} = r_{f,\text{unsat}}$, then we say that the random CSP $C_f(n, m)$ exhibits a sharp threshold in $m/n$.
We are concerned with finding lower bounds $r_{f,\text{low}}$ on $r_{f,\text{unsat}}$ such that there exists a constant $C > 0$ independent of $n$ so that for sufficiently large $n$,
\begin{equation}
\label{eq:rflower}
\Pr[C_f(n, rn) \ \text{is satisfiable}] > C \ \text{for all} \ r < r_{f,\text{low}}
\end{equation}
For CSP classes with a sharp threshold, \eqref{eq:rflower} implies that
\begin{align*}
\lim_{n \rightarrow \infty} \Pr[C_f(n, rn) \ \text{is satisfiable}] = 1 \ \text{for all} \ r < r_{f,\text{low}}
\end{align*}
We also wish to find upper bounds $r_{f,\text{up}}$ such that
\begin{align*}
\lim_{n \rightarrow \infty} \Pr[C_f(n, rn) \ \text{is satisfiable}] = 0 \ \text{for all} \ r > r_{f,\text{up}}
\end{align*}
For an example of these quantities instantiated on a concrete example, refer to the experiments in Section \ref{sec:experiments}.
We provide a value for $r_{f,\text{up}}$ which was derived earlier in \cite{creignou2003generalized}. \citeauthor{dubois2001upper} \shortcite{dubois2001upper} and \citeauthor{creignou2007expected} \shortcite{creignou2007expected} provide methods for obtaining tighter upper bounds, but the looser values that we use are sufficient for showing that $r_{f,\text{low}}$ is on the same asymptotic order as $r_{f,\text{unsat}}$ for many choices of $f$.
The bound $r_{f,\text{low}}$ depends on both the symmetry and size of the solution set of $f$. The more assignments $u \in \{-1, 1\}^k$ such that $f(u) = 1$, the more likely it is that each constraint is satisfied. Increased symmetry reduces the variance in the number of solutions to $C_{f}(n, rn)$, so solutions are more spread out among possible CSPs in our class and the probability that $C_f(n, rn)$ will have a solution is higher. We will formally quantify this symmetry in terms of the Fourier spectrum of $f$, which we introduce next.
\subsection{Fourier Expansion of Boolean Functions}
We discuss basics of Fourier analysis of boolean functions. For a detailed review, refer to \cite{o2014analysis}. We define the vector space $\mathcal{F}_k$ of all functions mapping $\{-1, 1\}^k$ to $\mathbb{R}$. The set $\mathcal{F}_k$ has the inner product $\langle f_1, f_2 \rangle = \sum_{u \in \{-1, 1\}^k} f_1(u)f_2(u)/2^k$ for any $f_1, f_2 \in \mathcal{F}_k$.
This inner product space has orthonormal basis vectors $\chi_S$, where the parity functions $\chi_S$ follow $\chi_S(u) = \prod_{i \in S} u_i$ for all $S \subseteq [k]$, subsets of the $k$ indices. Because $(\chi_S)_{S \subseteq [k]}$ forms an orthonormal basis, if we write
\begin{equation}
\label{eq:hatfS}
\hat{f}(S) = \langle f, \chi_S \rangle = \frac{1}{2^k} \sum_{u \in \{-1, 1\}^k} f(u)\chi_S(u)
\end{equation}
then we can write $f$ as a linear combination of these vectors: $f = \sum_{S \subseteq [k]} \hat{f}(S) \chi_S$. We note that when $S = \emptyset$, the empty set, $\hat{f}(\emptyset)$ is simply the average of $f$ over $\{-1, 1\}^k$. We will refer to the coefficients $(\hat{f}(S))_{S \subseteq [k]}$ as the Fourier spectrum of $f$. Since these coefficients are well-studied in theoretical computer science \cite{o2014analysis}, the Fourier coefficients of many boolean functions are easily obtained.
\begin{example} [3-SAT]
\label{ex:3satfourier}
For $3$-SAT, $\hat{f}(\emptyset) = 7/8$. $\hat{f}(\{1\}) = \hat{f}(\{2\}) = \hat{f}(\{3\}) = 1/8$, and $\hat{f}(\{1, 2\}) = \hat{f}(\{2, 3\}) = \hat{f}(\{1, 3\}) = -1/8$.
\end{example}
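As a sanity check, these coefficients can be computed directly from \eqref{eq:hatfS}; the short Python sketch below does so by brute force (the function and subset names are illustrative).
\begin{verbatim}
import itertools

def fourier_coefficient(f, k, S):
    """hat{f}(S) = 2^{-k} * sum_u f(u) * prod_{i in S} u_i."""
    total = 0
    for u in itertools.product((-1, 1), repeat=k):
        chi = 1
        for i in S:                      # chi_S(u) = prod_{i in S} u_i
            chi *= u[i]
        total += f(u) * chi
    return total / 2 ** k

f_3sat = lambda u: 0 if u == (-1, -1, -1) else 1
print(fourier_coefficient(f_3sat, 3, ()))      #  0.875 = 7/8
print(fourier_coefficient(f_3sat, 3, (0,)))    #  0.125 = 1/8
print(fourier_coefficient(f_3sat, 3, (0, 1)))  # -0.125 = -1/8
\end{verbatim}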
Representing $f$ in the Fourier basis will facilitate our proofs, providing a simple way to express expectations over our random CSPs. The Fourier spectrum can also provide a measure of ``symmetry'' in $f$ - if some values of $\hat{f}(S)$ are high where $|S| = 1$, then satisfying assignments to $f$ are more skewed in the variable corresponding to $S$. We will show how this impacts satisfiability in Section \ref{sec:boundexplained}.
\begin{figure*}
\centering
\begin{tabular}{|c | c | c| c|}
\hline
\Tstrut
CSP class ($f$) & Best lower bound on $r_{f,\text{unsat}}$ & Our bound $r_{f,\text{low}}$ & Upper bound $r_{f,\text{up}}$\\ \hline
\Tstrut
$k$-XORSAT & 1 & $\frac{1}{2}$ & 1\\
\Tstrut
$k$-SAT & $2^k\ln 2 - \frac{1 + \ln 2}{2} - o_k(1)$ & $2^{k - 1} - O(k)$ & $2^k \ln 2$\\
\Tstrut
$k$-NAESAT & $2^{k - 1} \ln 2 - \frac{\ln 2}{2} - \frac{1}{4} - o_k(1)$ & $2^{k - 2} - \frac{1}{2}$ & $2^{k - 1}\ln 2$ \\
\Tstrut
$k$-MAJORITY & ? & $\frac{\frac{1}{2} - k\binom{k - 1} {\frac{k - 1}{2}}^2 2^{-2k +1}}{1 + k\binom{k - 1}{\frac{k - 1}{2}}^2 2^{-2k+ 2}} = 0.111 - o_k(1)$ & $1$ \\
\Tstrut
$a$-MAJ $\otimes 3$-MAJ & ? & $\frac{\frac{1}{2} - 3a\binom{a - 1}{\frac{a - 1}{2}}^2 2^{-2a - 1}}{1 + 3a\binom{a - 1}{\frac{a - 1}{2}}^2 2^{-2a - 2}} = 0.177 - o_a(1)$ & $1$\\
\Tstrut
$k$-MOD-3 & ? & $\frac{1}{4} - o_k(1)$ & $\frac{\ln 2}{\ln 3} + o_k(1)$\\
\Tstrut
OR$_b \otimes$ XOR$_a$ & ? & $2^{b - 1} - 1/2$ & $2^{b - 1} \ln 2$\\
\hline
\end{tabular}
\caption{We compare the best known lower bounds on the satisfiability threshold to our lower and upper bounds. For $k$-XORSAT \cite{pittel2016satisfiability}, $k$-SAT \cite{ding2015proof}, and $k$-NAESAT \cite{coja2012catching}, the numbers listed are known as exact sharp threshold locations. For the last four, we do not know of existing lower bounds. $\otimes$ is the composition operator for boolean functions, and we define these functions in Section \ref{sec:figfunc}.}
\label{fig:bounds}
\end{figure*}
\section{Main Results}
\label{sec:mainresult}
We provide a simple formula for $r_{f,\text{up}}$. A similar result is in \cite{creignou2003generalized}, and the full proof is in the appendix.
\begin{proposition}
\label{prop:upper}
For all constraint functions $f$, let
\begin{align*}
r_{f,\text{up}} = \frac{\log 2}{\log 1/\hat{f}(\emptyset)} < \frac{\log 2}{1 - \hat{f}(\emptyset)}
\end{align*}
If $r \ge r_{f,\text{up}}$, $\lim_{n \rightarrow \infty} \Pr[C_f(n, rn) \ \text{is satisfiable}] = 0$.
\end{proposition}
\begin{proof} [Proof Sketch]
We compute the expected solution count for $C_f(n, rn)$. The expected solution count will scale with $\hat{f}(\emptyset)$, since $2^k \hat{f}(\emptyset)$ is simply the number of $u \in \{-1, 1\}^k$ where $f(u) = 1$ and therefore governs how easily each constraint will be satisfied. If $r > r_{f,\text{up}}$, the expected solution count converges to 0 as $n \rightarrow \infty$, so Markov's inequality implies that the probability that a solution exists goes to 0.
\end{proof}
Next, we will present our value for $r_{f,\text{low}}$. First, some notation: let $U = \{u \in \{-1, 1\}^k : f(u) = 1\}$, and let $A$ be the $k \times |U|$ matrix whose columns are the elements of $U$. We will use $A^+$ to denote the Moore-Penrose pseudoinverse of $A$. For a reference on this, see \cite{barata2012moore}. Finally, let $\boldsymbol{1}$ be the $|U|$-dimensional vector of 1's.
\begin{example} [$3$-SAT] For $3$-SAT, $k = 3$ and $|U| = 7$, and we can write $A$ as follows (up to permutation of its columns):
\begin{align*}
A =
\left[
\begin{array} {r r r r r r r}
1 & 1 & 1 & 1 & -1 & -1 & -1\\
1 & 1 & -1 & -1 & 1 & 1 & -1\\
1 & -1 & 1 & -1 & 1 & -1 & 1
\end{array}
\right]
\end{align*}
where the columns of $A$ are exactly the assignments satisfying the $3$-SAT constraint function.
\end{example}
The following main theorem provides the \textit{first computable equation for obtaining lower bounds that are specific to the constraint-generating function $f$.}
\begin{theorem}
\label{thm:main}
For all constraint functions $f$, let
\begin{align*}
r_{f,\text{low}} = \frac{1}{2} \frac{c}{1 - c} \text{ where } c = \hat{f}(\emptyset) - \frac{\boldsymbol{1}^T A^+ A \boldsymbol{1}}{2^k}
\end{align*}
If $r < r_{f,\text{low}}$, then there exists a constant $C>0$ such that $\lim_{n \rightarrow \infty} \Pr[C_f(n, rn) \ \text{is satisfiable}] > C$.
\end{theorem}
The lower bound $r_{f,\text{low}}$ is an increasing function of $c$ which is dependent on two quantities. First, with higher values of $\hat{f}(\emptyset)$, $C_f(n, rn)$ will have more satisfying assignments on average, so $c$ and the threshold value will be higher. Second, $c$ depends on the level of symmetry in the solution set of $f$, which we will show is connected to the Fourier spectrum of $f$. We explain this dependence in Section \ref{sec:boundexplained}.
In comparison, \citeauthor{creignou2003generalized} \shortcite{creignou2003generalized} obtain lower bounds which depend only on the arity of $f$. While we cannot make an exact comparison because \citeauthor{creignou2003generalized} use a different random CSP ensemble, for reference, they provide the general lower bound of $1/(ke^k - k)$ expected constraints per variable for functions of arity $k$. Our bounds are much tighter because of their specificity while remaining simple to compute. To demonstrate, we instantiate our bounds for some example constraint functions in Figure \ref{fig:bounds}. Whereas their bounds are exponentially decreasing in $k$, our bounds are constant or increasing in $k$ for the functions shown.
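For concreteness, the following Python sketch evaluates $r_{f,\text{low}}$ from Theorem~\ref{thm:main} by direct transcription of the formula, using \texttt{numpy}'s pseudoinverse; it is only an illustration of how easily the bound can be computed, not part of our analysis.
\begin{verbatim}
import itertools
import numpy as np

def r_low(f, k):
    """r_{f,low} = (1/2) c / (1 - c), c = hat{f}(empty) - 1^T A^+ A 1 / 2^k."""
    U = [u for u in itertools.product((-1, 1), repeat=k) if f(u) == 1]
    A = np.array(U).T                    # k x |U|; columns are the elements of U
    ones = np.ones(len(U))
    c = len(U) / 2 ** k - ones @ np.linalg.pinv(A) @ A @ ones / 2 ** k
    return 0.5 * c / (1 - c)

f_3sat = lambda u: 0 if u == (-1, -1, -1) else 1
f_3nae = lambda u: 0 if len(set(u)) == 1 else 1
print(r_low(f_3sat, 3))   # 2.0 for 3-SAT
print(r_low(f_3nae, 3))   # 1.5 = 2^{k-2} - 1/2, the k-NAESAT entry for k = 3
\end{verbatim}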
\subsection{Constraint Functions in Figure \ref{fig:bounds}}
\label{sec:figfunc}
We define the constraint functions in Figure \ref{fig:bounds}. Unless specified otherwise, they will be in the form $f : \{-1, 1\}^{k} \rightarrow \{0, 1\}$.
\begin{enumerate}
\item $k$-SAT: $f(u) = 0$ if $u$ is the all negative ones vector, and $f(u) = 1$ otherwise.
\item $k$-XORSAT: $f(u) = \mathbbm{1}(\chi_{[k]}(u) = -1)$
\item $k$-NAESAT: $f(u) = 0$ if $u$ is the all negative ones or all ones vector, and $f(u) = 1$ otherwise.
\item $k$-MAJORITY: Defined when $k$ is odd, $f(u) = 1$ if more than half of the variables of $u$ are $1$.
\item $a$-MAJ $\otimes$ $3$-MAJ: Defined when $a$ is odd, where $f : \{-1, 1\}^{3a} \rightarrow \{0, 1\}$. Defined as the composition of $a$-MAJORITY on $a$ groups of $3$-MAJORITY, as follows:
\begin{align*}
f(u_1, \ldots, u_{3a}) =\\ f_{a-\text{MAJ}}(f_{3-\text{MAJ}}(u_1, u_2, u_3), \ldots, \\ f_{3-\text{MAJ}}(u_{3a - 2}, u_{3a -1}, u_{3a}))
\end{align*}
\item $k$-MOD-3: $f(u) = 1$ when the number of 1's in $u$ is divisible by $3$, and 0 otherwise.
\item OR$_b \otimes$ XOR$_a$: In this case, $f : \{-1, 1\}^{ab} \rightarrow \{0, 1\}$, and $f$ is the composition of an OR over $b$ groups of XORs over $a$ variables, as follows:
\begin{align*}
f(u_{1}, \ldots, u_{ab}) =\\
f_{\text{OR}_b}(f_{\text{XOR}_a}(u_1, \ldots, u_a), \ldots,\\
f_{\text{XOR}_{a}}(u_{ab - a + 1}, \ldots, u_{ab}))
\end{align*}
\end{enumerate}
While the last four constraint functions have not been analyzed much in the existing CSP literature, these types of general constraints are of practical interest because of \cite{achim2016beyond}, which performs probabilistic inference by solving CSPs based on arbitrary hash functions. For example, \citeauthor{achim2016beyond} \shortcite{achim2016beyond} show that MAJORITY constraints are effective in practice for solving probabilistic inference problems.
\subsection{Connecting Bounds with Fourier Spectrum}
\label{sec:boundexplained}
We explain how the Fourier spectrum can help us interpret Theorem \ref{thm:main}. We first show the connection between $c$ and the Fourier spectrum.
Let $\hat{f}_{S : |S| = 1}$ be the $k$-dimensional vector whose entries are Fourier coefficients of $f$ for size 1 sets. Let $B$ be the $k \times k$ matrix with diagonal entries $B_{ii} = \hat{f}(\emptyset)$ and off-diagonal entries $B_{ij} = \hat{f}(\{i, j\})$ for $i \ne j$.
\begin{example} [3-SAT]
Following the coefficients in Example \ref{ex:3satfourier}, for $3$-SAT, $\hat{f}_{S : |S| = 1} = (1/8, 1/8, 1/8)$ and
\begin{align*}
B = \left[
\begin{array} {r r r}
7/8 & -1/8 & -1/8\\
-1/8 & 7/8 & -1/8\\
-1/8 & -1/8 & 7/8
\end{array}
\right]
\end{align*}
\end{example}
\begin{lemma}
\label{lem:fourierc}
When the rows of $A$ are linearly independent, $c = \hat{f}(\emptyset) - \hat{f}_{S : |S| = 1}^T B^{-1} \hat{f}_{S : |S| = 1}$.
\end{lemma}
From this lemma, we see that larger values of $c$ correspond to smaller $\hat{f}_{S : |S| = 1}$. These terms will measure the amount of ``symmetry'' in the solution set for $f$.
The matrix $B$ and vectors $\hat{f}_{S : |S| = 1}$ are easily obtained for many $f$ since Fourier coefficients are well-studied \cite{o2014analysis}.
Figure \ref{fig:bounds} shows how $\hat{f}(\emptyset) - c$ and $f$ relate. Since $k$-SAT has a mostly symmetric solution set, we have $c = 1 - 2^{-k} - O(k/2^{2k})$, so $\hat{f}_{k-\text{SAT}}(\emptyset) - c = O(k/2^{2k})$, which is small compared to $\hat{f}_{k-\text{SAT}}(\emptyset)$. The solution set of $k$-NAESAT is completely symmetric: if $f_{k-\text{NAESAT}}(x) = 1$, then $f_{k-\text{NAESAT}}(-x) = 1$. Thus, $f_{k-\text{NAESAT}}$ has 0 weight on Fourier coefficients for sets with odd size, so we can compute that $\hat{f}_{k-\text{NAESAT}}(\emptyset) - c = 0$. $k$-MAJORITY, however, is less symmetric, as shown by its larger first order coefficients. Here, the bound in Figure \ref{fig:bounds} gives
$\lim_{k \rightarrow \infty} \left(\hat{f}_{k-\text{MAJORITY}}(\emptyset) - c\right) = 1/\pi$, which is large compared to $\hat{f}_{k-\text{MAJORITY}}(\emptyset) \approx 1/2$.
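Continuing the $3$-SAT running example, the sketch below evaluates $c$ through the Fourier form of Lemma~\ref{lem:fourierc}; it yields the same value, $c = 0.8$, as the pseudoinverse expression of Theorem~\ref{thm:main}.
\begin{verbatim}
import itertools
import numpy as np

f = lambda u: 0 if u == (-1, -1, -1) else 1    # 3-SAT constraint function
k = 3
cube = list(itertools.product((-1, 1), repeat=k))

# first-order coefficients and the matrix B built from hat{f}(empty), hat{f}({i,j})
f_hat1 = np.array([sum(f(u) * u[i] for u in cube) for i in range(k)]) / 2 ** k
B = np.array([[sum(f(u) * (1 if i == j else u[i] * u[j]) for u in cube) / 2 ** k
               for j in range(k)] for i in range(k)])

c = sum(f(u) for u in cube) / 2 ** k - f_hat1 @ np.linalg.inv(B) @ f_hat1
print(c)   # 0.8, matching the pseudoinverse form
\end{verbatim}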
\section{Proof Strategy}
Our proof relies on the second moment method, which has been applied with great success to achieve lower bounds for problems such as $k$-SAT \cite{achlioptas2004threshold} and $k$-XORSAT \cite{dubois20023}. The second moment method is based on the following lemma, which can be derived using the Cauchy-Schwarz inequality:
\begin{lemma}
\label{lem:secondmomentmethod}
Let $X$ be any real-valued random variable. Then
\begin {equation}
\label{eq:secondmom}
Pr[X \ne 0] = \Pr[|X| \ne 0] \ge \frac{\text{E}[|X|]^2}{\text{E}[X^2]} \ge \frac{\text{E}[X]^2}{\text{E}[X^2]}
\end {equation}
\end{lemma}
If $X$ is only nonzero when $C_f(n, rn)$ has a solution, we obtain lower bounds on the probability that a solution exists by upper bounding $\text{E}[X^2]$. For example, we could let $X$ be the number of solutions to $C_f(n, rn)$. However, as shown in \cite{achlioptas2004threshold}, this choice of $X$ fails in most cases. Whether two different assignments satisfy $C_f(n, rn)$ is correlated: if the assignments are close in Hamming distance and one assignment is satisfying, it is more likely that the other is satisfying as well. This will make $\text{E}[X^2]$ much larger than $\text{E}[X]^2$, so \eqref{eq:secondmom} will not provide useful information. Figure \ref{fig:failure} demonstrates this failure for $k$-SAT. \citeauthor{achlioptas2004threshold} \shortcite{achlioptas2004threshold} show formally that the ratio $\text{E}[X]^2/\text{E}[X^2]$ will decrease exponentially (albeit at a slow rate). On the other hand, $k$-NAESAT is ``symmetric'', so the second moment method works directly here. In the plot, $\text{E}[X]^2/\text{E}[X^2]$ for $3$-NAESAT stays above a constant. This also follows formally from our main theorem as well as \cite{achlioptas2002asymptotic}. We formally define our requirements on symmetry in \eqref{eq:derivhalf0}.
We circumvent this issue by weighting solutions to reduce correlations before applying the second moment method. As in \cite{achlioptas2004threshold}, we use a weighting which factors over constraints in $C_f(n, rn)$ and apply the second moment method to the random variable
\begin{equation}
\label{eq:X}
X = \sum_{\sigma \in \{-1, 1\}^n} \prod_{c \in C_f(n, rn)}w(\sigma, c)
\end{equation}
where $C_f(n, rn)$ is a collection of constraints $\{f_{I_1, s_1}, \ldots, f_{I_m, s_m}\}$ and the randomness in $X$ comes over the choices of $I_j, s_j$. Now we can restrict our attention to constraint weightings of the form $w(\sigma, f_{I, s}) = w(\sigma_{I, s})$. In the special case where $w(\sigma_{I, s}) = f(\sigma_{I, s})$, $X$ will simply represent the number of solutions to $C_f(n, rn)$. In general, we require $w(\sigma_{I, s}) = 0$ whenever $f(\sigma_{I, s}) = 0$. This way, if $X \ne 0$, then $C_f(n, rn)$ must have a solution.
For convenience, we assume that the index sets $I_1, \ldots, I_m$ are sampled with replacement. They are chosen uniformly from $[n]^k$. We also allow constraints to be identical. In the appendix, we justify why proofs in this setting carry over to the without-replacement setting in Section \ref{sec:randcsp} and also provide full proofs to the lemmas presented below.
\begin{figure*}
\centering
\subfloat[$\text{E}{[X]}^2/\text{E}{[X^2]}$ vs. $n$ for $3$-SAT and $3$-NAESAT. $X$ is the solution count, $r = 1$.]
{\includegraphics[width=0.32\textwidth]{naesat_sat_plot_n_max_600_k_3_r_1_num_n_30_n_min_1.png}
\label{fig:failure}
}
\subfloat[$g_w(\alpha)$ when $f$ is $k$-XORSAT and $w = f$.]{\includegraphics[width=0.32\textwidth]{g_w_xor_plot_k_max_12_k_min_4.png}
\label{fig:g_w}}
\subfloat[$\psi_r(\alpha)$ with $f$ as $5$-NAESAT and $w = f$.]{\includegraphics[width=0.32\textwidth]{psi_r_plot_num_r_3_k_5_r_max_13_0_r_min_1_0.png}
\label{fig:psi}}
\caption{For concreteness, we provide sample plots of the relevant quantities in our proofs.}
\end{figure*}
In this setting, we will compute the first and second moments of the $X$ chosen in \eqref{eq:X} in terms of the Fourier spectrum of $w$.
\begin{lemma}
\label{lem:firstmomentsq}
The squared first moment of $X$ is given by
\begin{equation}
\label{eq:E[X]sq}
\text{E}[X]^2 = 2^{2n}(\hat{w}(\emptyset)^2)^{rn}
\end{equation}
\end{lemma}
\begin{proof}
We can expand $\text{E}[X]$ as follows:
\begin{align}
\text{E}[X] &= \sum_{\sigma \in \{-1, 1\}^n} \text{E}\left[\prod_{j = 1}^{rn}w(\sigma_{I_j, s_j})\right] \notag \\
&= \sum_{\sigma \in \{-1, 1\}^n} \text{E}[w(\sigma_{I, s})]^{rn} \label{eq:EXint}
\end{align}
where we used the fact that constraints are chosen independently. Now we claim that for any $u \in \{-1, 1\}^k$,
\begin{align*}
\Pr[\sigma_{I, s} = u] = \frac{1}{2^k}
\end{align*}
This follows from the fact that we choose $s$ uniformly over $\{-1, 1\}^k$ and our definition of $\sigma_{I, s}$ in \eqref{eq:sigmaIs}. Thus,
\begin{align*}
\text{E}[w(\sigma_{I, s})] &= \sum_{u \in \{-1, 1\}^k} w(u)\Pr[\sigma_{I, s} = u]\\
&= \frac{1}{2^k}\sum_{u \in \{-1, 1\}^k} w(u)\\
&= \hat{w}(\emptyset)
\end{align*}
Plugging back into \eqref{eq:EXint} gives the desired result.
\end{proof}
Next, we will compute the second moment $\text{E}[X^2]$.
\begin{lemma}
\label{lem:secmoment}
Let $g_w(\alpha) = \sum_{S \subseteq [k]} (2\alpha - 1)^{|S|} \hat{w}(S)^2$. The second moment of $X$ is given by
\begin{equation}
\label{eq:E[Xsq]}
\text{E}[X^2] = 2^n \sum_{j = 0}^n {n \choose j} g_w(j/n)^{rn}
\end{equation}
\end{lemma}
The function $g_w(\alpha)$ is similar to the noise sensitivity of a boolean function \cite{o2003computational} and measures the correlation in the value of $w$ between two assignments $\sigma$, $\tau$ which overlap at $\alpha(\sigma, \tau) n$ locations. As a visual example, Figure \ref{fig:g_w} shows how $g_w(\alpha)$ changes for $k$-XORSAT with varying $k$. The key step of our proof is showing that $\text{E}[w(\sigma_{I, s})w(\tau_{I, s})] = g_w(\alpha(\sigma, \tau))$ for a random constraint $f_{I, s}$.
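The following Python sketch, again a brute-force illustration rather than part of the proof, evaluates $g_w(\alpha)$ from its definition, here with $w$ taken to be the $k$-XORSAT constraint function plotted in Figure~\ref{fig:g_w}.
\begin{verbatim}
import itertools
import numpy as np

def g_w(w, k, alpha):
    """g_w(alpha) = sum_S (2*alpha - 1)^{|S|} * hat{w}(S)^2 (brute force over S)."""
    cube = list(itertools.product((-1, 1), repeat=k))
    total = 0.0
    for r in range(k + 1):
        for S in itertools.combinations(range(k), r):
            w_hat = sum(w(u) * np.prod([u[i] for i in S]) for u in cube) / 2 ** k
            total += (2 * alpha - 1) ** len(S) * w_hat ** 2
    return total

f_xor = lambda u: 1 if np.prod(u) == -1 else 0   # k-XORSAT constraint, k = 3 here
print(g_w(f_xor, 3, 0.5))                        # 0.25 = hat{w}(empty)^2
print(g_w(f_xor, 3, 1.0))                        # 0.50 = sum of squared coefficients
\end{verbatim}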
We will now write $\text{E}[X]^2$ in terms of $g_w$. Since $g_w(1/2) = \hat{w}(\emptyset)^2$, plugging this into \eqref{eq:E[X]sq} gives us
\begin{equation}
\label{eq:e[X]sqing}
\text{E}[X]^2 = 2^{2n} g_w(1/2)^{rn}
\end{equation}
This motivates us to apply the following lemma from \cite{achlioptas2004threshold}, which will allow us to translate bounds on $g_w(\alpha)$ into bounds on $\text{E}[X]^2/\text{E}[X^2]$:
\begin{lemma}
\label{lem:achlioptaslem}
Let $\phi$ be any real, positive, twice-differentiable function on $[0, 1]$ and let
\begin{align*}
S_n = \sum_{j = 0}^{n} {n \choose j} \phi(j/n)^n
\end{align*}
Define $\psi$ on $[0, 1]$ as $\psi(\alpha) = \frac{\phi(\alpha)}{\alpha^{\alpha}(1 - \alpha)^{1 - \alpha}}$.
If there exists $\alpha_{\max} \in (0, 1)$ such that $\psi(\alpha_{\max}) > \psi(\alpha)$ for all $\alpha \ne \alpha_{\max}$, and $\psi''(\alpha_{\max}) < 0$, then there exist constants $B, C > 0$ such that for all sufficiently large $n$,
\begin{equation}
\label{eq:lemachlioptasres}
B \psi(\alpha_{\max})^n \le S_n \le C \psi(\alpha_{\max})^n
\end{equation}
\end{lemma}
To apply the lemma, we can define $\phi_r(\alpha) = g_w(\alpha)^r$ and $\psi_r(\alpha) = \frac{\phi_r(\alpha)}{\alpha^{\alpha}(1 - \alpha)^{1 - \alpha}}$. Then from \eqref{eq:e[X]sqing}, we note that
\begin{equation}
\psi_r(1/2)^n = 2^n(g_w(1/2)^r)^n = \text{E}[X]^2/2^n
\label{eq:psiE[X]sq}
\end{equation}
On the other hand, from \eqref{eq:E[Xsq]},
\begin{equation}
\sum_{j = 0}^n {n \choose j} \phi_r(j/n)^n = \text{E}[X^2]/2^n
\label{eq:psiE[Xsq]}
\end{equation}
so if the conditions of Lemma \ref{lem:achlioptaslem} hold for $\alpha_{\max} = 1/2$, we recover that $\text{E}[X]^2/\text{E}[X^2] \ge C$ for some constant $C > 0$.
One requirement for $\psi_r(\alpha)$ to be maximized at $\alpha = 1/2$ is that $\psi_r'(1/2) = 0$. Expanding $\psi_r'(1/2)$ gives $2g_w(1/2)^{r - 1}(rg_w'(1/2)) = 0$.
Since $g_w(1/2) = \hat{w}(\emptyset)^2 > 0$, we thus require
\begin{equation}
\label{eq:derivhalf0}
g_w'(1/2) = 2\sum_{S \subseteq [k] : |S| = 1} \hat{w}(S)^2 = 0
\end{equation}
In order to satisfy \eqref{eq:derivhalf0}, we need $\hat{w}(S) = 0$ for all $S \subseteq [k]$ where $|S| = 1$. To use Lemma \ref{lem:achlioptaslem}, we would like to choose $w$ such that \eqref{eq:derivhalf0} holds. We discuss how to choose $w$ to optimize our lower bounds in the appendix. In the next section, we will provide $r$ so that the conditions of Lemma \ref{lem:achlioptaslem} hold at $\alpha = 1/2$ for arbitrary $w$ when \eqref{eq:derivhalf0} is satisfied.
\subsection{Bounding the Second Moment For Fixed $w$}
We give a general bound on $r$ in terms of our weight function $w$ so that the conditions of Lemma \ref{lem:achlioptaslem} are satisfied for $\alpha = 1/2$. For now, the only constraint we place on $w$ is that \eqref{eq:derivhalf0} holds. The next lemma lets us consider only $\alpha \in [1/2, 1]$.
\begin{lemma}
\label{lem:alphagehalf}
Let $\alpha \ge 1/2$. Then $g_w(\alpha) \ge g_w(1 - \alpha)$.
\end{lemma}
This lemma follows because $g_w(\alpha)$ is a polynomial in $(2\alpha - 1)$ with nonnegative coefficients, and $(2\alpha - 1) > 0$ for $\alpha > 1/2$.
Now we can bound $\psi_r(\alpha)$ for $\alpha \in [1/2, 1]$. Combined with Lemma \ref{lem:alphagehalf}, the next lemma will give conditions on $r$ such that $\psi_r(1/2) > \psi_r(\alpha)$ for all $\alpha \in [0, 1]$.
\begin{lemma}
\label{lem:rbound}
Let the weight function $w$ satisfy \eqref{eq:derivhalf0}. If
\begin{equation}
\label{eq:rcond}
r \le \frac{1}{2}\frac{\hat{w}(\emptyset)^2}{\sum_{S : |S| \ge 2} \hat{w}(S)^2}
\end{equation}
$\psi_r(1/2) > \psi_r(\alpha)$ for $\alpha \in [0, 1]$ and $\psi_r''(1/2) < 0$.
\end{lemma}
Figure \ref{fig:psi} shows how $r$ controls the shape of the function $\psi_r(\alpha)$. As $r$ increases, $\psi_r''(1/2)$ becomes positive and $\psi_r(\alpha)$ no longer attains a local maximum in that region. The key step in proving Lemma \ref{lem:rbound} is rearranging $\psi_r(1/2) > \psi_r(\alpha)$ and simplifying the calculations using approximations for the logarithmic terms that appear.
Our bound on $r$ compares the average of $w$ over $\{-1, 1\}^k$ with the correlations between $w$ and the Fourier basis functions. If $w$ has strong correlations with the other Fourier basis functions, two assignments which are equal at $\alpha n$ variables will likely either be both satisfying or both not satisfying as $\alpha$ approaches $1$. This increases $\text{E}[X^2]$ but not $\text{E}[X]^2$ and makes Lemma \ref{lem:secondmomentmethod} provide a trivial bound if $r$ is too large. Thus, if $w$ has strong correlations with the Fourier basis functions, we must choose smaller $r$ as reflected by \eqref{eq:rcond}.
To get the tightest bounds, we wish to maximize the expression in \eqref{eq:rcond}. Although we prove our lemma for general $w$ requiring only \eqref{eq:derivhalf0}, we also need $w(u) = 0$ whenever $f(u) = 0$ to apply our lemma to satisfiability. Recalling our definition of $X$ in \eqref{eq:X}, this condition ensures that $C_f(n, rn)$ has a solution whenever $X \ne 0$. Thus,
\begin{equation}
\label{eq:wlambdaf}
w(u) = \lambda(u) f(u)
\end{equation}
for some $\lambda : \{-1, 1\}^k \rightarrow \mathbb{R}$. If we disregard \eqref{eq:derivhalf0}, choosing $\lambda(u) = 1$ would maximize the bound on $r$ in \eqref{eq:rcond}. The additional requirement of \eqref{eq:derivhalf0} for the second moment method to succeed can be viewed as a ``symmetrization penalty'' on $r$. In the appendix, we discuss how to choose $w$ to optimize our bound on $r$ while satisfying \eqref{eq:derivhalf0} and \eqref{eq:wlambdaf}.
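As a small numerical illustration (ours, not part of the formal development), the bound in \eqref{eq:rcond} can be evaluated directly from the Fourier coefficients; the hard-coded dictionary below corresponds to the plain, unoptimized choice $w = f$ for 3-XORSAT, for which the bound evaluates to $1/2$.
\begin{verbatim}
def density_bound(coeffs):
    # RHS of the condition on r: (1/2) * w_hat(empty)^2 / sum_{|S|>=2} w_hat(S)^2,
    # valid when w_hat(S) = 0 for every singleton S (the symmetrization condition)
    numer = coeffs[()] ** 2
    denom = sum(c ** 2 for S, c in coeffs.items() if len(S) >= 2)
    return 0.5 * numer / denom

# Plain choice w = f for 3-XORSAT: w_hat(empty) = w_hat({1,2,3}) = 1/2
print(density_bound({(): 0.5, (0, 1, 2): 0.5}))  # -> 0.5
\end{verbatim}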
\begin{figure*}
\centering
\subfloat[$a=2,b=4$]{
\includegraphics[scale=0.32]{tribes_num_r_20_n_30_num_groups_4_num_trials_50_group_size_2}}
\subfloat[$a=2,b=5$]{
\includegraphics[scale=0.32]{tribes_num_r_20_n_20_num_groups_5_num_trials_50_group_size_2}}
\caption{Proportion of CSPs satisfiable out of 50 trials vs. $r$ for tribes functions. We show our bounds for reference.}
\label{fig:experiments}
\end{figure*}
\subsection{Proving the Main Theorem}
\label{sec:mainproof}
We will combine our lemmas to prove Theorem \ref{thm:main}.
\begin{proof} [Proof of Theorem \ref{thm:main}]
We wish to apply the second moment method on $X$ defined in \eqref{eq:X}, where $w$ is a function we use to weigh assignments to individual constraints. We choose $w$ as described in the full version of the paper, which satisfies both \eqref{eq:derivhalf0} and \eqref{eq:wlambdaf}. Since $w$ satisfies \eqref{eq:wlambdaf}, $\Pr[C_f(n, rn) \ \text{is satisfiable}] \ge \Pr[X \ne 0] \ge \text{E}[X]^2/\text{E}[X^2]$
by Lemma \ref{lem:secondmomentmethod}. Now we will use Lemma \ref{lem:rbound} to show that the conditions for Lemma \ref{lem:achlioptaslem} are satisfied for $\phi_r = g_w(\alpha)^r$ and $r$ satisfying \eqref{eq:rcond}.
For our choice of $w$, it follows from the derivations in the appendix that the condition \eqref{eq:rcond} becomes
\begin{align*}
r < r_{f,\text{low}} = \frac{1}{2} \frac{c}{1 - c} \ \text{where} \ c = \hat{f}(\emptyset) - \frac{\boldsymbol{1}^T A^+ A \boldsymbol{1}}{2^k}
\end{align*}
where $A$ is defined in Section \ref{sec:mainresult}. There is a slight technicality in directly applying Lemma \ref{lem:achlioptaslem} because $\phi_r$ might not be nonnegative for $\alpha < 1/2$; we discuss this in the appendix. Now using \eqref{eq:psiE[X]sq} and \eqref{eq:psiE[Xsq]}, and applying Lemma \ref{lem:achlioptaslem}, we can conclude that there exists $C > 0$ such that
\begin{align*}
\Pr[C_f(n, rn) \ \text{is satisfiable}] \ge \text{E}[X]^2/\text{E}[X^2] \ge C
\end{align*}
for sufficiently large $n$ and all $r < r_{f,\text{low}}$.
\end{proof}
There remains a question of what $r_{f,\text{low}}$ we can hope to achieve using a second moment method proof where $X$ is defined as in \eqref{eq:X}. The following lemma provides some intuition for this:
\begin{lemma}
\label{lem:maxr}
In order for the conditions of Lemma \ref{lem:achlioptaslem} to hold at $\alpha_{\max} = 1/2$ for $X$ in the form of \eqref{eq:X} and any choice of $w$ satisfying \eqref{eq:wlambdaf}, we require
\begin{equation}
\label{eq:maxr}
r < \log 2 / \log \frac{1}{\hat{f}(\emptyset) - \frac{\boldsymbol{1}^T A^+ A \boldsymbol{1}}{2^k}} \le r_{f,\text{up}} = \frac{\log 2}{\log \frac{1}{\hat{f}(\emptyset)}}
\end{equation}
\end{lemma}
The subtraction of $\boldsymbol{1}^T A^+ A \boldsymbol{1}/2^k$ inside the logarithm, compared to $r_{f,\text{up}}$, can be viewed as a ``symmetrization penalty'' necessary for our proof to work.
While Lemma \ref{lem:maxr} does not preclude applications of the second moment method that do not rely on Lemma \ref{lem:achlioptaslem}, consider what happens for $r$ that do not satisfy \eqref{eq:maxr}. For these $r$, the function $\psi_r(\alpha)$ must obtain a maximum at some $\alpha^* \in [0, 1], \alpha^* \ne 1/2$. If it also happens that $\phi_r(\alpha)$ is nonnegative and twice differentiable on $[0, 1]$, and $\psi_r''(\alpha^*) < 0$, then conditions of Lemma \ref{lem:achlioptaslem} hold, and applying it to $\alpha_{\max} = \alpha^*$ along with \eqref{eq:lemachlioptasres}, \eqref{eq:psiE[X]sq}, and \eqref{eq:psiE[Xsq]} will actually imply that
\begin{align*}
\frac{\text{E}[X]^2}{\text{E}[X^2]} \le \frac{1}{B} \left(\frac{\psi_r(1/2)}{ \psi_r(\alpha^*)}\right)^n
\end{align*}
for some constant $B > 0$, which gives us an exponentially decreasing, and therefore trivial lower bound for the second moment method. Therefore, we believe that \eqref{eq:maxr} is near the best lower bound on $r_{f,\text{unsat}}$ that we can achieve by applying the second moment method on $X$ in the form of \eqref{eq:X}.
\section{Experimental Verification of Bounds}
\label{sec:experiments}
We empirically test our bounds with the goal of examining their tightness. For our constraint functions, we use tribes functions. The tribes function takes the disjunction of $b$ conjunctions, each over a group of $a$ variables, and evaluates to 1 or 0 according to whether the following formula is true:
\begin{align*}
\text{TRIBES}_{a, b}(x_1, \ldots, x_{ab}) = \vee_{i = 0}^{b - 1} \left(\wedge_{j = 1}^a x_{ia + j}\right)
\end{align*}
where $+1$ denotes true and $-1$ denotes false. For our experiments, we randomly generate CSP formulas based on $\text{TRIBES}_{a, b}$. We use the Dimetheus\footnote{https://www.gableske.net/dimetheus} random CSP solver to solve these formulas, or report if no solution exists. We show our results in Figure \ref{fig:experiments}. As expected, our values for lower bounds $r_{f,\text{low}}$ are looser than the upper bounds $r_{f,\text{up}}$.
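For reproducibility, the sketch below (with our own helper names; the actual experiments pass the instances to Dimetheus rather than using the brute-force check shown here) evaluates $\text{TRIBES}_{a,b}$ and draws a random instance of $C_f(n, rn)$ by sampling an index tuple $I$ and a sign pattern $s$ for each constraint.
\begin{verbatim}
import random

def tribes(a, b, u):
    # TRIBES_{a,b}: disjunction of b conjunctions over groups of a entries;
    # +1 denotes true and -1 denotes false
    return any(all(u[i * a + j] == 1 for j in range(a)) for i in range(b))

def random_instance(n, r, a, b, rng=random):
    # m = rn constraints; each draws k = ab variable indices I and signs s
    k, m = a * b, round(r * n)
    return [([rng.randrange(n) for _ in range(k)],
             [rng.choice((-1, 1)) for _ in range(k)]) for _ in range(m)]

def satisfied(sigma, instance, a, b):
    # sigma in {-1, 1}^n satisfies the instance if every constraint is true
    return all(tribes(a, b, [s_i * sigma[i] for i, s_i in zip(I, s)])
               for I, s in instance)
\end{verbatim}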
\section{Conclusion}
Using Fourier analysis and the second moment method, we have shown general bounds on $m/n$, the ratio of constraints to variables; for $m/n$ below these bounds, there is constant probability that a random CSP is satisfiable. We demonstrate that our bounds are easily instantiated and can be applied to obtain novel estimates of the satisfiability threshold for many classes of CSPs. Our bounds depend on how easy it is to symmetrize solutions to the constraint function. We provide a heuristic argument to approximate the best possible lower bounds that our application of the second moment method can achieve; these bounds differ from upper bounds on the satisfiability threshold by a ``symmetrization penalty.'' Thus, an interesting direction of future research is to determine whether we can provide tighter upper bounds that account for symmetrization, or whether symmetrization terms are an artificial product of the second moment method.
\section{Acknowledgements}
This work was supported by the Future of Life Institute
(grant 2016-158687) and by the National Science Foundation (grant 1649208).
\bibliographystyle{aaai}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{T}{he} Capacitated Vehicle Routing Problem (CVRP) is a classical combinatorial optimization problem, which aims to optimize the routes for a fleet of vehicles with capacity constraints to serve a set of customers with demands. Compared with the assumption of multiple identical vehicles in homogeneous CVRP, the setting of vehicles with different capacities (or speeds) is more in line with real-world practice, which leads to the heterogeneous CVRP (HCVRP)~\cite{golden1984fleet,kocc2015hybrid}. According to the objective, CVRP can also be classified into min-max and min-sum variants. The former objective requires that the longest (worst-case) travel time (or distance) of any vehicle in the fleet should be as small as possible, since fairness is crucial in many real-world applications\cite{wang2016min,duran2011pre,bertazzi2015min,ma2010min,alabas2008self,hashi2016gis,liu2006optimization}, and the latter objective aims to minimize the total travel time (or distance) incurred by the whole fleet~\cite{haimovich1985bounds,lysgaard2004new,szeto2011artificial,wang2017vehicle}. In this paper, we study the problem of HCVRP with both min-max and min-sum objectives, i.e., MM-HCVRP and MS-HCVRP.
Conventional methods for solving HCVRP include exact and heuristic ones. Exact methods usually adopt branch-and-bound or its variants as the framework and perform well on small-scale problems~\cite{baldacci2009unified,lysgaard2004new,francca1995m,haimovich1985bounds}, but may consume prohibitively long time on large-scale ones given the exponential computation complexity. Heuristic methods usually exploit certain hand-engineered searching rules to guide the solving processes, which often consume much shorter time and are more desirable for large-scale
problems in reality~\cite{mostafa2017solving,li2010adaptive,szeto2011artificial,yakici2017heuristic}. However, such hand-engineered rules largely rely on human experience and domain knowledge, thus might be incapable of engendering solutions with high quality.
Moreover, both conventional exact and heuristic methods solve problem instances independently, and thus fail to exploit the patterns that are potentially shared among instances.
Recently, researchers have turned to deep reinforcement learning (DRL) to automatically learn the searching rules in heuristic methods for solving routing problems including CVRP and TSP~\cite{bello2017neural,xin2021multi,nazari2018reinforcement,kool2018attention,chen2019learning,li2021heterogeneous}, by discovering the underlying patterns from a large number of instances. Generally, those DRL models are categorized into two classes, i.e., construction and improvement methods. Starting with an empty solution, the former constructs a solution by sequentially assigning each customer to a vehicle until all customers are served. Starting with a complete initial solution, the latter selects either candidate nodes (customers or depot) or heuristic operators, or both, to improve and update the solution at each step, and this process is repeated until termination. By further leveraging advanced deep learning architectures such as the attention mechanism to guide the selection,
those DRL models are able to efficiently generate solutions with much higher quality compared to conventional heuristics. However, existing works only focus on solving the homogeneous CVRP, which intrinsically copes with vehicles of the same characteristics, in the sense that the complete route of the fleet could be derived by repeatedly dispatching a single vehicle. Consequently, the key in those works is to solely select the next node to visit without selecting a vehicle, since there is essentially only one vehicle. Evidently, those works would be far less effective when applied to solve the more practical HCVRP, given the following issues: 1) The assumption of homogeneous vehicles is unable to capture the discrepancy among vehicles; 2) The vehicle selection is not explicitly considered, although it should be of equal importance to the node selection in HCVRP; 3) The contextual information in the attention scheme is insufficient as it lacks the states of other vehicles and of the (partially) constructed routes, which may render it incapable of engendering high-quality solutions in view of the complexity of HCVRP.
In this paper, we aim to solve the HCVRP with both min-sum and min-max objectives while emphasizing on addressing the aforementioned issues. We propose a novel neural architecture integrated with the attention mechanism to improve the DRL based construction method, which combines the decision-making for vehicle selection and node selection together to engender solutions of higher quality. Different from the existing works that construct the routes for each vehicle of the homogeneous fleet in sequence, our policy network is able to automatically and flexibly select a vehicle from a heterogeneous fleet at each step. In specific, our policy network adopts a Transformer-style \cite{vaswani2017attention} encoder-decoder structure, where the decoder consists of two parts, i.e., vehicle selection decoder and node selection decoder, respectively.
With the problem features (i.e., customer location, customer demand and vehicle capacity) processed by the encoder for better representation, the policy network first selects a vehicle from the fleet using the vehicle selection decoder based on the states of all vehicles and partial routes, and then selects a node for this vehicle using the node selection decoder at each decoding step. This process is repeated until all customers are served.
Accordingly, the major contribution of this paper is that we present a deep reinforcement learning method to solve CVRP with multiple heterogeneous vehicles, which is intrinsically different from the homogeneous setting in existing works, as the latter does not involve selecting vehicles from a fleet. Specifically, we propose an effective neural architecture that integrates vehicle selection and node selection together, with rich contextual information for selection among the heterogeneous vehicles, where every vehicle in the fleet has the chance to be selected at each step. We test both min-max and min-sum objectives with various numbers of vehicles and customers. Results show that our method is superior to most conventional heuristics and competitive with the state-of-the-art heuristic (i.e., SISR) with much shorter computation time. With comparable computation time, our method achieves much better solution quality than the other DRL method. In addition, our method generalizes well to problems with larger customer sizes.
The remainder of the paper is organized as follows. Section \ref{sec:related work} briefly reviews conventional methods and deep models for routing problems. Section \ref{sec:problem descriptiton} introduces the mathematical formulation of MM-HCVRP and MS-HCVRP and the reformulation in the RL (reinforcement learning) manner. Section \ref{sec:method} elaborates our DRL framework. Section \ref{sec:experiments} provides the computational experiments and analysis. Finally, Section \ref{sec:conclusion} concludes the paper and presents future works.
\section{Related Works}
\label{sec:related work}
In this section, we briefly review the conventional methods for solving HCVRP with different objective functions, and deep models for solving the general VRPs.
The heterogeneous CVRP (HCVRP) was first studied in~\cite{golden1984fleet}, where the Clarke and Wright procedure and partition algorithms were applied to generate the lower bound and estimate the optimal solution. An efficient constructive heuristic was adopted to solve HCVRP in~\cite{prins2002efficient} by merging small initial trips for each customer into a complete one, which was also capable of handling multi-trip cases.
Baldacci and Mingozzi~\cite{baldacci2009unified} presented a unified exact method to solve HCVRP, reducing the number of variables by using three bounding procedures.
Feng et al.~\cite{feng2019solving} proposed a novel evolutionary multitasking algorithm to tackle the HCVRP with time windows and occasional drivers, which can also solve multiple optimization tasks simultaneously.
The CVRP with min-sum objective was first proposed by Dantzig and Ramser~\cite{dantzig1959truck}, and can be viewed as a generalization of the Travelling Salesman Problem (TSP) with capacity constraints.
To address the large-scale multi-objective optimization problem (MOP), a competitive swarm optimizer (CSO) based search method was proposed in~\cite{tian2019efficient, cheng2014competitive, cheng2016test}, which conceived a new particle updating strategy to improve the search accuracy and efficiency. By transforming the large-scale CVRP (LSCVRP) into a large-scale MOP, an evolutionary multi-objective route grouping method was introduced in~\cite{xiao2019evolutionary}, which employed a multi-objective evolutionary algorithm to decompose the LSCVRP into small tractable sub-components.
The min-max objective was considered in a multiple Travelling Salesman Problem (TSP)~\cite{francca1995m}, which was solved by a tabu search heuristic and two exact search schemes.
An ant colony optimization method was proposed to address the min-max Single Depot CVRP (SDCVRP)~\cite{narasimha2011ant}.
The problem was further extended to the min-max multi-depot CVRP~\cite{narasimha2013ant}, which could be reduced to SDCVRP using an equitable region partitioning approach.
A swarm intelligence based heuristic algorithm was presented to address the rich min-max CVRP~\cite{yakici2017heuristic}.
The min-max cumulative capacitated vehicle routing problem, aiming to minimize the last arrival time at customers, was first studied in~\cite{sze2017cumulative,golden1997adaptive}, where a two-stage adaptive variable neighbourhood search (AVNS) algorithm was introduced and also tested in min-sum objective to verify generalization.
The first deep model for routing problems is the Pointer Network, which used supervised learning to solve TSP \cite{vinyals2015pointer} and was later extended to reinforcement learning \cite{bello2017neural}. Afterwards, the Pointer Network was adopted to solve CVRP in \cite{nazari2018reinforcement}, where the Recurrent Neural Network architecture in the encoder was removed to reduce computation complexity without degrading solution quality. To further improve the performance, a Transformer based architecture was incorporated by integrating self-attention in both the encoder and decoder~\cite{kool2018attention}. Different from the above methods which learn constructive heuristics, NeuRewriter was proposed to learn how to pick the next solution in a local search framework~\cite{chen2019learning}.
Despite their promising results, these methods are less effective for tackling the heterogeneous fleet in HCVRP. Recently, some learning based methods have been proposed to solve HCVRP. Inspired by multi-agent RL, Vera and Abad \cite{vera2019deep} made the first attempt to solve the min-sum HCVRP through cooperative actions of multiple agents for route construction. Qin et al. \cite{qin2021novel} proposed a reinforcement learning based controller to select among several meta-heuristics with different characteristics to solve min-sum HCVRP. Although yielding better performance than conventional heuristics, they are unable to well handle either the min-max objective or heterogeneous speed of vehicles.
\section{Problem Formulation}
\label{sec:problem descriptiton}
In this section, we first introduce the mathematical formulation of HCVRP with both min-max and min-sum objectives, and then reformulate it as the form of reinforcement learning.
\subsection{Mathematical Formulation of HCVRP}
Particularly, with $n+1$ nodes (customers and depot) represented as $ X=\{ x^i \}_{i=0}^n $ and node $x^0$ denoting the depot, the customer set is $X' = X\setminus \{x^0\}$. Each node $x^i \in \mathbb{R}^3$ is defined as $(s^i, d^i)$, where $s^i$ contains the 2-dim location coordinates of node $x^i$, and $d^i$ refers to its demand (the demand of the depot is 0). Here, we take heterogeneous vehicles with different capacities into account, which respects real-world situations. Accordingly, let $ V=\{ v^i \}_{i=1}^m$ represent the heterogeneous fleet of vehicles, where each element $v^i$ is characterized by its capacity $\mathcal{Q}^i$.
The HCVRP problem describes a process that all fully loaded vehicles start from depot, and sequentially visit the locations of customers to satisfy their demands, with the constraints that each customer can be visited exactly once, and the loading amount for a vehicle during a single trip can never exceed its capacity.
Let $D(x^i,x^j)$ be the Euclidean distance between $x^i$ and $x^j$. Let $y_{ij}^v$ be a binary variable, which equals 1 if vehicle $v$ travels directly from customer $x^i$ to $x^j$, and 0 otherwise. Let $l_{ij}^v$ be the remaining capacity of vehicle $v$ before travelling from customer $x^i$ to customer $x^j$. For simplicity, we assume that all vehicles have the same speed $f$, which could easily be extended to take different values. Then, the MM-HCVRP is naturally defined as follows,
\begin{equation}
\min\ \max_{v\in V}(\sum_{i \in X} \sum_{j \in X} \frac{D(x^i,x^j)}{f} y_{ij}^v).
\end{equation}
subject to the following six constraints,
\begin{align}
\sum_{v \in V}\sum_{j \in X} y_{ij}^v = 1,\qquad i \in X'
\label{constraint1}
\end{align}
\begin{align}
\sum_{i \in X} y_{ij}^v - \sum_{k \in X} y_{jk}^v = 0,\qquad v\in V, j \in X'
\label{constraint2}
\end{align}
\begin{align}
\sum_{v \in V} \sum_{i \in X} l_{ij}^v - \sum_{v \in V} \sum_{k \in X} l_{jk}^v = d^j,\qquad j \in X'
\label{constraint3}
\end{align}
\begin{align}
d^j y_{ij}^v\leq l_{ij}^v \leq \left(\mathcal{Q}^v - d^i\right) \cdot y_{ij}^v,\qquad v\in V,i\in X, j \in X
\label{constraint4}
\end{align}
\begin{align}
y_{ij}^v = \left\{ 0,1 \right\},\qquad v\in V,i\in X, j \in X
\label{constraint5}
\end{align}
\begin{align}
l_{ij}^v \geq 0,\ d^i \geq 0, \qquad v\in V,i\in X, j \in X.
\label{constraint6}
\end{align}
The objective of the formulation is to minimize the maximum travel time among all vehicles. Constraints (\ref{constraint1}) and (\ref{constraint2}) ensure that each customer is visited exactly once and each route is completed by the same vehicle. Constraint (\ref{constraint3}) guarantees that the difference between the amount of goods loaded by a vehicle before and after serving a customer equals the demand of that customer. Constraint (\ref{constraint4}) enforces that the amount of goods carried by any vehicle is able to meet the demands of the corresponding customers and never exceeds its capacity. Constraint (\ref{constraint5}) defines the binary variable and constraint (\ref{constraint6}) imposes the non-negativity of the variables.
The MS-HCVRP shares the same constraints with MM-HCVRP, while the objective is formulated as follows,
\begin{equation}
\text{min} \sum_{v\in V}\sum_{i \in X} \sum_{j \in X} \frac{D(x^i, x^j)}{f^v} y_{ij}^v,
\end{equation}
where $f^v$ represents the speed of vehicle $v$, and it may vary with different vehicles. Thereby, it is actually minimizing the total travel time incurred by the whole fleet.
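As a minimal illustration (ours; the route data structure, a list of node indices per vehicle starting and ending at the depot, is hypothetical), both objectives can be evaluated from a candidate solution as follows:
\begin{verbatim}
import math

def travel_time(route, coords, speed):
    # total travel time of one vehicle over its route (list of node indices)
    return sum(math.dist(coords[route[i]], coords[route[i + 1]])
               for i in range(len(route) - 1)) / speed

def min_max_objective(routes, coords, speeds):
    # MM-HCVRP: longest travel time over the fleet
    return max(travel_time(rt, coords, f) for rt, f in zip(routes, speeds))

def min_sum_objective(routes, coords, speeds):
    # MS-HCVRP: total travel time of the whole fleet (speeds may differ)
    return sum(travel_time(rt, coords, f) for rt, f in zip(routes, speeds))
\end{verbatim}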
\subsection{Reformulation as RL Form}
\label{RL_form}
Reinforcement learning (RL) was originally proposed for sequential decision-making problems, such as self-driving cars, robotics, games, etc~\cite{liu2015reinforcement,modares2015optimized,wang2016reinforcement,bai2019adaptive,wen2019online,nguyen2020deep}. The construction of routes for HCVRP step by step can be also deemed as a sequential decision-making problem. In our work, we model such process as a Markov Decision Process (MDP) \cite{bellman1957markovian} defined by 4-tuple $M=\{S,A,\tau, r\}$ (An example of the MDP is illustrated in the supplementary material).
Meanwhile, the detailed definition of the state space $S$, the action space $A$, the state transition rule $\tau$, and the reward function $r$ are introduced as follows.
\textbf{State:} In our MDP, each state $s_t\!=\!(V_t, X_t)\!\in\!S$ is composed of two parts. The first part is the vehicle state $V_t$, which is expressed as $V_t\!=\!\{v_t^1, v_t^2, ..., v_t^m \}\!=\!\{(o_t^1, T_t^1, G_t^1), (o_t^2, T_t^2, G_t^2), ..., (o_t^m, T_t^m, G_t^m) \}$, where $o_t^i$ and $T_t^i$ represent the remaining capacity and the accumulated travel time of the vehicle $v^i$ at step $t$, respectively. $G_t^i=\{g_0^i, g_1^i, ..., g_t^i \}$ represents the partial route of the vehicle $v^i$ at step $t$, where $g_j^i$ refers to the node visited by the vehicle $v^i$ at step $j$. Note that the dimension of the partial routes (the number of nodes in a route) remains the same for all vehicles, i.e., if the vehicle $v^i$ is selected to serve the node $x^j$ at step $t$, other vehicles still select their last served nodes.
Upon departure from the depot (i.e., $t=0$), the initial vehicle state is set to $V_0 = \{(\mathcal{Q}^1, 0, \{0\}), (\mathcal{Q}^2, 0, \{0\}), ..., (\mathcal{Q}^m, 0, \{0\}) \}$ where $\mathcal{Q}^i$ is the maximum capacity of vehicle $v^i$.
The second part is the node state $X_t$, which is expressed as $X_t=\{x_t^0, x_t^1, ..., x_t^n \}=\{(s^0, d_t^0), (s^1, d_t^1), ..., (s^n, d_t^n) \}$, where $s^i$ is a 2-dim vector representing the location of the node, and $d_t^i$ is a scalar representing the demand of node $i$ ($d_t^i$ becomes 0 once that node has been served). Here, we do not consider demand splitting, and only nodes with $d^i >0$ need to be served.
\textbf{Action:} The action in our method is defined as selecting a vehicle and a node (a customer or the depot) to visit. In specific, the action $a_t \in A$ is represented as $(v_t^i, x_t^j)$, i.e., the selected node $x^j$ will be served (or visited) by the vehicle $v^i$ at step $t$. Note that only one vehicle is selected at each step.
\textbf{Transition:} The transition rule $\tau$ will transit the previous state $s_t$ to the next state $s_{t+1}$ based on the performed action $a_t = (v_t^i, x_t^j)$, i.e., $s_{t+1} = (V_{t+1}, X_{t+1}) = \tau(V_{t}, X_{t})$.
The elements in vehicle state $V_{t+1}$ are updated as follows,
\begin{equation}
o_{t+1}^k = \left\{\begin{matrix}
o_t^k - d_t^j, & \text{if } k=i,\\
o_t^k, & \text{otherwise},
\end{matrix}\right.
\end{equation}
\begin{equation}
T_{t+1}^k = \left\{\begin{matrix}
T_t^k + \frac{D({g_t^k}, x^j)}{f}, & \text{if } k=i,\\
T_t^k, & \text{otherwise},
\end{matrix}\right.
\end{equation}
\begin{equation}
G_{t+1}^k = \left\{\begin{matrix}
[G_t^k, x^j], & \text{if } k=i,\\
[G_t^k, g_t^k], & \text{otherwise},
\end{matrix}\right.
\end{equation}
where $g_t^k$ is the last element in $G _t^k$, i.e., last visited customer by vehicle $v^k$ at step $t$, and $[\cdot, \cdot, \cdot]$ is the concatenation operator.
The element in node state $X_{t+1}$ is updated as follows,
\begin{equation}
d_{t+1}^l = \left\{\begin{matrix}
0, & \text{if } l=j,\\
d_t^l, & \text{otherwise},
\end{matrix}\right.
\end{equation}
where each demand remains 0 after the corresponding node has been visited.
\textbf{Reward:} For the MM-HCVRP, to minimize the maximum travel time of all vehicles, the reward is defined as the negative value of this maximum, where the reward is calculated by accumulating the travel time of multiple trips for each vehicle, respectively. Accordingly, the reward is represented as $R = - \max_{v\in V} \{ \sum_{t=0}^T \boldsymbol{r}_t\}$, where $\boldsymbol{r}_t$ is the incremental travel time for all vehicles at step $t$. Similarly, for the MS-HCVRP, the reward is defined as the negative value of the total travel time of all vehicles, i.e., $R=-\sum_{i=1}^m \sum_{t=1}^T \boldsymbol{r}_t$.
Particularly, assume that node $x^j$ and $x^k$ are selected at step $t$ and $t+1$, respectively, which are both served by the vehicle $v^i$, then the reward $\boldsymbol{r}_{t+1}$ is expressed as a $m$-dim vector as follows,
\begin{equation}
\begin{split}
\boldsymbol{r}_{t+1} = r(s_{t+1}, a_{t+1}) &= r((V_{t+1}, X_{t+1}), (v_{t+1}^i, x_{t+1}^k)) \\
&=\! \{ 0,...,0, D(\!x^j, \!x^k\!)/f,0,..., 0 \},
\end{split}
\end{equation}
where $D(x^j, x^k)/f$ is the time consumed by the vehicle $v^i$ for traveling from node $x^j$ to $x^k$, with other elements in $r(s_{t+1}, a_{t+1})$ equal to 0.
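The transition rule and reward above can be condensed into the following environment-step sketch (our own simplification, assuming a common speed $f$ and that a vehicle's load is reset to its capacity when it revisits the depot, node $0$):
\begin{verbatim}
import math

def step(state, action, coords, demands, capacities, f=1.0):
    # One MDP transition: action = (i, j) sends vehicle v^i to node x^j.
    o, T, G = state                           # per-vehicle load, time, route
    i, j = action
    dt = math.dist(coords[G[i][-1]], coords[j]) / f   # incremental travel time
    T[i] += dt
    o[i] = capacities[i] if j == 0 else o[i] - demands[j]
    for k in range(len(G)):                   # non-selected vehicles repeat
        G[k].append(j if k == i else G[k][-1])
    demands[j] = 0                            # served demand drops to zero
    reward = [dt if k == i else 0.0 for k in range(len(G))]  # vector r_{t+1}
    return (o, T, G), reward
\end{verbatim}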
\section{Methodology}
\label{sec:method}
In this section, we introduce our deep reinforcement learning (DRL) based approach for solving HCVRP with both min-max and min-sum objectives. We first propose a novel attention-based deep neural network to represent the policy, which enables both vehicle selection and node selection at each decision step. Then we describe the procedure for training our policy network.
\begin{figure}[t]
\centering
\setlength{\belowcaptionskip}{-0.5cm}
\includegraphics[width=0.38\textwidth,height=58mm, trim=0 5 0 20]{figs/framework.png}
\captionsetup{justification=justified}
\caption{The framework of our policy network. With raw features of the instance processed by the encoder, our policy network first selects a vehicle ($v_t^i$) using the vehicle selection decoder and then a node ($x_t^j$) using the node selection decoder for this vehicle to visit at each route construction step $t$. Both the selected vehicle and node constitute the action at that step, i.e., $a_t=(v_t^i, x_t^j)$, where the partial solution and state are updated accordingly. To a single instance, the encoder is executed once, while the vehicle and node selection decoders are executed multiple times to construct the solution.
}
\label{fig:framework}
\end{figure}
\begin{figure*}
\centering
\setlength{\belowcaptionskip}{-0.5cm}
\includegraphics[width=0.88\textwidth, trim={1mm 0mm 1mm 1mm},clip]{figs/DRL_model.png}
\captionsetup{justification=justified}
\caption{
Architecture of our policy network with $m$ heterogeneous vehicles and $n$ customers. It is worth noting that our vehicle selection decoder leverages the vehicle features (last node location and accumulated travel time), the route features (max pooling of the routes for $m$ vehicles), and their combinations to compute the probability of selecting each vehicle.
}
\label{fig:DL_model}
\end{figure*}
\subsection{Framework of Our Policy Network}
In our approach, we focus on learning a stochastic policy $\pi_{\theta}(a_t|s_t)$ represented by a deep neural network with trainable parameter $\theta$. Starting from the initial state $s_0$, i.e., an empty solution, we follow the policy $\pi_{\theta}$ to construct the solution by complying with the MDP in Section~\ref{RL_form} until the terminal state $s_\mathcal{T}$ is reached, i.e., all customers are served by the whole fleet of vehicles. Note that $\mathcal{T}$ may be larger than $n+1$ since vehicles sometimes need to return to the depot for replenishment. Accordingly, the joint probability of this process is factorized based on the chain rule as follows,
\begin{equation}
\hspace{-3mm}
p(s_{\mathcal{T}}|s_0) = \prod_{t=0}^{{\mathcal{T}}-1} \pi_{\theta}(a_t|s_t) p(s_{t+1}|s_{t}, a_{t}),
\end{equation}
where $p(s_{t+1}|s_{t}, a_{t}) = 1$ always holds since we adopt the deterministic state transition rule.
As illustrated in Fig.~\ref{fig:framework}, our policy network $\pi_{\theta}$ is composed of an encoder, a vehicle selection decoder and a node selection decoder. Since a given problem instance itself remains unchanged throughout the decision process, the encoder is executed only once at the first step ($t\!=\!0$) to simplify the computation, while its outputs can be reused in subsequent steps ($ t\!>\!0 $) for route construction. To solve the instance, with raw features processed by the encoder for better representation, our policy network first selects a vehicle ($v^i$) from the whole fleet via the vehicle selection decoder and identifies its index, and then selects a node ($x^j$) for this vehicle to visit via the node selection decoder at each route construction step. The selected vehicle and node constitute the action for that step, which is further used to update the states. This process is repeated until all customers are served.
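A condensed sketch of this construction loop is given below (ours; \texttt{initial\_state} and the state methods are hypothetical helpers), where the encoder output is computed once and reused at every decoding step:
\begin{verbatim}
import torch

def construct_solution(encoder, veh_decoder, node_decoder, instance, sample=False):
    node_embs, graph_emb = encoder(instance)        # executed only once
    state, log_prob = initial_state(instance), 0.0  # hypothetical helper
    while not state.all_served():
        p_veh = veh_decoder(state, node_embs)                   # shape (m,)
        i = torch.multinomial(p_veh, 1) if sample else p_veh.argmax()
        p_node = node_decoder(state, node_embs, graph_emb, i)   # shape (n+1,)
        j = torch.multinomial(p_node, 1) if sample else p_node.argmax()
        log_prob = log_prob + torch.log(p_veh[i]) + torch.log(p_node[j])
        state = state.transition(vehicle=i, node=j)             # MDP update
    return state.routes(), log_prob
\end{verbatim}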
\subsection{Architecture of Our Policy Network}
Originating from the field of natural language processing~\cite{vaswani2017attention}, the Transformer model has been successfully extended to many other domains such as image processing~\cite{li2019entangled,yu2019multimodal}, recommendation systems~\cite{sun2019bert4rec,chen2019behavior} and vehicle routing problems~\cite{kool2018attention,xin2020step} due to its desirable capability to handle sequential data. Rather than relying on sequential recurrent or convolutional structures, the Transformer mainly hinges on the self-attention mechanism to learn the relations between arbitrary pairs of elements in a sequence, which allows more efficient parallelization and better feature extraction without the limitation of sequence-aligned recurrence. Regarding general vehicle routing problems, the input is a sequence of customers characterized by locations and demands, and the construction of routes can be deemed a sequential decision-making process, for which the Transformer has desirable potential to engender high quality solutions with short computation time. Specifically, the Transformer-style models~\cite{kool2018attention,xin2020step} adopt an encoder-decoder structure, where the encoder computes a representation of the input sequence based on the multi-head attention mechanism for better feature extraction, and the decoder sequentially outputs a customer at each step based on the problem-related contextual information until all customers are visited. To solve the HCVRP with both min-max and min-sum objectives, we also propose a Transformer-style model as our policy network, which is designed as follows.
As depicted in Fig.~\ref{fig:DL_model}, our policy network adopts an encoder-decoder structure and the decoder consists of two parts, i.e., vehicle selection decoder and node selection decoder. Based on the stipulation that any vehicle has the opportunity to be selected at each step, our policy network is able to search in a more rational and broader action space given the characteristics of HCVRP. Moreover, we enrich the contextual information for the vehicle selection decoder by adding the features extracted from all vehicles and existing (partial) routes. In doing so, it allows the policy network to capture the heterogeneous roles of vehicles, so that decisions would be made more effectively from a global perspective. To better illustrate our method, an example of two instances with seven nodes and three vehicles is presented in Fig.~\ref{fig:detail_model}. Next, we introduce the details of our encoder, vehicle selection decoder, and node selection decoder, respectively.
\textbf{Encoder.}
The encoder embeds the raw features of a problem instance (i.e., customer location, customer demand, and vehicle capacity) into a higher-dimensional space, and then processes them through attention layers for better feature extraction. We normalize the demand $d_0^i$ of customer $x^i$ by dividing it by the capacity of each vehicle to reflect the differences of vehicles in the heterogeneous fleet, i.e., $\tilde x^i = (s^i, d_0^i / \mathcal{Q}^1, d_0^i / \mathcal{Q}^2, ..., d_0^i / \mathcal{Q}^m)$.
Similar to the encoder of Transformer in~\cite{vaswani2017attention,kool2018attention}, the enhanced node feature $\tilde x^i$ is then linearly projected to $h^i_0$ in a high dimensional space with dimension $dim=128$. Afterwards, $h^i_0$ is further transformed to $h^i_N$ through $N$ attention layers for better feature representation, each of which consists of a multi-head attention (MHA) sublayer and a feed-forward (FF) sublayer.
The $l$-th MHA sublayer uses a multi-head self-attention network to process the node embeddings $h_l = ( h^0_l, h^1_l, ..., h^n_l )$. We stipulate that $dim_k=\frac{dim}{Y}$ is the \emph{query/key} dimension, $dim_v=\frac{dim}{Y}$ is the \emph{value} dimension, and $Y=8$ is the number of heads in the attention.
The $l$-th MHA sublayer first calculates the attention value $Z_{l,y}$ for each head $y\! \in \! \{1,2,...,Y \}$ and then concatenates all these heads and projects them into a new feature space which has the same dimension as the input $h_l$. Concretely, we show these steps as follows,
\begin{equation}
Q_{l,y} = h_l W^Q_{l,y},\ K_{l,y} = h_l W^K_{l,y},\ V_{l,y} = h_l W^V_{l,y},
\label{eq:qkv}
\end{equation}
\begin{equation}
Z_{l,y} \!=\! \text{softmax}(\frac{{Q_{l,y}} {K_{l,y}}^T}{\sqrt{dim_k}}) V_{l,y},
\label{eq:head}
\end{equation}
\begin{equation}
\begin{split}
\text{MHA} (h_l) &= \text{MHA} (h_l W^Q_l, h_l W^K_l, h_l W^V_l) \\
&=\text{Concat}(Z_{l,1}, Z_{l,2},..., Z_{l,Y}) W^O_l,
\end{split}
\label{eq:atten}
\end{equation}
where $W^Q_{l}, W^K_{l} \in\! \mathbb{R}^{ Y \times dim \times dim_k}$, $W^V_{l} \in \mathbb{R}^{Y \times dim \times dim_v}$, and $W^O_l\in \mathbb{R}^{dim \times dim}$ are trainable parameters in layer $l$ and are independent across different attention layers.
Afterwards, the output of the $l$-th MHA sublayer is fed to the $l$-th FF sublayer with ReLU activation function to get the next embeddings $h_{l+1}$.
Here, a skip-connection~\cite{he2016deep} and a batch normalization (BN) layer~\cite{ioffe2015batch} are used for both MHA and FF sublayers, which are summarised as follows,
\begin{equation}
r^i_{l} = BN(h_l^i + \text{MHA}^i(h_l)),
\label{multi-head}
\end{equation}
\begin{equation}
h^i_{l+1} = BN(r^i_{l} + FF(r^i_{l})).
\label{FF}
\end{equation}
Finally, we define the final output of the encoder, i.e., $h_N^i$, as the node embeddings of the problem instance, and the mean of the node embeddings, i.e., $\hat{h}_N=\frac{1}{n} \sum_{i \in X} h_N^i$, as the graph embedding of the problem instance, which will be reused for multiple times in the decoders.
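A minimal PyTorch-style sketch of one encoder attention layer is given below (our illustration; the 512-dim hidden size of the FF sublayer is our assumption, carried over from the decoder description):
\begin{verbatim}
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    # One attention layer: MHA and FF sublayers, each followed by a skip
    # connection and batch normalization, as described above (sketch only).
    def __init__(self, dim=128, heads=8, ff_dim=512):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, ff_dim), nn.ReLU(),
                                nn.Linear(ff_dim, dim))
        self.bn1 = nn.BatchNorm1d(dim)
        self.bn2 = nn.BatchNorm1d(dim)

    def forward(self, h):                      # h: (batch, n+1, dim)
        a, _ = self.mha(h, h, h)               # self-attention over nodes
        r = self.bn1((h + a).transpose(1, 2)).transpose(1, 2)
        return self.bn2((r + self.ff(r)).transpose(1, 2)).transpose(1, 2)
\end{verbatim}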
\begin{figure*}
\centering
\setlength{\belowcaptionskip}{-0.5cm}
\includegraphics[width=1.0\textwidth, trim={0mm 1mm 1mm 0mm},clip]{figs/detail_graph.png}
\captionsetup{justification=justified}
\caption{An illustration of our policy network for two instances with seven nodes and three vehicles, where the red frame indicates the two stacked instances with same data structure. Given the current state $s_t$, the features of nodes and vehicles are processed through the encoder to compute the node embeddings and the graph embedding. In the vehicle selection decoder, the node embeddings of the three tours for three vehicles in current state $s_t$, i.e., $\{\{{h}^0, {h}^1\}, \{{h}^0, {h}^3\}, \{{h}^0, {h}^5\}\}$, are processed for route feature extraction, and the current location and the accumulated travel time of three vehicles are processed for vehicle feature extraction, which are then concatenated to compute the probability of selecting a vehicle. With the selected vehicle $v^1$ in this example, the current node embedding $h^1$ and the current loading ability of this vehicle are first concatenated and linearly propagated, then added with the graph embedding, which are further used to compute the probability of selecting a node with masked softmax, i.e., $\bar{p}^1=\bar{p}^3=\bar{p}^5=0$. With the selected node $x^2$ in this example, the action is represented as $a_t=\{v^1, x^2\}$ and the state is updated and transited to $s_{t+1}$.
}
\label{fig:detail_model}
\end{figure*}
\textbf{Vehicle Selection Decoder.}
Vehicle selection decoder outputs a probability distribution for selecting a particular vehicle, which mainly leverages two embeddings, i.e., \emph{vehicle feature embedding} and \emph{route feature embedding}, respectively.
\emph{1) Vehicle Feature Embedding: } To capture the states of each vehicle at current step, we define the vehicle feature context $C_t^V \in R^{1 \times 3m}$ at step $t$ as follows,
\begin{equation}
C_t^V = [\tilde{g}_{t-1}^{1}, T_{t-1}^{1}, \tilde{g}_{t-1}^{2}, T_{t-1}^{2},..., \tilde{g}_{t-1}^{m}, T_{t-1}^{m}],
\end{equation}
where $\tilde{g}_{t-1}^{i}$ denotes the 2-dim location of the last node $g_{t-1}^{i}$ in the partial route of vehicle $v^i$ at step $t-1$, and $T_{t-1}^{i}$ is the accumulated travel time of vehicle $v^i$ till step $t-1$.
Afterwards, the vehicle feature context is linearly projected with trainable parameters $W_1$ and $b_1$ and further processed by a 512-dim FF layer with ReLU activation function to engender the vehicle feature embedding $H_t^V$ at step $t$ as follows,
\begin{equation}
H_t^V = FF(W_1 C_t^V + b_1).
\end{equation}
\emph{2) Route Feature Embedding:}
Route feature embedding extracts information from existing partial routes of all vehicles, which helps the policy network intrinsically learn from the visited nodes in previous steps, instead of simply masking them as did in previous works~\cite{bello2017neural,nazari2018reinforcement,vinyals2015pointer,kool2018attention}.
For each vehicle $v^i$ at step $t$, we define its route feature context $\tilde{C}_t^{i}$ as an arrangement of the node embeddings (i.e., $h_N^k$ is the node embedding for node $x^k$) corresponding to the nodes in its partial route $G_{t-1}^i$. Specifically, the route feature context $\tilde{C}_t^{i}$ for each vehicle $v^i$, $i=1,2,...,m$ is defined as follows,
\begin{equation}
\tilde{C}_t^{i} = [\tilde{h}_0^{i}, \tilde{h}_1^{i}, ..., \tilde{h}_{t-1}^{i}],
\label{contexcontex}
\end{equation}
where $\tilde{C}_t^{i} \in R^{t \times dim}$ (the first dimension is of size $t$ since $G_{t-1}^i$ should have $t$ elements at step $t$) and $\tilde{h}_{j}^{i}$ represents the corresponding node embeddings in $h_N$ of the $j$-th node in partial route $G_{t-1}^i$ of vehicle $v^i$. For example, assume $t = 4$ and the partial route of vehicle $v^i$ is $G_{3}^i = \{x^0, x^3,x^3,x^1\}$, then the route feature context of this vehicle at step $t = 4$ would be $\tilde{C}_4^{i} = [ \tilde{h}_0^{i}, \tilde{h}_1^{i}, \tilde{h}_2^{i}, \tilde{h}_3^{i} ] = [ h^0_N, h^3_N, h^3_N, h^1_N ]$.
Afterwards, the route feature context of all vehicles is aggregated by a max-pooling and then concatenated to yield the route context $\hat{C}_t^R$ for the whole fleet, which is further processed by a linear projection with trainable parameters $W_2$ and $b_2$ and a 512-dim FF layer to engender the route feature embedding $H_t^R$ at step $t$ as follows,
\begin{equation}
\bar{C}_t^{i} = max(\tilde{C}_t^{i}), i=1,2,...,m,
\end{equation}
\begin{equation}
\hat{C}_t^R=[\bar{C}_t^{1}, \bar{C}_t^{2}, ..., \bar{C}_t^{m}],
\end{equation}
\begin{equation}
H_t^R = FF(W_2 \hat{C}_t^R + b_2).
\end{equation}
Finally, the vehicle feature embedding $H_t^V$ and route feature embedding $H_t^R$ are concatenated and linearly projected with parameter $W_3$ and $b_3$, which is further processed by a softmax function to compute the probability vector as follows,
\begin{equation}
H_t = W_3 [H_t^V, H_t^R] + b_3,
\end{equation}
\begin{equation}
p_t = \text{softmax} (H_t),
\label{eq:veh_prob}
\end{equation}
where $p_t \in R^m$ and its element $p_t^i$ represents the probability of selecting vehicle $v^i$ at time step $t$. Depending on different strategies, the vehicle can be selected either by retrieving the maximum probability greedily, or sampling according to the vector $p_t$. The selected vehicle $v^i$ is then used as input to the node selection decoder.
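A sketch of this decoder is shown below (ours; layer sizes and the exact form of the feed-forward layers are illustrative rather than the exact implementation):
\begin{verbatim}
import torch
import torch.nn as nn

class VehicleSelectionDecoder(nn.Module):
    # Fuses vehicle features (last location, accumulated time) with max-pooled
    # route features to output a probability over the m vehicles.
    def __init__(self, m, dim=128, ff_dim=512):
        super().__init__()
        self.veh_ff = nn.Sequential(nn.Linear(3 * m, ff_dim), nn.ReLU(),
                                    nn.Linear(ff_dim, ff_dim))
        self.route_ff = nn.Sequential(nn.Linear(m * dim, ff_dim), nn.ReLU(),
                                      nn.Linear(ff_dim, ff_dim))
        self.out = nn.Linear(2 * ff_dim, m)

    def forward(self, veh_context, route_embs):
        # veh_context: (batch, 3m)       -- [x, y, T] of each vehicle
        # route_embs:  (batch, m, t, dim) -- node embeddings of partial routes
        H_v = self.veh_ff(veh_context)
        H_r = self.route_ff(route_embs.max(dim=2).values.flatten(1))
        return torch.softmax(self.out(torch.cat([H_v, H_r], dim=-1)), dim=-1)
\end{verbatim}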
\textbf{Node Selection Decoder.}
Given node embeddings from the encoder and the selected vehicle $v^i$ from the vehicle selection decoder, the node selection decoder outputs a probability distribution $\bar p_t$ over all unserved nodes (the nodes served in previous steps are masked), which is used to identify a node for the selected vehicle to visit.
Similar to~\cite{kool2018attention}, we first define a context vector $H^c_t$ as follows, and it consists of the graph embedding $\hat{h}_N$, node embedding of the last (previous) node visited by the selected vehicle, and the remaining capacity of this vehicle,
\begin{equation}
H^c_t = [\hat{h}_N, \tilde{h}_{t-1}^i, o_t^i],
\end{equation}
where the second element $\tilde{h}_{t-1}^i$ has the same meaning as the one defined in Eq.~(\ref{contexcontex}) and is replaced with trainable parameters for $t=0$.
The designed context vector highlights the features of the selected vehicle at the current decision step, with the consideration of graph embedding of the instance from the global perspective.
The context vector $H^c_t$ and the node embeddings $h_N$ are then fed into a multi-head attention (MHA) layer to synthesize a new context vector $\hat{H}^c_t$ as a glimpse of the node embeddings~\cite{vinyals2015order}. Different from the self-attention in the encoder, the query of this attention comes from the context vector, while the key/value of the attention come from the node embeddings as shown below,
\begin{equation}
\hat{H}_t^c = \text{MHA} (H_t^c W^Q_c, h_N W^K_c, h_N W^V_c),
\end{equation}
where $W^Q_c, W^K_c$ and $W^V_c$ are trainable parameters similar to Eq.~(\ref{eq:atten}).
We then generate the probability distribution $\bar p_t$ by comparing the relationship between the enhanced context $\hat{H}_t^c$ and the node embeddings $h_N$ through a \emph{Compatibility Layer}. The compatibility of all nodes with context at step $t$ is computed as follows,
\begin{equation}
u_t = C \cdot tanh(\frac{q_t^T k_t}{\sqrt{dim_k}}),
\end{equation}
where $q_t=\hat{H}^c_t W^Q_{comp}$ and $k_t = h_N W^K_{comp}$, with $W^Q_{comp}$ and $W^K_{comp}$ being trainable parameters, and $C$ is set to 10 to control the entropy of $u_t$. Finally, the probability vector is computed in Eq.~(\ref{eq:node_prob})
where all nodes visited in the previous steps are masked for feasibility and element $\bar{p}_t^j$ represents the probability of selecting node $x^j$ served by the selected vehicle $v^i$ at step $t$ as follows,
\begin{equation}
\bar{p}_t = \text{softmax} (u_t).
\label{eq:node_prob}
\end{equation}
Similar to the decoding strategy of vehicle selection, the nodes can be selected either by always retrieving the maximum $\bar{p}_t^j$, or by sampling according to the vector $\bar{p}_t$ in a less greedy manner.
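A corresponding sketch of the glimpse and compatibility computation is given below (ours; the context is assumed to be already projected to the embedding dimension, \texttt{mha} is a multi-head attention module, and \texttt{mask} marks nodes that are infeasible at step $t$):
\begin{verbatim}
import torch

def node_probabilities(context, node_embs, mha, W_q, W_k, mask, C=10.0):
    # context:   (batch, 1, dim)   -- projected [graph emb, last node, load]
    # node_embs: (batch, n+1, dim) -- encoder output h_N
    # mask:      (batch, n+1)      -- True for nodes already served / infeasible
    glimpse, _ = mha(context, node_embs, node_embs)      # cross-attention
    q, k = glimpse @ W_q, node_embs @ W_k                # (b,1,dk), (b,n+1,dk)
    u = C * torch.tanh(q @ k.transpose(1, 2) / k.size(-1) ** 0.5)
    u = u.squeeze(1).masked_fill(mask, float('-inf'))
    return torch.softmax(u, dim=-1)                      # probability over nodes
\end{verbatim}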
\IncMargin{0.5em}
\begin{algorithm}[t!]
\small
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwBlock{DoParallel}{foreach $b=1,2,...,B$ do in parallel}{end}
\Input{Initial parameters $ \theta$ for policy network $\pi_\theta$;\\ initial parameters $ \phi$ for baseline network $v_\phi$; \\
number of iterations $I$; \\
iteration size $N$; number of batches $M$;\\
maximum training steps ${\mathcal{T}}$; significance $\alpha$.
}
\ForEach{$iter=1,2,...,I$}
{
Sample $N$ problem instances randomly;\\
\ForEach{$i=1,2,...,M$}{
Retrieve batch $b = N_i$;\\
\ForEach{$t=0,1,...,{\mathcal{T}}$}
{
Pick an action $a_{t,b} \sim \pi_{\theta}(a_{t,b} | s_{t,b})$ ;\\
Observe reward $r_{t,b}$ and next state $s_{t+1,b}$;\\
}
$R_b = -max(\sum\limits_{t=0}^{\mathcal{T}} r_{t,b})$;\\
GreedyRollout with baseline $v_\phi$ and compute its reward $R^{BL}_b$;\\
$\! d_\theta \!\leftarrow \! \frac{1}{B} \!\sum\limits_{b=1}^B\! (R_b \!- \!R^{BL}_b) \nabla_{\theta} log\ \pi_{\theta}(s_{\mathcal{T},b} |s_{0,b})$;\\
$\theta \gets \text{Adam}(\theta, d_\theta)$;\\
}
\If{\textsc{OneSidedPairedTTest}($\pi_{\theta}, v_{\phi}$) $\textless \alpha$} {$\phi \gets \theta$;
}
}%
\caption{Deep Reinforcement Learning Algorithm}
\label{Algorithm:algorithm}
\end{algorithm}
\subsection{Training Algorithm}
\label{sec:training}
The proposed deep reinforcement learning method is summarized in Algorithm~\ref{Algorithm:algorithm}, where we adopt the policy gradient with baseline to train the policy of vehicle selection and node selection for route construction. The policy gradient method is characterized by two networks: 1) the policy network $\pi_{\theta}$ described above, which picks an action and generates probability vectors for both vehicles and nodes at each decoding step; 2) the baseline network $v_{\phi}$, a greedy roll-out baseline with a structure similar to the policy network, which computes the reward by always selecting the vehicle and node with maximum probability. A Monte Carlo method is applied to update the parameters and improve the policy iteratively.
At each iteration, we construct routes for each problem instance and calculate the reward with respect to this solution in line 9, and the parameters of the policy network are updated in line 13. In addition, the expected reward of the baseline network $R_b^{BL}$ comes from a greedy roll-out of the policy in line 10. The parameters of the baseline network will be replaced by the parameters of the latest policy network if the latter significantly outperforms the former according to a paired t-test on several instances in line 15. By updating the two networks, the policy $\pi_{\theta}$ is improved iteratively towards finding higher-quality solutions.
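In compact form, the update in Algorithm~\ref{Algorithm:algorithm} is REINFORCE with a greedy rollout baseline; the sketch below (ours; \texttt{rollout} and \texttt{greedy\_rollout} are hypothetical helpers returning rewards and log-probabilities) summarizes one batch update:
\begin{verbatim}
import torch

def train_batch(policy, baseline, optimizer, batch):
    # REINFORCE with greedy rollout baseline (one batch update, sketch only)
    R, log_p = rollout(policy, batch)            # sampled route construction
    with torch.no_grad():
        R_bl = greedy_rollout(baseline, batch)   # baseline reward
    loss = ((R_bl - R) * log_p).mean()           # minimizing this ascends E[R]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}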
\begin{table}
\centering
\caption{Parameter Settings of Heuristic Methods.}
\begin{threeparttable}
\begin{tabular}{c|c}
\toprule
\hline
SISR & \makecell[c]{$\bar{c}=10, L^{max}=10, \alpha=0.01$, $\beta=0.01$, $T_0=100, T_f=1$,\\
$ iter=it(size)\tnote{\#},\ it(40)=1.2 \times 10^7, it(60)=1.8 \times 10^7$,\\
$it(80)=2.4 \times 10^7,it(100)=3.0 \times 10^7,
it(120)=3.6 \times 10^7$,\\
$it(140)=4.2 \times 10^7, it(160)=4.8 \times 10^7$} \\
\hline
VNS & \makecell[c]{$r_n=0.1, r_m=0.25$, $p_{ol}=5$, $p_{ot}=5$, $p_{otd}=1$,\\
$MaxTolerance=0.04$,\\
$iter=it(size)\tnote{$\ddagger$},\ it(40)=500, it(60)=600$,\\
$it(80)=700, it(100)=800, it(120)=900$,\\
$it(140)=1000, it(160)=1100$} \\
\hline
ACO & \makecell[c]{$m=20$, $\omega_0 = 0.2$, $\alpha=0.1$, $\gamma = 0.1$, \\
$\delta = 3$, $\beta = 3$, $Q_0 = \frac{N-1}{\sum^N_{i=0}\sum^N_{j=0} D_{ij}}$,\\
$iter=it(size)\tnote{$\ddagger$},\ it(40)=10^4, it(60)=1.2 \times 10^4$,\\
$it(80)=1.4 \times 10^4, it(100)=1.6 \times 10^4, it(120)=1.8 \times 10^4$,\\
$it(140)=2.0 \times 10^4, it(160)=2.2 \times 10^4$}\\
\hline
FA & \makecell[c]{$\beta = 1$, $\gamma = 1$, $n=25$, $\alpha = 0.001$,\\
$iter=it(size)\tnote{$\ddagger$}, \ it(40)=1000, it(60)=1200$,\\
$it(80)=1400, it(100)=1600, it(120)=1800$,\\
$it(140)=2000, it(160)=2200$\\
}\\
\hline
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[\#] The iterations are increased as the problem size scales up following the original paper.
\item[$\ddagger$] The original papers use same iterations for all problem sizes. We linearly increase the iterations as the problem size scales up.
\end{tablenotes}
\end{threeparttable}
\label{parameter}
\end{table}
\section{Computational Experiments}
\label{sec:experiments}
In this section, we conduct experiments to evaluate our DRL method. Particularly, a heterogeneous fleet of fully loaded vehicles with different capacities starts at a depot node and departs to satisfy the demands of all customers by following certain routes, the objective of which is to minimize the longest or total travel time incurred by the vehicles. Moreover, we further verify our method by extending the experiments to benchmark instances from CVRPLib~\cite{xavier2019cvrplib}. Note that the HCVRP with min-max and min-sum objectives are both NP-hard problems, and the theoretical computation complexity grows exponentially as the problem size scales up.
\begin{table*}
\caption{DRL Method v.s. Baselines for Three Vehicles (V3).}
\centering
\resizebox{\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{c|c|c c c|c c c|c c c|c c c|c c c}
\toprule
& & \multicolumn{3}{c|}{V3-C40} & \multicolumn{3}{c|}{V3-C60} & \multicolumn{3}{c|}{V3-C80} & \multicolumn{3}{c|}{V3-C100} & \multicolumn{3}{c}{V3-C120}\\
& Method & Obj. & Gap & Time & Obj. & Gap & Time & Obj. & Gap & Time & Obj. & Gap & Time & Obj. & Gap & Time \\
\midrule
\multirow{9}{*}{\rotatebox{90}{Min-max}} & SISR & 4.00 & 0\% & 245s & 5.58 & 0\% & 468s & 7.27 & 0\% & 752s & 8.89 & 0\% & 1135s & 10.42 & 0\% & 1657s \\
& VNS & 4.17 & 4.25\% & 115s & 5.80 & 3.94\% & 294s & 7.57 & 4.13\% & 612s & 9.20 & 3.49\% & 927s & 10.81 & 3.74\% & 1378s \\
& ACO & 4.31 & 7.75\% & 209s & 6.18 & 10.75\% & 317s & 8.14 & 11.97\% & 601s & 10.05 & 13.05\% & 878s & 11.79 & 13.15\% & 1242s \\
& FA & 4.49 & 12.25\% & 168s & 6.30 & 12.90\% & 285s & 8.32 & 14.44\% & 397s & 10.11 & 13.72\% & 522s & 11.98 & 14.97\% & 667s \\
&AM(Greedy) & 4.85 & 21.25\% & 0.37s & 6.57 & 17.74\% & 0.54s & 8.32 & 14.44\% & 0.82s & 9.98 & 12.26\% & 1.07s & 11.63 & 11.61\% & 1.28s\\
&AM(Sample1280) & 4.36 & 9.00\% & 0.88s & 5.99 & 7.39\% &1.19s & 7.73 & 6.33\% & 1.81s &9.36 & 5.29\% & 2.51s & 10.94 & 4.99\% & 3.37s \\
&AM(Sample12800) & 4.31 & 7.75\% & 1.35s & 5.92 &6.09\% &2.46s & 7.66 & 5.36\% & 3.67s &9.28 & 4.39\% & 5.17s & 10.85 & 4.13\% & 6.93s \\
&DRL(Greedy) & 4.45 & 11.25\% & 0.70s & 6.08 & 8.96\% & 0.82s & 7.82 & 7.57\% & 1.11s & 9.42 & 5.96\% & 1.44s & 10.98 & 5.37\% & 1.94s\\
&DRL(Sample1280) & 4.17 & 4.25\% & 1.25s & 5.77 & 3.41\% &1.43s & 7.48 & 2.89\% & 2.25s & 9.07 & 2.02\% & 3.42s & 10.62 & 1.92\% & 4.52s \\
&DRL(Sample12800) & 4.14 & 3.50\% & 1.64s &5.73 & 2.69\% & 2.97s & 7.44 & 2.34\% & 4.56s &9.02 & 1.46\% & 6.65s &10.56 & 1.34\% & 8.78s \\
\hline
\multirow{10}{*}{\rotatebox{90}{Min-sum}} & Exact-solver & 55.43* & 0\% & 71s & 78.47* & 0\% & 214s & 102.42* & 0\% & 793s & 124.61* & 0\% & 2512s & - & - & - \\
& {SISR} & {{55.79}} & {{0.65\%}} & {(254s)} & {{79.12}} & {{0.83\%}} & {(478s)} & {{103.41}} & {{0.97\%}} & {(763s)} & {{126.19}} & {{1.27\%}} & {(1140s)} & {{149.10}} & {{0\%}} & {(1667s)} \\
&{VNS} & {57.54} & {3.81\%} & {109s} & {81.44} & {3.78\%} & {291s} & {106.18} & {3.67\%} & {547s} &{129.32} & {3.78\%} & {828s} &{152.56} & {2.32\%} & {1217s} \\
&{ACO} & {60.11} & {8.44\%} & {196s} & {86.05} & {9.66\%} & {302s} & {113.75} & {11.06\%} & {593s} & {140.61} & {12.84\%} & {859s} & {166.50} & {11.67\%} & {1189s} \\
&{FA} & {59.94} & {8.14\%} & {164s} & {85.36} & {8.78\%} & {272s} & {112.81} & {10.14\%} & {388s} & {138.92} & {11.48\%} & {518s} & {164.53} & {10.35\%} & {653s} \\
&AM(Greedy) & 66.54 & 20.04\% & 0.49s & 91.19 & 16.21\% & 0.83s & 117.22 & 14.45\% & 1.01s & 141.14 & 13.27\% & 1.23s & 164.57 & 10.38\% & 1.41s\\
&AM(Sample1280) & 60.95 & 9.96\% & 0.92s & 85.74 & 9.26\% &1.17s & 111.78 & 9.14\% & 1.79s &135.61 & 8.83\% & 2.49s & 159.11 & 6.71\% & 3.30s \\
&AM(Sample12800) & 60.26 & 8.71\% & 1.35s & 84.96 & 8.27\% &2.31s & 110.94 & 8.32\% & 3.61s &134.72 & 8.11\% & 5.19s & 158.19 & 6.10\% & 6.86s \\
&DRL(Greedy) & 58.99 & 6.42\% & 0.61s & 83.06 & 5.85\% & 1.02s & 108.44 & 5.88\% & 1.11s & 131.75 & 5.73\% & 1.56s & 154.56 & 3.66\% & 1.96s\\
&DRL(Sample1280) & 57.05 & 2.92\% & 1.18s & 80.46 & 2.54\% & 1.49s & 105.29 & 2.80\% & 2.34s & 128.63 & 3.23\% & 3.38s & 151.23 & 1.43\% & 4.61s \\
&DRL(Sample12800) & 56.84 & 2.54\% & 1.65s & 79.92 & 1.85\% & 2.99s & 104.63 & 2.16\% & 4.63s & 128.19 & 2.87\% & 6.74s & 150.73& 1.09\% & 9.11s \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[()] {The mark () indicates that the time is computed based on JAVA implementation which is publicly available. For VNS, ACO and FA, we re-implement them in Python since their original codes are not available, where C++ or JAVA was used in the original papers. So the reported time might be different from the ones in the original papers.}
\item[*] The mark * indicates that all instances are solved optimally.
\end{tablenotes}
\end{threeparttable}
}
\label{tab:minmax_v3}
\end{table*}
\begin{table*}
\caption{DRL Method v.s. Baselines for Five Vehicles (V5).}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c|c c c|c c c|c c c|c c c|c c c
}
\toprule
& & \multicolumn{3}{c|}{V5-C80} &
\multicolumn{3}{c|}{V5-C100} &
\multicolumn{3}{c|}{V5-C120} &
\multicolumn{3}{c|}{V5-C140} &
\multicolumn{3}{c}{V5-C160} \\
& Method & Obj. & Gap & Time & Obj. & Gap & Time & Obj. & Gap & Time & Obj. & Gap & Time & Obj. & Gap & Time \\
\midrule
\multirow{9}{*}{\rotatebox{90}{Min-max}} & {SISR} & {{3.90}} & {{0\%}} & {(727s)} & {{4.72}} & {{0\%}} & {(1091s)} & {{5.48}} & {{0\%}} & {(1572s)} & {{6.33}} & {{0\%}} & {(1863s)} & {{7.16}} & {{0\%}} & {(2521s)} \\
&{VNS} & {4.15} & {6.41\%} & {725s} & {4.98} & {7.19\%} & {1046s} & {5.81} & {6.02\%} & {1454s} & {6.67} & {5.37\%} & {2213s} & {7.53} & {5.17\%} & {3321s} \\
&{ACO} & {4.50} & {15.38\%} & {612s} & {5.56} & {17.80\%} & {890s} & {6.47} & {18.07\%} & {1285s} & {7.52} & {18.80\%} & {2081s} & {8.51} & {18.85\%} & {2898s} \\
&{FA} & {4.61} & {18.21\%} & {412s} & {5.62} & {19.07\%} & {541s} & {6.58} & {20.07\%} & {682s} & {7.60} & {20.06\%} & {822s} & {8.64} & {20.67\%} & {964s} \\
&AM(Greedy) & 4.84 & 24.10\% & 1.08s & 5.70 & 20.76\% & 1.31s & 6.57 & 19.89\% & 1.74s & 7.49 & 18.33\% & 1.93s & 8.34 & 16.48\% & 2.15s\\
&AM(Sample1280) & 4.32 & 10.77\% & 1.88s & 5.18 & 8.75\% &2.64s & 6.03 & 10.04\% & 3.38s &6.93 & 9.48\% & 4.47s & 7.75 & 8.24\% & 5.73s \\
&AM(Sample12800) & 4.25 & 8.97\% & 3.71s & 5.11 & 8.26\% &5.19s & 5.95 & 8.58\% & 6.94s &6.86 & 8.37\% & 8.73s & 7.69 & 7.40\% & 10.69s \\
&DRL(Greedy) & 4.36 & 11.79\% & 1.29s & 5.20 & 10.17\% & 1.64s & 5.94 & 8.39\% & 2.38s & 6.78 & 7.11\% & 2.43s & 7.61 & 6.28\% & 3.02s\\
&DRL(Sample1280) & 4.08 & 4.62\% & 2.66s & 4.91 & 4.03\% &3.66s & 5.66 & 3.28\% & 5.08s &6.51 & 2.84\% & 6.48s &7.34 & 2.51\% & 8.52s \\
&DRL(Sample12800) & 4.04 & 3.59\% & 5.06s & 4.87 & 3.18\% & 7.20s & 5.62 & 2.55\% & 9.65s &6.47 & 2.21\% & 10.93s &7.30 & 1.96\% & 13.76s \\
\hline
\multirow{9}{*}{\rotatebox{90}{Min-sum}} &Exact-solver & 102.42* & 0\% & 1787s & 124.63* & 0\% & 6085s & - &- & - & - &- & - & - &-& - \\
& {SISR} & {{103.49}} & {{1.04\%}} & {(735s)} & {{126.35}} & {{1.38\%}} & {(1107s)} & {{149.18}} & {{0\%}} & {(1580s)} & {{172.88}} & {{0\%}} & {(1881s)} & {{196.51}} & {{0\%}} & {(2539s)} \\
&{VNS} & {109.91} & {7.31\%} & {538s} &{133.28} & {6.94\%} & {811s} & {156.37} & {4.82\%} & {1386s} &{180.08} & {4.16\%} & {2080s} & {203.95} &{3.79\%} & {2896s}\\
&{ACO} & {118.58} & {15.78\%} & {608s} & {146.51} & {17.56\%} & {865s} & {171.82} & {15.18\%} & {1269s} & {200.73} & {16.11\%} & {1922s} & {229.64} & {16.86\%} & {2803s} \\
&{FA} & {116.13} & {13.39\%} & {401s} & {142.39} & {14.25\%} & {532s} & {167.87} & {12.53\%} & {677s} & {196.48} & {13.65\%} & {801s} & {223.49} & {13.73\%} & {955s} \\
&AM(Greedy) & 128.31 & 25.28\% & 0.82s & 152.91 & 22.69\% & {1.28s} & 177.39 & 18.91\% & {1.45s} & 201.85 & 16.76\% & {1.69s} & 227.10 & 15.57\% & {1.81s}\\
&AM(Sample1280) & 119.41 & 16.59\% & 1.83s & 144.23 & 15.73\% &2.66s & 168.95 & 13.25\% & 3.63s &193.65 & 12.01\% & 4.68s & 218.67 & 11.28\% & 5.49s \\
&AM(Sample12800) & 118.04 & 15.25\% & 3.74s & 142.79 & 14.57\% &5.20s & 167.45 & 12.25\% & 7.02s &192.13 & 11.13\% & 8.93s & 217.14 & 10.50\% & 11.01s \\
&DRL(Greedy) & 108.43 & 5.87\% & 1.26s & 131.90 & 5.83\% & 1.73s & 154.71 & 3.71\% & 2.11s & 178.78 & 3.41\% & 3.06s & 202.87 & 3.24\% & 3.60s\\
&DRL(Sample1280) & 105.54 & 3.05\% & 2.70s & 128.63 & 3.21\% & 4.15s & 151.39 & 1.48\% & 5.37s & 175.29 & 1.39\% & 6.83s & 199.16 & 1.35\% & 8.68s \\
&DRL(Sample12800) & {104.88} & 2.40\% & 5.38s & {128.17} & 2.84\% & 7.61s & {150.86} & 1.26\% & 9.24s & {174.80} & 1.11\% & 11.30s & {198.66} & 1.09\% & 13.83s \\
\bottomrule
\end{tabular}}
\label{tab:minmax_v5}
\vspace{-1mm}
\end{table*}
\subsection{Experiment Settings for HCVRP}
\label{sec:exp_setting}
We describe the experimental settings and the data-generation method, which mainly follows the classic protocols in~\cite{bello2017neural,nazari2018reinforcement,vinyals2015pointer,kool2018attention}. Pertaining to MM-HCVRP, the coordinates of the depot and customers are randomly sampled within the unit square $[0,1] \times [0,1]$ from the uniform distribution. The demands of customers are discrete numbers randomly chosen from the set $\{1,2,\ldots,9\}$ (the demand of the depot is 0).
To comprehensively verify the performance, we consider two settings of heterogeneous fleets. The first fleet consists of three heterogeneous vehicles (named V3), whose capacities are set to 20, 25, and 30, respectively. The second fleet consists of five heterogeneous vehicles (named V5), whose capacities are set to 20, 25, 30, 35, and 40, respectively. Our method is evaluated with different customer sizes for the two fleets, i.e., 40, 60, 80, 100 and 120 customers for V3, and 80, 100, 120, 140 and 160 for V5. In MM-HCVRP, we set the vehicle speed $f$ to 1.0 for all vehicles for simplicity. However, our method is capable of coping with different speeds, which is verified on MS-HCVRP.
Pertaining to MS-HCVRP, most settings are the same as for MM-HCVRP except for the vehicle speeds, which are inversely proportional to the capacities. This prevents the degenerate solution in which only the vehicle with the largest capacity serves all customers so as to minimize the total travel time. Particularly, the speeds are set to $\frac{1}{4}$, $\frac{1}{5}$, and $\frac{1}{6}$ for V3, and $\frac{1}{4}$, $\frac{1}{5}$, $\frac{1}{6}$, $\frac{1}{7}$, and $\frac{1}{8}$ for V5, respectively.
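For concreteness, the following Python sketch reproduces this sampling protocol for a single instance. The fleet definitions mirror the V3/V5 settings above, while the function and variable names (e.g., \texttt{generate\_instance}) are purely illustrative and not part of our released code.
\begin{verbatim}
import numpy as np

# Fleet settings used in the experiments: capacities for V3/V5 and, for
# MS-HCVRP, speeds inversely proportional to the capacities.
FLEETS = {
    "V3": {"capacity": [20, 25, 30], "speed": [1/4, 1/5, 1/6]},
    "V5": {"capacity": [20, 25, 30, 35, 40],
           "speed": [1/4, 1/5, 1/6, 1/7, 1/8]},
}

def generate_instance(n_customers, fleet="V3", min_sum=False, seed=None):
    """Sample one HCVRP instance following the described protocol."""
    rng = np.random.default_rng(seed)
    # Depot and customer coordinates are uniform in the unit square
    # (index 0 is the depot).
    coords = rng.uniform(0.0, 1.0, size=(n_customers + 1, 2))
    # Customer demands are integers in {1, ..., 9}; the depot demand is 0.
    demands = np.concatenate(([0], rng.integers(1, 10, size=n_customers)))
    capacity = np.array(FLEETS[fleet]["capacity"], dtype=float)
    # MM-HCVRP uses unit speed for all vehicles; MS-HCVRP uses 1/4, 1/5, ...
    speed = (np.array(FLEETS[fleet]["speed"]) if min_sum
             else np.ones(len(capacity)))
    return {"coords": coords, "demands": demands,
            "capacity": capacity, "speed": speed}

# Example: one MM-HCVRP instance with 40 customers and three vehicles.
instance = generate_instance(40, fleet="V3", min_sum=False, seed=0)
\end{verbatim}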
The same hyperparameters are used to train the policy for all problem sizes. Similar to~\cite{kool2018attention}, the training instances are randomly generated on the fly, with 1,280,000 instances per iteration split into 2,500 batches. Regarding the number of iterations, more iterations normally lead to better performance; however, once the improvement becomes marginal, the training can be stopped before full convergence while still delivering competitive, although not the best, performance.
For example, for the V5-C160 model (5 vehicles and 160 customers) with the min-max objective trained for 50 iterations, 5 additional iterations reduce the gap by less than 0.03\%, so we stop the training at that point. In our experiments, we use 50 iterations for all problem sizes to demonstrate the effectiveness of our method, while more iterations could be adopted in practice for better performance.
The features of nodes and vehicles are embedded into a 128-dimensional space before being fed into the vehicle-selection and node-selection decoders, and the dimension of the hidden layers in the decoders is set to 128~\cite{bello2017neural,nazari2018reinforcement,kool2018attention}. In addition, the Adam optimizer is employed to train the policy parameters, with an initial learning rate of $10^{-4}$ that decays by a factor of 0.995 per iteration for convergence. The norm of all gradient vectors is clipped to 3.0, and $\alpha$ in Section~\ref{sec:training} is set to 0.05.
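A minimal PyTorch-style sketch of this optimizer configuration is shown below, assuming a placeholder network and loss in place of our actual encoder, decoders, and REINFORCE objective; only the learning rate, decay factor, and clipping threshold are taken from the settings above.
\begin{verbatim}
import torch

# Placeholder network standing in for the encoder and the vehicle/node
# selection decoders; the 128-dimensional layers follow the settings above.
policy = torch.nn.Sequential(
    torch.nn.Linear(128, 128), torch.nn.ReLU(), torch.nn.Linear(128, 128)
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
# The learning rate decays by a factor of 0.995 after every iteration.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.995)

def run_iteration(batches, loss_fn):
    """One training iteration: a pass over the batches, then one LR decay."""
    for batch in batches:
        loss = loss_fn(policy, batch)  # REINFORCE-style loss (problem specific)
        optimizer.zero_grad()
        loss.backward()
        # Clip the norm of all gradient vectors to 3.0 before the update.
        torch.nn.utils.clip_grad_norm_(policy.parameters(), max_norm=3.0)
        optimizer.step()
    scheduler.step()

# Usage with dummy data and a dummy loss, purely to illustrate the call pattern.
dummy_batches = [torch.randn(32, 128) for _ in range(4)]
run_iteration(dummy_batches, lambda net, x: net(x).pow(2).mean())
\end{verbatim}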
Each iteration consumes an average training time of 31.61m (minutes), 70.52m (with a single 2080Ti GPU), 93.02m, 143.62m (with two GPUs) and 170.14m (with three GPUs) for problem sizes of 40, 60, 80, 100 and 120 of V3, and 105.25m, 135.49m (with two GPUs), 189.15m, 264.45m and 346.52m (with three GPUs) for problem sizes of 80, 100, 120, 140 and 160 of V5. Pertaining to testing, 1,280 instances are randomly generated for each problem size from the uniform distribution and are fixed for our method and all baselines. Our DRL code in PyTorch is available.\footnote{\href{https://github.com/Demon0312/HCVRP\_DRL}{https://github.com/Demon0312/HCVRP\_DRL}}
\subsection{Comparison Analysis of HCVRP}
\label{sec:comparison}
For the MM-HCVRP, it is prohibitively time-consuming to find optimal solutions, especially for large problem sizes. Therefore, we adopt a variety of improved classical heuristic methods as baselines, which include: 1) Slack Induction by String Removals (SISR)~\cite{christiaens2020slack}, a state-of-the-art heuristic method for CVRP and its variants; 2) Variable Neighborhood Search (VNS), an efficient heuristic method for solving the consistent VRP~\cite{xu2018variable}; 3) Ant Colony Optimization (ACO), an improved version of the ant colony system for solving HCVRP with time windows~\cite{palma2019two}, where we run the solution construction for all ants in parallel to reduce computation time; 4) Firefly Algorithm (FA), an improved version of the standard FA method for solving the heterogeneous fixed fleet vehicle routing problem~\cite{matthopoulos2019firefly}; 5) the state-of-the-art DRL-based attention model (AM)~\cite{kool2018attention}, which learns a node-selection policy to construct solutions for TSP and CVRP. We adapt the objectives and relevant settings of all baselines so that they share the same objective as MM-HCVRP.
We fine-tuned the adjustable parameters of the conventional heuristic methods using grid search~\cite{brito2005grid}, e.g., the number of shifted points in the shaking process, the discounting rate of the pheromones, and the population size, and report the best values in Table~\ref{parameter}. Regarding the iterations, we linearly increase the original numbers for VNS, ACO and FA as the problem size scales up for better performance, whereas the original settings adopt identical iteration numbers across all problem sizes. For SISR, we follow its original setting, where the number of iterations increases with the problem size.
To fairly compare with AM, which originally does not handle vehicle selection, we adopt two external strategies to select a vehicle at each decoding step, i.e., \emph{by turns} and \emph{randomly}. The results indicate that vehicle selection \emph{by turns} performs better for AM, which is thereby adopted for both the min-max and min-sum objectives in our experiments.
Note that we do not compare with OR-Tools, as it has no built-in library or function that can directly solve MM-HCVRP. Moreover, we do not compare with Gurobi or CPLEX either, as our experience shows that they consume days to optimally solve an MM-HCVRP instance even with 3 vehicles and 15 customers. For the MS-HCVRP, in addition to the same heuristic baselines and AM as used for the MM-HCVRP, we adopt a generic exact solver for vehicle routing problems with the min-sum objective~\cite{pessoa2020generic}.
The baselines VNS, ACO, and FA are implemented in Python. For SISR, we adopt a publicly available version\footnote{\href{https://github.com/chenmingxiang110/tsp\_solver}{https://github.com/chenmingxiang110/tsp\_solver}} implemented in JAVA. Note that the running efficiency of the same algorithm implemented in C++, JAVA, and Python could differ considerably, which is also taken into account later in the running-time comparison\footnote{A program implemented in C/C++ might be 20-50 times faster than one in Python, especially for large-scale problem instances. The running efficiency of JAVA could be comparable to C/C++ with highly optimized coding, but is in general slightly slower.}. All these baselines are executed on CPU servers equipped with the Intel i9-10940X CPU at 3.30 GHz. For those which consume much longer running time, we deploy them on multiple identical servers.
Regarding our DRL method and AM, we apply two types of decoding during testing: 1) Greedy, which always selects the vehicle and node with the maximum probability at each decoding step; 2) Sampling, which generates $\mathcal{N}$ solutions by sampling according to the probabilities computed in Eq.~(\ref{eq:veh_prob}) and Eq.~(\ref{eq:node_prob}) and then retrieves the best one. We set $\mathcal{N}$ to 1280 and 12800, and term them Sample1280 and Sample12800, respectively.
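The sketch below illustrates the two decoding strategies. The callables \texttt{step\_fn} (returning the per-step vehicle/node probabilities of Eq.~(\ref{eq:veh_prob}) and Eq.~(\ref{eq:node_prob})) and \texttt{objective\_fn} (evaluating a complete solution) are stand-ins for the trained policy and the min-max or min-sum objective; they are assumptions for illustration rather than the released interface.
\begin{verbatim}
import numpy as np

def decode(step_fn, objective_fn, instance,
           strategy="greedy", n_samples=1280, seed=0):
    """Greedy decoding, or sampling decoding (Sample1280 / Sample12800)."""
    rng = np.random.default_rng(seed)

    def rollout(sample):
        partial, done = [], False
        while not done:
            probs, actions, done = step_fn(instance, partial)
            k = (int(rng.choice(len(probs), p=probs)) if sample
                 else int(np.argmax(probs)))
            partial.append(actions[k])
        return partial

    if strategy == "greedy":
        # Always pick the vehicle/node with the largest probability.
        return rollout(sample=False)
    # Sampling: draw many complete solutions and retrieve the best one.
    candidates = (rollout(sample=True) for _ in range(n_samples))
    return min(candidates, key=lambda sol: objective_fn(instance, sol))
\end{verbatim}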
We then record the performance of our method and the baselines on all sizes of MM-HCVRP and MS-HCVRP instances, for both three and five vehicles, in Table~\ref{tab:minmax_v3} and Table~\ref{tab:minmax_v5}, respectively, reporting the average objective value, gap, and per-instance computation time. Given that it is prohibitively time-consuming to solve MM-HCVRP optimally, the gap is calculated by comparing the objective value of a method with the best one found among all methods.
\begin{figure}[t]
\centering
\setlength{\belowcaptionskip}{-0.3cm}
\includegraphics[width=0.47\textwidth,height=54mm,trim=10 5 10 20]{figs/converge_v3.png}
\caption{The convergence curves of the DRL method and SISR (V3).}
\label{fig:converge_v3}
\end{figure}
\begin{figure}[t]
\centering
\setlength{\belowcaptionskip}{-0.3cm}
\includegraphics[width=0.47\textwidth,height=54mm,trim=10 5 10 20]{figs/converge_v5.png}
\caption{The convergence curves of the DRL method and SISR (V5).}
\label{fig:converge_v5}
\end{figure}
From Table~\ref{tab:minmax_v3}, we observe that for the MS-HCVRP with three vehicles, the Exact-solver achieves the smallest objective value and gap and consumes less computation time than the heuristic methods on V3-C40 and V3-C60. However, its computation time grows exponentially as the problem size scales up; we therefore do not report results of the Exact-solver on instances with more than 100 customers for either three or five vehicles, as it would consume a prohibitively long time. Among the three variants of our DRL method, DRL(Greedy) outperforms FA and AM(Greedy) in terms of objective value and gap. Although they require slightly longer computation time, both Sample1280 and Sample12800 achieve smaller objective values and gaps than Greedy, which demonstrates the effectiveness of the sampling strategy in improving solution quality. Specifically, our DRL(Sample1280) outperforms all AM variants and ACO. It also outperforms VNS in most cases except for V3-C40, where DRL(Sample1280) achieves the same gap as VNS. With Sample12800, our DRL further outperforms VNS in terms of objective value and gap and is only slightly inferior to the state-of-the-art heuristic SISR and the Exact-solver. Pertaining to running efficiency, although the computation time of our DRL method is slightly longer than that of AM, it is significantly shorter (at least an order of magnitude) than that of the conventional methods, even if we eliminate the impact of different programming languages by roughly dividing the reported running times by a constant (e.g., 30), especially for large problem sizes. Regarding the MM-HCVRP, our DRL method similarly outperforms VNS, ACO, FA and all AM variants, and performs only slightly worse than SISR while consuming much shorter running time. Among all heuristic and learning-based methods, the state-of-the-art SISR achieves the lowest objective value and gap; however, its computation time grows almost exponentially as the problem scale increases, whereas that of our DRL(Sample12800) grows almost linearly, which is more obvious for large problem sizes.
In Table~\ref{tab:minmax_v5}, patterns similar to the three-vehicle case can be observed, where the superiority of DRL(Sample12800) over VNS, ACO, FA and AM becomes more obvious. Meanwhile, our DRL method remains competitive with the state-of-the-art SISR on these larger problem sizes compared with Table~\ref{tab:minmax_v3}.
Combining both tables, our DRL method with Sample12800 achieves better overall performance than the conventional heuristics and AM on both MM-HCVRP and MS-HCVRP, and also performs competitively against SISR, with satisfactory computation time.
To further investigate the efficiency of our DRL method against SISR, we evaluate their performance under a bounded time budget of 500 seconds, which is much longer than the computation time of our method, given that our DRL constructs a solution directly rather than iteratively improving one as SISR does.
In Fig.~\ref{fig:converge_v3}, we record the performance of our DRL method and SISR for MS-HCVRP with three vehicles on the same instances as in Table~\ref{tab:minmax_v3} and Table~\ref{tab:minmax_v5}, where the horizontal axis refers to the computation time and the vertical axis to the objective value. We depict the computation time of our DRL method using a hollow circle and extend it horizontally for easier comparison. We plot the curve of SISR over time since it iteratively improves an initial yet complete solution, and we mark the time at which SISR reaches the same objective value as our method with a filled circle. We can observe that SISR needs longer computation time to catch up with the DRL method as the problem size scales up. When the computation time of SISR reaches 500 seconds, our DRL method achieves only slightly inferior objective values with much shorter computation time. For example, the DRL method needs only 9.1 seconds to solve a V3-C120 instance, while SISR needs about 453 seconds to achieve the same objective value. In Fig.~\ref{fig:converge_v5}, we record the results of our DRL method and SISR for five vehicles, also with a 500-second budget. Patterns similar to the three-vehicle case can be observed, where the superiority of our DRL method is more obvious, especially for large problem sizes. For example, on V5-C140 and V5-C160, our DRL method, using 11.3 and 13.84 seconds respectively, even outperforms SISR with 500 seconds. Combining all results above, we conclude that with a relatively short time limit, our DRL method tends to achieve better performance than the state-of-the-art SISR, and its superiority is more obvious for larger problem sizes. Even with a time limit much longer than 500 seconds, our DRL method still achieves competitive performance against SISR.
\begin{figure}[t]
\centering
\setlength{\belowcaptionskip}{-0.4cm}
\includegraphics[width=0.48\textwidth,height=43mm,trim=5 1 5 20]{figs/generalize_minmax_v3.png}
\caption{Generalization performance for MM-HCVRP (V3).}
\label{fig:comparison_v3}
\end{figure}
{eq:SEP_SS} have been employed, while $m_2=m_1+0.3$ and $\alpha_2=\alpha_1+0.3$. It is shown that the performance improves when the fading parameter $m_i$ increases and/or the shadowing parameter $\alpha_i$ decreases, with the former having the highest impact. Moreover, an important difference between the performance of the DS and SS channels is observed. This difference can easily be explained by considering that in the DS scenario shadowing is present in both scattering regions, around the Tx and around the Rx, whereas in the SS scenario shadowing is present in only one of the two regions, which results in improved communication conditions. In Fig.~\ref{fig:2}, the normalized capacity $C/BW$ is plotted as a function of $\overline{\gamma}$ for different values of $\alpha_i$. To obtain this figure, \eqref{eq:DS_capacity} and \eqref{eq:capacity_SS} have been employed, with $m_1=1.5$, $m_2=1.8$, and $\alpha_2=\alpha_1+0.1$. It is interesting to note that the performance degrades as the $\alpha_i$ increase, at a decreasing rate. Finally, {Monte Carlo simulation results}\footnote{Values of the RVs under investigation have been generated in MATLAB\texttrademark using their definitions, i.e., \eqref{eq:R_definition} and \eqref{eq:R_2nd_definition} for the DS and SS scenarios, respectively.} have also been included in Figs.~\ref{fig:1}-\ref{fig:2}, verifying in all cases the correctness of the proposed framework.
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{fig1.eps}
\caption{BEP as a function of the average input SNR.}\label{fig:1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{fig2.eps}
\caption{Normalized capacity as a function of the average input SNR.}\label{fig:2}
\end{figure}
\section{UAV Selection Strategy}\label{sec:UAV_selection}
In scenarios where $L$ UAVs operate as flying access points, the users' quality of experience can be improved by exploiting the additional available resources. In this context, an efficient and practical approach, which is also proposed in this paper, is to select the UAV access point to connect to according to a specific UAV association policy. A well-known criterion that has been widely used in similar scenarios is to select the UAV that offers the best instantaneous received SNR per coherence time (CT) interval, e.g., \cite{Ji2019}. This approach, however, relies on accurate estimates of the received SNR, which may be unavailable in highly dynamic communication environments and/or in cases with increased distance between the Tx and the Rx; both of these conditions characterize UAV-enabled communications. An alternative approach, which is adopted in this paper, is to select the associated UAV by exploiting the information available about the shadowing behavior \cite{6310073,8584097}. This approach capitalizes on the larger decorrelation distance of the large-scale fading compared to the small-scale fading CT. In this context, the selected UAV is the one providing the maximum averaged received power, i.e., shadowing variable, over a predetermined time interval. It is noted that this approach may be applied in both the uplink and the downlink, since in the latter case an index is fed back to the Tx in order to proceed with the signal transmission.
With the above in mind, the received SNR of the proposed shadowing-based UAV-selection mechanism is given by
\begin{equation}
\gamma_{\rm out}=N_{i} \max_{\substack{1\leq i\leq L }} \{I_{i}\}=N_{i} I_{\max},
\end{equation}
where $N_{i}$ depends on the multipath fading of the selected UAV. In order to analytically evaluate the statistics of $\gamma_{\rm out}$, the PDF of $I_{\rm max}$ should be evaluated.
\subsection{Double-Shadowing Scenario}\label{subsec:DS_scenario}
As far as the DS scenario is concerned, the PDF of $I_{\max}$ can be evaluated with the aid of the following theorem.
\begin{theorem}
The PDF of $I_{\max}=\max\limits_{\substack{1\leq i\leq L }} \{I_{i}\}$ is given as
\begin{equation}\label{eq:PDF_max}
\begin{split}
f_{I_{\max}}(\gamma) = & \frac{2L}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\sum_{i_1=0}^{L-1} \sum_{i_2=0}^{i_1}
\binom{L-1}{i_1}\binom{i_1}{i_2} \sum_{\substack{n_0, \cdots, n_{2\alpha_1-1}=0\\ n_0+ \cdots+ n_{2\alpha_1-1}=i_2}}^{i_2} \mathcal{A} \frac{\overline{\gamma}^{q/2}}{\gamma^{q/2+1}} \\ & \times \exp\left( -2 i_2 \left(\frac{\overline{\gamma}}{\gamma}\right)^{1/2} \right) K_{\alpha_2-\alpha_1} \left( 2\left(\frac{\overline{\gamma}}{\gamma}\right)^{1/2}\right),
\end{split}
\end{equation}
where $$\sum_{\substack{n_0, \cdots, n_{2\alpha_1-1}=0\\ n_0+ \cdots+ n_{2\alpha_1-1}=i_2}}^{i_2} = \underset{n_0+n_1+\cdots+n_{2\alpha_1-1}=i_2}{\sum_{n_0=0}^{i_2} \sum_{n_1=0}^{i_2}\cdots \sum_{n_{2\alpha_{1}-1}=0}^{i_2}}, $$
while $\mathcal{A}=\frac{i_2!}{n_0! n_1! \cdots n_{2\alpha_1-1}!} \prod_{j=1}^{2\alpha_1-1} \left( \frac{2^j}{j!}\right)^{n_{j+1}} (-1)^{i_1+i_2},$ and $q=\alpha_1+\alpha_2+\sum_{j=1}^{2\alpha_1-1}j n_{j+1}$.
\end{theorem}
\begin{IEEEproof}
See Appendix~\ref{App:1}.
\end{IEEEproof}
Since $\gamma_{\rm out}$ is actually the product of two independent RVs, its PDF and CDF expressions can, respectively, be defined as
\begin{equation}\label{eq:final_PDF_CDF_defs}
\begin{split}
f_{\gamma_{\rm out}} (\gamma) & = \int_0^\infty \frac{1}{x} f_{N_i} \left( \frac{\gamma}{x}\right) f_{I_{\max}}(x)dx\\
F_{\gamma_{\rm out}} (\gamma) & = \int_0^\infty F_{N_i} \left( \frac{\gamma}{x}\right) f_{I_{\max}}(x)dx.
\end{split}
\end{equation}
The PDF of $\gamma_{\rm out}$ can be derived by substituting \eqref{eq:DN_PDF} and \eqref{eq:PDF_max} in \eqref{eq:final_PDF_CDF_defs} and using \cite[eq. (21)]{C:Adamchik_Meijer}, yielding
\begin{equation}\label{eq:PDF_DS}
\begin{split}
f_{\gamma_{\rm out}}(\gamma) = & \sum_{i_1=0}^{L-1} \sum_{i_2=0}^{i_1}
\binom{L-1}{i_1}\binom{i_1}{i_2} \sum_{\substack{n_0, \cdots, n_{2\alpha_1-1}=0\\ n_0+ \cdots+ n_{2\alpha_1-1}=i_2}}^{i_2} \\ & \frac{\times \gamma^{-1}\mathcal{A}L \mathcal{S}_1}{\left(i_2+1 \right)^{q-1/2}} G \substack{2,2
\\ 2,2} \left( \frac{\tilde{\gamma}\gamma}{ (i_2+1)^2}
\Big| \substack{\frac{3/2-q}{2},\frac{5/2-q}{2}
\\ \null
\\ m_1,m_2} \right).
\end{split}
\end{equation}
Moreover, the corresponding CDF is given by
\begin{equation}\label{eq:CDF_DS_out}
\begin{split}
F_{\gamma_{\rm out}}(\gamma) = & \sum_{i_1=0}^{L-1} \sum_{i_2=0}^{i_1}
\binom{L-1}{i_1}\binom{i_1}{i_2} \sum_{\substack{n_0, \cdots, n_{2\alpha_1-1}=0\\ n_0+ \cdots+ n_{2\alpha_1-1}=i_2}}^{i_2} \\ & \times \frac{\mathcal{A}L\mathcal{S}_1}{\left(i_2+1 \right)^{q-1/2}} G \substack{2,3
\\ 3,3} \left( \frac{\tilde{\gamma}\gamma}{ (i_2+1)^2}
\Big| \substack{1,\frac{3/2-q}{2},\frac{5/2-q}{2}
\\ \null
\\ m_1,m_2,0} \right).
\end{split}
\end{equation}
For evaluating the ASNR, \eqref{eq:PDF_DS} is substituted in the corresponding definition, i.e.,
\begin{equation}\label{eq:ASNR_def}
\begin{split}
\bar{\gamma}_{\rm out} &= \int_0^\infty
\gamma f_{\gamma_{\rm out}}(\gamma)d\gamma,
\end{split}
\end{equation}
and using \cite[eq. (7.811/4)]{B:Ryzhik_book}, the following closed-form expression is deduced
\begin{equation}\label{eq:moments_DS}
\begin{split}
\overline{\gamma}_{\rm out} = & \frac{L}{\Gamma(\alpha_1)\Gamma(\alpha_2)}
\sum_{i_1=0}^{L-1} \sum_{i_2=0}^{i_1} \binom{L-1}{i_1}\binom{i_1}{i_2}
\\ & \times \sum_{\substack{n_0, \cdots, n_{2\alpha_1-1}=0\\ n_0+ \cdots+ n_{2\alpha_1-1}=i_2}}^{i_2} \frac{\mathcal{A}\overline{\gamma} 2^{7/2-\alpha_1-\alpha_2} \sqrt{\pi} \Gamma\left(q-5/2 \right)}{\left(i_2+1 \right)^{q-5/2}} .
\end{split}
\end{equation}
An important insight that stems from \eqref{eq:moments_DS} is that the shaping parameters of the double-Nakagami distribution, i.e., the behavior of the multipath fading, have no impact on the ASNR of the proposed system.
\subsection{Single-Shadowing Scenario}
As far as the SS scenario is concerned, the PDF of $I_{\max}$ is given in \cite[eq. (15)]{8584097}. Based on this expression, and following an approach similar to the one presented in Section~\ref{subsec:DS_scenario}, the following closed-form expression for the PDF of $\gamma_{\rm out}$ is deduced
\begin{equation}
\begin{split}
f_{\gamma_{\rm out}} (\gamma)& = \frac{\mathcal{B} \mathcal{S}_2}{\gamma}\sum_{\substack{n_1, \cdots, n_{\alpha}=0\\ n_1+ \cdots+ n_{\alpha}=L-1}}^{L-1} G \substack{2,1
\\ 1,2} \left( \frac{\tilde{\gamma}\gamma}{L}
\Big| \substack{1-p
\\ \null
\\ m_1,m_2} \right),
\end{split}
\end{equation}
where $$\mathcal{B}=L^{-p} \frac{L!}{n_1!n_2!\cdots n_\alpha!}\prod_{j=1}^{\alpha} \left( \frac{1}{(j-1)!}\right)^{n_j},$$ and $p=\alpha+\sum_{j=1}^{\alpha} (j-1)n_j$. Moreover, the corresponding expression for the CDF of $\gamma_{\rm out}$ is given by
\begin{equation}\label{eq:CDF_SS_out}
\begin{split}
F_{\gamma_{\rm out}} (\gamma) & = \mathcal{B} \mathcal{S}_2\sum_{\substack{n_1, \cdots, n_{\alpha}=0\\ n_1+ \cdots+ n_{\alpha}=L-1}}^{L-1} G \substack{2,2
\\ 2,3} \left( \frac{\tilde{\gamma}\gamma}{ L}
\Big| \substack{1,1-p
\\ \null
\\ m_1,m_2,0} \right).
\end{split}
\end{equation}
Finally, for the scenario of SS, the ASNR is given by
\begin{equation}\label{eq:moments_SS}
\begin{split}
\overline{\gamma}_{\rm out} & = \frac{L\overline{\gamma}}{\Gamma(\alpha)}\sum_{\substack{n_1, \cdots, n_{\alpha}=0\\ n_1+ \cdots+ n_{\alpha}=L-1}}^{L-1}\mathcal{B} \Gamma\left( p-1\right).
\end{split}
\end{equation}
Here, it should be noted that the analytical results presented in the previous sections allow the system designer to predict the system performance in various communication scenarios, where this particular channel model can be applied. In the next section, it will be shown that the proposed channel model provides an excellent fit to real A2G communication environments.
\subsection{Numerical Results}
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{fig3.eps}
\caption{OP as a function of the normalized outage threshold for different architectures.}\label{fig:3}
\end{figure}
In Fig.~\ref{fig:3}, assuming the proposed UAV-selection strategy, the OP is plotted as a function of the normalized outage threshold ($\overline{\gamma}/\gamma_T$), for both channel models and different values {of the number of UAVs} $L$. For obtaining this figure, \eqref{eq:CDF_DS_out} and \eqref{eq:CDF_SS_out} have been employed, with $m_1=1.5,$ $m_2=1.8,$ $\alpha_1=2,$ $\alpha_2=2.5$. The figure shows that the performance improves as the number of UAVs increases, although the incremental gain diminishes as $L$ grows. Similar outcomes can be extracted from Fig.~\ref{fig:4}, where the ASNR is plotted as a function of $\overline{\gamma}$, also for both channel models and different values of $\alpha_i$. For obtaining this figure, \eqref{eq:moments_DS} and \eqref{eq:moments_SS} have been employed, with $\alpha_2=\alpha_1+0.5$. Once more, the performance is better for the DS model and improves as the $\alpha_i$ increase. It is worth noting that the gap among the curves is larger under severe shadowing conditions, i.e., $\alpha_1=3$, than under mild shadowing.
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{fig4.eps}
\caption{ASNR as a function of the average input SNR for different architectures.}\label{fig:4}
\end{figure}
\section{Empirical Results and Comparisons}
In this section, empirical results obtained in a measurement campaign are exploited. These results are used to justify the good fit of the considered channel model to empirical data, as well as to demonstrate the performance improvement achieved by the proposed UAV-selection strategy in real-world scenarios. The measurement campaign took place in Prague, in an urban pedestrian environment, with the Rx positioned at street level in the middle of a crossroad, while the Tx was mounted on the bottom part of a zeppelin-type airship. The airship moved at a constant speed of 6.2 m/s along a pre-defined route over the position of the Rx. The height above the ground was maintained at approximately 200 m, while the Rx was kept stationary, mounted on a tripod 1.7 m above the ground. The airship was equipped with a GPS sensor, enabling the Tx antennas to be pointed constantly towards the Rx by means of a positioner attached to the bottom part of the ship, and also allowing the distance between the Rx and the Tx to be calculated. The transmitted signals were received by a single dual-linearly polarized rectangular patch antenna (collocated). More details of the measurement campaign can be found in \cite{nikolaidis2017dual}.
\begin{table}[t]
\renewcommand{\arraystretch}{1.4}
\caption{K-L and K-S Goodness-of-Fit.}\label{tab:1} \centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{Test} & K-L & K-S \\ \hline
$\bf dN_I$ \textbf{(DIG)} & 6.7\% & 85.96\% \\ \hline
$\bf dN_I$ \textbf{(SIG)} & 6.75\% & 71.44\% \\ \hline
$\bf dK_G$ & 7.24\% & 42.62\% \\ \hline
\end{tabular}
\end{table}
Next, based on the statistical framework developed in the previous sections and the empirical data collected in \cite{nikolaidis2017dual}, various comparative results are presented. More specifically, three composite DSc distributions have been investigated, namely the double-Nakagami double-gamma, widely known as the double-generalized K ($\rm dK_G$) \cite{5426028}, the proposed double-Nakagami double IG (denoted as $dN_I$ (DIG)), and the double-Nakagami with single IG (denoted as $dN_I$ (SIG)). It is noted that the method of moments was employed for estimating the parameters of all distributions. In this context, in order to investigate the suitability of the proposed distributions in fitting the empirical data sets, well-known goodness-of-fit tests have been applied. In particular, the Kullback--Leibler (K-L) divergence criterion \cite{kullback1951information} and the Kolmogorov--Smirnov (K-S) test \cite[eq. (8.320)]{B:Papoulis_book} were used. As far as the K-L distance is concerned, it is given by \cite{sen2008vehicle}
\begin{equation}
d_{KL}=\frac12 \left( \sum_i p_i \log \left( \frac{p_i}{q_i}\right)+\sum_i q_i \log \left( \frac{q_i}{p_i}\right)\right),
\end{equation}
where $p_i$ and $q_i$ are the sets of the simulated and empirical PDF values, respectively. The distribution that best fits the measured data is the one that minimizes the K-L distance. Moreover, the K-S statistic is defined as
\begin{equation}
d_{KS}=\underset{x}{\max} \left| F_e(x)-F_t(x)\right|,
\end{equation}
where $F_e(x)$ and $F_t(x)$ denote the empirical and theoretical CDFs, respectively. By assuming a $95\%$ confidence level for all candidate distributions, a satisfactory fit can be ensured. The results are tabulated in Table~\ref{tab:1}, which shows that the K-L distance remains below $10\%$, with the models incorporating the IG distribution always yielding the smallest values for this criterion. Moreover, the passing rates of the K-S tests are quite encouraging in all cases, with the DIG distribution providing the best performance.
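For reference, a minimal Python sketch of the two goodness-of-fit measures defined above is given below; the binned PDF and CDF arrays are assumed to be precomputed on a common grid, and the small constant added to avoid division by zero is not part of the original definitions.
\begin{verbatim}
import numpy as np

def kl_distance(p, q, eps=1e-12):
    """Symmetrized K-L distance between simulated (p) and empirical (q)
    binned PDF values, as in the definition above."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def ks_statistic(empirical_cdf, theoretical_cdf):
    """K-S statistic: maximum absolute difference between the empirical
    and theoretical CDFs evaluated on the same grid."""
    return float(np.max(np.abs(np.asarray(empirical_cdf)
                               - np.asarray(theoretical_cdf))))
\end{verbatim}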
The good fit can also be verified in Fig.~\ref{fig:SEG_PDF}, where the theoretical and empirical PDFs and CDFs are plotted. {The estimated values of the distributions' parameters used for obtaining these two figures are provided in the table included in Fig.~\ref{fig:SEG_PDF}}. A general observation is that the investigated scenarios are characterized by NLoS conditions (for the end-to-end link) and strong scattering phenomena at the Rx side, due to the moving vehicles and surrounding buildings. This can easily be verified by the shapes of the PDFs and CDFs in Fig.~\ref{fig:SEG_PDF}, in which the direct LoS component is almost totally blocked. In all scenarios, the double-scattered signals may emanate from reflections off the vehicles and the multiple building facades of the structures surrounding the Rx.
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{FIGURE_6_7.eps}
\caption{Empirical and theoretical PDFs and CDF comparisons of a route segment.}\label{fig:SEG_PDF}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{MapEnchanced_v02.eps}
\caption{Measurement environment and the airship route. The red pin indicates the position of the Rx.}\label{fig:RouteOfUAVs}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{FigRSSI.eps}
\caption{Rx Power from 3 UAVs and the proposed selection strategy.}\label{fig:Rx_Power}
\end{figure}
The collected data, related to the received signal power, were split into $2$ (or $3$) segments in order to emulate the received power from $2$ (or $3$) independent UAVs. Each of these segments is assumed to belong to a ``virtual'' UAV (indicated with specific colors in Figs.~\ref{fig:RouteOfUAVs}~and~\ref{fig:Rx_Power}, i.e., blue for UAV 1, green for UAV 2, and yellow for UAV 3). Fig.~\ref{fig:RouteOfUAVs} depicts the ``virtual'' UAV routes. In this context, we are able to compare the performance of the proposed UAV selection strategy with that of the ``3 independent UAVs''. First, Fig.~\ref{fig:Rx_Power} depicts the received power acquired from the 3 ``virtual'' UAVs. The same figure also includes the corresponding output power of the proposed scheme, in which the Rx selects the UAV that offers the best SNR, on average, per stationarity region. This average SNR value is evaluated with the help of a sliding window of $\overline{W} = 40 \lambda$, where $\lambda=15$cm, which corresponds to 242 samples. In Fig.~\ref{fig:Proposed_Strategy}, the empirical CDF of the received power is plotted for the virtual UAVs, the proposed scheme, and an alternative UAV-selection approach. In the latter, the UAV selection is performed per channel CT interval, which is the most common strategy employed in scenarios where the best-SNR principle is adopted. The performance improvement due to the proposed shadowing-based selection strategy is apparent from Fig.~\ref{fig:Proposed_Strategy}. This observation also holds for the case where the selection is performed between 2 UAVs. It is of particular interest that the proposed strategy offers performance comparable to the CT-based selection. However, for this communication system, the CT was equal to $0.7\lambda$ (or $170$ samples). Thus, comparing the shadowing-based selection, performed per stationarity region, i.e., every $40\lambda$, with the CT-based selection, performed every $0.7\lambda$, shows that the proposed approach requires $57$ times fewer channel comparisons without any important performance loss. {Therefore, this reduction in channel comparisons is expected to reduce i) the signal processing complexity, ii) the switching operations (among the UAV links), and iii) the feedback overhead requirements.}
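A minimal Python sketch of the shadowing-based selection rule described above is given below; the 242-sample window corresponds to $40\lambda$ in this setup, while the array layout and function name are assumptions made for illustration and do not reproduce the processing of the measured data.
\begin{verbatim}
import numpy as np

def shadowing_based_selection(rx_power, window=242):
    """Per stationarity region, select the UAV with the largest mean power.

    rx_power is an (L, T) array of received-power samples from L candidate
    UAVs; window is the number of samples per stationarity region.
    """
    rx_power = np.asarray(rx_power, dtype=float)
    n_uavs, n_samples = rx_power.shape
    selected = np.empty(n_samples, dtype=int)
    output_power = np.empty(n_samples)
    for start in range(0, n_samples, window):
        stop = min(start + window, n_samples)
        # Average (shadowing-level) power of each UAV over the region.
        mean_power = rx_power[:, start:stop].mean(axis=1)
        best = int(np.argmax(mean_power))
        # The receiver follows the chosen UAV for the whole region.
        selected[start:stop] = best
        output_power[start:stop] = rx_power[best, start:stop]
    return selected, output_power
\end{verbatim}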
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio,width=4in]{FigProposedStrategy.eps}
\caption{Rx Power from all UAVs along with the proposed selection strategy and selection per sample.}\label{fig:Proposed_Strategy}
\end{figure}
\section{Conclusions}
In this paper, a new {generic} shadowed DSc channel model has been proposed for describing the received-signal behavior in UAV-enabled communication scenarios where DSc coexists with shadowing. Capitalizing on the {convenient mathematical form} of the new composite fading model, important stochastic metrics for the received SNR of a UAV communication system have been analytically derived. These expressions have then been used to study the performance of the considered system in terms of the OP, the BEP, and the channel capacity. Moreover, a new UAV-selection policy has been proposed that offers improved performance with reduced {signal processing} complexity. This new scheme exploits the slow variations of the signal mean values for the UAV association. In this context, novel closed-form expressions have been derived for the statistics of the new scheme's output SNR and then used to study its performance. The accuracy of the channel model, as well as the applicability of the proposed selection strategy, has also been verified with empirical data collected in an A2G measurement campaign, which also proves the usefulness of the derived results in real-world scenarios. {In our future work, various interesting issues will be addressed, including the impact of interfering effects in a multi-UAV network as well as the adoption of appropriate interference mitigation techniques.}
\appendices
\renewcommand{\theequation}{A-\arabic{equation}}
\setcounter{equation}{0}
\section{Proof for Theorem 1}\label{App:1}
In this Appendix, the proof for Theorem 1 is provided. With the help of the order statistics of independent RVs, the PDF of $I_{ \max}$ is defined as
\begin{equation}\label{eq:Ap1}
f_{I_{\max}} (\gamma)=Lf_{I_i}(\gamma)F_{I_i}(\gamma)^{L-1},
\end{equation}
where $f_{I_i}(\gamma)$ is given by \eqref{eq:DG_PDF} and $F_{I_i}(\gamma)$ is given by
\begin{equation}\label{Ap:2}
\begin{split}
F_{I_i}(\gamma)&= 1-\frac{1}{\Gamma(\alpha_1)\Gamma(\alpha_2)} G \substack{2,1
\\ 1,3} \left( \frac{\overline{\gamma}}{ \gamma}
\Big| \substack{1
\\ \null
\\ \alpha_1,\alpha_2,0} \right)\\
&\stackrel{(a)}{=} 1-\frac{2^{1-2\alpha_1}\sqrt{\pi}}{\Gamma(\alpha_1)\Gamma(\alpha_2)} \gamma\left( 2\alpha_1,2\sqrt{\frac{\overline{\gamma}}{\gamma}}\right)
\\
&\stackrel{(b)}{=} 1-\frac{2^{1-2\alpha_1}\sqrt{\pi} \Gamma(2\alpha_1)}{\Gamma(\alpha_1)\Gamma(\alpha_2)} \left[1-\exp\left( -2\left(\frac{\overline{\gamma}}{\gamma}\right)^{1/2}\right) \sum_{k=0}^{2\alpha_1-1} \frac{1}{k!} \left( 2\left(\frac{\overline{\gamma}}{\gamma}\right)^{1/2}\right)^k\right].
\end{split}
\end{equation}
In \eqref{Ap:2}, $(a)$ holds due to \cite[eq. (07.34.03.0732.01)]{E:wolfram} (with $\gamma\left(\cdot,\cdot\right)$ denoting the lower incomplete gamma function \cite[eq. (8.350/1)]{B:Ryzhik_book}), while $(b)$ holds due to \cite[eq. (8.353/1)]{B:Ryzhik_book}, under the assumption $2\alpha_1\in \mathbb{N}$. Next, in order to obtain a simplified expression for $F_{I_i}(\gamma)^{L-1}$, the binomial theorem has been employed twice, resulting in
\begin{equation}
\begin{split}
&F_{I_i}(\gamma)^{L-1} = \sum_{i_1=0}^{L-1} \sum_{i_2=0}^{i_1}\binom{L-1}{i_1} \binom{i_1}{i_2} (-1)^{i_1+i_2} \exp\left( - 2i_2\left(\frac{\overline{\gamma}}{\gamma}\right)^{1/2}\right) \left(\sum_{k=0}^{2\alpha_1-1} \frac{2^k}{k!} \left(\frac{\overline{\gamma}}{\gamma}\right)^{k/2} \right)^{i_2} .
\end{split}
\end{equation}
Finally, by employing the multinomial identity, \cite[eq. (24.1.2)]{abramowitz1964handbook}
\begin{equation}\begin{split}
\left( x_1+x_2+\cdots x_L\right)^n&= \underset{n_1+n_2+\cdots+n_L=n}{\sum_{n_1=0}^{n} \sum_{n_2=0}^{n}\cdots \sum_{n_L=0}^{n}} \frac{n!x_1^{n_1}x_2^{n_2}\cdots x_L^{n_L}}{n_1!n_2!\cdots n_L!}
\end{split}
\end{equation}
and using \eqref{eq:Ap1} and \cite[eq. (8.468)]{B:Ryzhik_book} finally yields \eqref{eq:PDF_max} and also completes this proof.
\section*{Acknowledgement}
The authors would like to acknowledge Prof. N. Hatzidiamantis for his valuable help on implementing the K-S tests. P. S. Bithas dedicates this article to Spilio G. Bitha.
\bibliographystyle{IEEEtran}
|
\section*{Acknowledgements}
\input{Acknowledgements}
\printbibliography
\end{document}
\section{Signal simulation}
\label{sec:signal_mc}
Signal samples were generated at next-to-leading order (NLO) in QCD with MadGraph5\_aMC@NLO 2.4.3 \cite{Alwall:2014hca},
using the LQ model of Ref.~\cite{Mandal:2015lca} that adds parton showers to previous fixed-order NLO QCD calculations~\cite{Kramer:2004df},
and the NNPDF 3.0 NLO \cite{Ball:2014uwa} parton distribution functions (PDF),
interfaced with \PYTHIA 8.212 \cite{Sjostrand:2014zea} using the A14 set of tuned parameters~\cite{ATL-PHYS-PUB-2014-021}
for the parton shower and hadronization.
The leptoquark signal production cross-sections are taken from calculations~\cite{xsref} of direct top-squark pair production,
as both are massive, coloured, scalar particles with the same production modes. The calculations are at NLO plus next-to-leading-logarithm accuracy, with uncertainties determined by variations of factorization and renormalization scales, $\alpha_\mathrm{s}$, and PDF variations.
MadSpin~\cite{Artoisenet:2012st} was used for the decay of the LQ. The parameter $\lambda$ was set to 0.3, resulting in
an LQ width
of about
0.2\% of its mass~\cite{Buchmuller:1986zs,Belyaev:2005ew}.
The samples were produced for a model parameter of $\beta=0.5$, where
desired branching ratios $B$ were obtained by reweighting the samples based on generator information.
Additional samples for $\beta=1$ are
used in the analysis
of the final state with two $b$-jets and two $\tau$-leptons.
Due to the difference between the model parameter $\beta$ and $B$ for third-generation LQs,
the branching ratio $\hat{B}$ in the simulated sample with $\beta=0.5$
can either be calculated from the given parameter $\lambda$ and the resulting decay width or be taken from the generator information of the MC sample.
It is shown in Figure \ref{fig:lq3_br} as a function of the leptoquark mass.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{./common_files/figures/lq3_br_noMCLines_Simulation}
\caption{Branching ratio into charged leptons for $\beta=0.5$ and different LQ masses for
\lqthreeu\ $\rightarrow b \tau / t \nu$ and \lqthreed~$\rightarrow t \tau / b \nu$.
}
\label{fig:lq3_br}
\end{figure}
The reweighting is based on the number of charged leptons $n_{\mathrm{cl}}$ at generator level originating directly from the decaying leptoquarks for each event. The weight $w$ depends on the $\hat{B}$ of the produced MC sample and on the $B$ of the decay channel and
is
\begin{equation}
\nonumber
w(B) = \left(\frac{B}{\hat{B}}\right)^{n_{\mathrm{cl}}} \times \left(\frac{1-B}{1-\hat{B}}\right)^{\left(2-n_{\mathrm{cl}}\right)}.
\end{equation}
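As a minimal sketch of this reweighting, the function below evaluates the per-event weight for a target branching ratio $B$, given the branching ratio $\hat{B}$ of the generated sample and the generator-level charged-lepton count; the numerical values in the usage example are illustrative only.
\begin{verbatim}
def lq_weight(n_cl, B, B_hat):
    """Per-event weight to reweight from the generated branching ratio B_hat
    (the beta = 0.5 sample) to a target branching ratio B into charged leptons.

    n_cl is the number of charged leptons at generator level originating
    directly from the two decaying leptoquarks, so n_cl is 0, 1 or 2.
    """
    return (B / B_hat) ** n_cl * ((1.0 - B) / (1.0 - B_hat)) ** (2 - n_cl)

# Illustrative usage: an event with one LQ -> b tau decay, reweighted from an
# assumed B_hat of 0.52 to B = 0.9.  For B = 1, only events with n_cl = 2
# keep a non-zero weight.
w = lq_weight(n_cl=1, B=0.9, B_hat=0.52)
\end{verbatim}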
\section{Conclusion}
\label{sec:conclusion}
Pair production of scalar third-generation
leptoquarks is investigated for all possible decays of the leptoquarks into a quark ($t$, $b$)
and a lepton ($\tau$, $\nu$) of the third generation.
LHC proton--proton collision
data recorded by the ATLAS detector in 2015 and 2016 at a centre-of-mass energy of \tttev\ are used,
with an integrated luminosity of \ilumi.
The results are based on reinterpretations of previously published ATLAS results as well as a dedicated search,
where no significant excess above the SM background expectation is observed.
Upper limits on the LQ pair-production cross-section as a function of the LQ mass
are reported for both the up-type (\lqthreeu\ $\rightarrow t \nu / b \tau$)
and down-type (\lqthreed\ $\rightarrow b \nu / t \tau$) leptoquarks for
branching ratios into charged leptons equal to zero, 0.5, or unity.
Based on the theoretical prediction for the LQ pair-production cross-section,
these upper limits on the cross-section can be converted to lower limits on the mass,
excluding leptoquarks with masses below about 1~\TeV\ for both LQ types and for both
limiting cases of branching ratios into charged leptons of zero or unity.
In addition, mass limits are shown
as a function of the branching ratio into charged leptons for both the up- and down-type leptoquarks.
Even for intermediate values of the branching ratio, masses below at least 800~\GeV\ are
excluded. These mass limits quickly increase to about 1~\TeV\ for small and large branching ratios.
\section{Limits on the LQ mass as a function of $B$}
\label{sec:results}
The limits on the cross-section and mass for a fixed value of $B$, chosen as the value expected to give the highest sensitivity for the respective analysis, are presented in the previous five sections.
Here, Figure~\ref{fig:br_vs_massU}
shows the limits on the LQ mass as a function of $B$ for
all five analyses for \lqthreeu\ and \lqthreed\ pair production.
The region to the left of the contour lines is excluded at \percent{95} confidence level.
The strongest limits in terms of mass exclusion are for \lqthreeu\ for
$B=1$ and $B = 0$ in the $b \tau b \tau$ and $t t +$\ETmiss channel, respectively, and for \lqthreed\ for $B= 0$
in the $b b +$\ETmiss channel. These are the cases where the channels are optimized ($b \tau b \tau$)
or optimal ($t t +$\ETmiss, $b b +$\ETmiss), as discussed in the introduction.
However, as can be seen from Figure~\ref{fig:br_vs_massU},
all channels exhibit good sensitivities to both types of LQs and to a larger range of $B$ values,
except for the \ttmetonel\ channel, which
is not sensitive to \lqthreed\ mainly due to the requirement of exactly one lepton.
Therefore, good sensitivity to both types of LQs at all values of $B$ is obtained,
excluding
masses below 800~\GeV\ for both
\lqthreeu\ and \lqthreed\ independently of the branching ratio, with masses below 1000~\GeV\ and 1030~\GeV\ (970~\GeV\ and 920~\GeV)
being excluded for the limiting cases of $B$ equal to zero and unity for \lqthreeu\ (\lqthreed).
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{common_files/figures/LQ3u_combined_final}
\includegraphics[width=0.8\linewidth]{common_files/figures/LQ3d_combined_final}
\caption{
Limits on the branching ratio into charged leptons
for scalar third-generation up-type (upper panel) leptoquark
pair production (\lqthreeu $\rightarrow b\tau / t\nu $) and down-type (lower panel) leptoquark
pair production (\lqthreed $\rightarrow t\tau / b\nu$) as a function of the leptoquark mass.
The limits are based on a dedicated LQ search for two $b$-jets and two $\tau$-leptons
($b \tau b \tau$),
and reinterpretations of the search for bottom-squark
pair production ($b b +$\ETmiss)~\cite{SUSY-2016-28}, for top-squark pair production with one
($t t +$\ETmiss-$1\ell$)~\cite{SUSY-2016-16}
or zero leptons ($t t +$\ETmiss-$0\ell$)~\cite{stop0LMoriond2017} in the final state, and for top squarks decaying
via $\tau$-sleptons ($\tau \tau b +$\ETmiss)~\cite{SUSY-2016-19}.
The region to the left of
the contour lines is excluded at \percent{95} confidence level.
}
\label{fig:br_vs_massU}
\end{figure}
\section{Introduction}
\label{sec:intro}
Leptoquarks (LQ) are predicted by many extensions \cite{dimop_suss,techni2, techni3, string, comp, pati_salam_colour,georgi_glashow_unification} of the Standard Model (SM) and provide a connection between the quark and lepton sectors, which
exhibit a similar structure in the SM.
Also, recent hints of a potential violation of lepton universality in measurements of $B$-meson decays can be attributed, if confirmed,
to the exchange
of LQs~\cite{Hiller:2014yaa, Gripaios:2014tna, Freytsis:2015qca, Bauer:2015knc, DiLuzio:2017chi, Buttazzo:2017ixm, Cline:2017aed}.
LQs are bosons carrying colour and fractional electrical charge. They possess non-zero baryon and lepton numbers and decay into a quark--lepton pair.
Leptoquarks can be scalar or vector bosons and can be produced singly or in pairs in proton--proton collisions.
This analysis focuses on the pair-production of third-generation scalar leptoquarks, i.e.\ LQs that decay into third-generation
SM particles.
The assumption that LQs can only interact with leptons and quarks of the same family follows the minimal Buchm\"uller--R\"uckl--Wyler model \cite{Buchmuller:1986zs},
which is the benchmark model used in this analysis.
The LQs couple to the quark--lepton pair via a Yukawa interaction. The couplings are determined by two parameters:
a model parameter $\beta$ and the coupling parameter $\lambda$. The coupling to the charged lepton ($\tau$) is given by $\sqrt{\beta} \lambda$, and
the coupling to the $\tau$-neutrino $\nu$ by $\sqrt{1-\beta}\lambda$.
The search is carried out for an up-type (\lqthreeu\ $\rightarrow t \nu / b \tau$)
and a down-type (\lqthreed\ $\rightarrow b \nu / t \tau$) leptoquark.
The LQ model
is identical to the one used for a recent ATLAS search for first- and second-generation scalar LQs
using the dataset from 2015 and 2016, consisting of \ifb{36.1} of data taken at \tttev \cite{Aaboud:2019jcc}.
In the following, the third-generation results are presented for that same dataset, where all possible decays of the pair-produced
\lqthreeu\ and \lqthreed\ into a quark ($t$, $b$) and a lepton ($\tau$, $\nu$) of the third generation are considered.
The results are presented as a function of the leptoquark mass and the branching ratio ($B$) into charged leptons,
in contrast to using mass and $\beta$ as done for the first and second generations.
This is due to the fact that $\beta$ is not equal to the branching ratio for third-generation LQs due to the sizeable top-quark mass.
Previous ATLAS results for third-generation LQs for the case $B = 0$ for both \lqthreeu\ and \lqthreed\
used the $\sqrt{s} = 8$~\TeV\ dataset from 2012 \cite{EXOT-2014-03}.
The most recent CMS results are based on the 2016 dataset targeting $B = 0$ for both \lqthreeu\ and \lqthreed\ \cite{CMS-SUS-18-001}
as well as $B = 1$ for \lqthreeu\
\cite{Sirunyan:2018vhk}
and \lqthreed\ \cite{Sirunyan:2018nkj}.
The results presented here are from a
dedicated LQ search based on a search for pair-produced Higgs bosons
decaying into two $b$-jets and two $\tau$-leptons~\cite{HIGG-2016-16},
where the search
is optimized for the \lqthreeu\ pair production
with $B \approx 1$, and four
reinterpretations of ATLAS searches for supersymmetric particles.
Supersymmetric particles can have similar or even identical experimental signatures and very similar kinematics to pair-produced LQs.
Pair production of the supersymmetric partner of the top (bottom) quark, known as the top (bottom) squark,
has the same experimental signature of a \antibar{t}-pair (\antibar{b}-pair) and missing transverse momentum
as \lqthreeu\ (\lqthreed) pair production with
$B =0$ (see Figure~\ref{fig:feynman-diagrams}).
Hence, the ATLAS searches for top squarks in final states with one \cite{SUSY-2016-16} or zero \cite{stop0LMoriond2017} leptons
and for bottom squarks \cite{SUSY-2016-28}
are optimal when searching for \lqthreeu\ and \lqthreed\ with $B = 0$, respectively.
The final state of two $\tau$-leptons, $b$-jets, and missing transverse momentum
is targeted in another top-squark pair-production search~\cite{SUSY-2016-19}
and is expected to be sensitive to medium and high branching ratios into charged leptons.
For all analyses, the results are presented as a function of $B$ and the leptoquark mass for both \lqthreeu\ and \lqthreed.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\linewidth]{./common_files/figures/LQLQ}
\includegraphics[width=0.49\linewidth]{./common_files/figures/LQLQd}
\caption{
Pair production and decay of \lqthreeu\ and \lqthreed.
}
\label{fig:feynman-diagrams}
\end{center}
\end{figure}
The paper is structured as follows. After a brief description of the ATLAS detector, the Monte Carlo (MC) simulations
of LQ pair production are discussed.
This is followed by a description of the five analyses, starting with the
dedicated search for two $b$-jets and two $\tau$-leptons.
Four brief sections describe the reinterpretations
of searches for supersymmetric particles,
as the published analyses are not modified for this reinterpretation.
Each of the five analysis sections finish with cross-section and mass limits for a fixed value of $B$ to which
the analysis is particularly sensitive.
Finally, the results of all analyses are presented as a function of $B$
and leptoquark mass.
\section{ATLAS detector}
\label{sec:detector}
The ATLAS detector~\cite{PERF-2007-01} is a multipurpose particle detector at the LHC with nearly $4\pi$ coverage around the collision point.\footnote{
ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector
and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates ($r$,$\phi$) are used in the transverse plane, $\phi$ being the azimuthal angle around the beam pipe.
The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta= -\ln\tan(\theta/2)$. Angular distance is measured in units of $\Delta R = \sqrt{(\Delta \eta)^2+(\Delta \phi)^2}$.}
Closest to the beam is the inner detector, which provides charged-particle tracking in the range $|\eta| < 2.5$. During the LHC shutdown between Run 1 and Run 2, a new innermost layer of silicon pixels was added, which improves the track impact parameter resolution and vertex position resolution~\cite{ATLAS-TDR-19,ATL-PHYS-PUB-2015-051}.
The inner detector is surrounded by a superconducting solenoid providing a 2\,T axial magnetic field,
followed by a lead/liquid-argon electromagnetic sampling calorimeter and a steel/scintillator-tile hadronic calorimeter.
The endcap and forward regions are instrumented with liquid-argon calorimeters for both the electromagnetic and hadronic energy measurements up to $|\eta|=4.9$.
The outer part of the detector consists of a muon spectrometer with high-precision tracking chambers for coverage up to $|\eta|=2.7$,
fast detectors for triggering over $|\eta| < 2.4$, and three large superconducting toroidal magnets with eight coils each.
Events are selected by a two-level trigger system consisting of a hardware-based trigger for the first level and a software-based system for the second level~\cite{TRIG-2016-01}.
\section{The $b \tau b \tau$ channel}
To search for pair-produced scalar leptoquarks decaying into \btaubtau,
final states in which one $\tau$-lepton decays leptonically and the other hadronically (\tlhad), as well as the case in which both $\tau$-leptons decay hadronically (\thadhad), are considered. This analysis utilizes the same strategy as the ATLAS search for pair-produced Higgs bosons decaying into $bb \tau\tau$ final states~\cite{HIGG-2016-16}, but optimized for a leptoquark signal decaying into $b \tau b \tau$.
\begin{figure}
\centering
\small
\includegraphics[width=0.75\textwidth]{di_higgs/acceptance_LQ3LQ3_up_btaubtau_smooth.pdf}
\caption{The acceptance times efficiency for the up-type leptoquark signal is shown for the \tlhad\ and \thadhad\ channels as a function of the leptoquark mass. The offline selection is summarized in Table \ref{selection}. The results are given separately for selected events with one and two $b$-tagged jets.
}
\label{fig:acceptance}
\end{figure}
Object reconstruction in the detector (electrons, muons, $\tau$-leptons, jets, and $b$-jets) employed for the \tlhad\ and \thadhad\ channels is the same as in Ref.~\cite{HIGG-2016-16}. The data were collected with three triggers: a single-lepton (electron or muon) trigger (SLT), a single-$\tau$-lepton trigger (STT), and a di-$\tau$-lepton trigger (DTT). The offline selection depends on the trigger and is summarized in Table \ref{selection}. Events in each channel must pass the associated trigger; otherwise the event is discarded. The selection criteria are chosen to optimize the trigger efficiency for the associated data samples. Events are split into categories according to the multiplicity of $b$-tagged jets. Events with one or two $b$-tagged jets are considered signal-like events (1-tag and 2-tag events), which form two separate signal regions. In events with one $b$-tag, the highest-\pt\ non-tagged jet is considered for leptoquark event reconstruction. The acceptance times efficiency for the \lqthreeu\ signal is shown in Figure \ref{fig:acceptance} for the \tlhad\ and \thadhad\ channels as a function of the leptoquark mass with $B = 1$. The decrease in acceptance times efficiency with leptoquark mass is driven by a combination of reduced $b$-tagging efficiency and the efficiency in pairing the bottom quark and $\tau$-lepton. After applying the selection criteria, boosted decision trees (BDTs) are used to improve the discrimination between signal and background. The BDTs are trained only on the up-type leptoquark signal. The sensitivity to the down-type leptoquark decay channel arises from the final state $t\tau t\tau \rightarrow Wb\tau Wb\tau$, where the \Wboson bosons decay into jets. Because this analysis does not veto additional jets, it is sensitive to this decay chain, although not optimally.
\begin{table}
\centering
\caption{Summary of applied event selection for the \tlhad\ and \thadhad\ channels.}
\label{selection}
\footnotesize
\begin{tabular}{ |l|m{0.85\textwidth}|}
\hline
\tlhad (SLT) &
\begin{itemize}
\item{Exactly one $e$ passing `tight' identification criteria or one $\mu$ passing `medium' identification criteria~\cite{PERF-2013-03,PERF-2015-10}. Events containing additional electrons or muons with `loose' identification criteria and \pt $>$ 7~\GeV\ are vetoed.}
\item{Exactly one hadronically decaying $\tau$-lepton with transverse momentum (\pt) $>$~25~\GeV\ and $|\eta| < 2.3$.}
\item{Opposite-sign charge between the $\tau$-lepton and the
light lepton ($e/\mu$).}
\item{At least two central jets in the event with \pt $>$ 60 (20)~\GeV\ for the
leading (subleading) jet.}
\end{itemize}\\
\hline
\thadhad (STT) &
\begin{itemize}
\item{Events containing electrons or muons with `loose' identification criteria and \pt $>$ 7~\GeV\ are vetoed.}
\item{Exactly two hadronically decaying $\tau$-leptons with $|\eta| < 2.3$. The leading $\tau$-lepton must have \pt $>$ 100, 140, or 180~\GeV\ for data periods where the trigger \pt~threshold was 80, 125, or 160~\GeV, respectively. The subleading $\tau$-lepton is required to have \pt\ $>$ 20~\GeV.}
\item{The two $\tau$-leptons must have opposite-sign charge. }
\item{At least two jets in the event with \pt $>$ 45 (20)~\GeV\ for the leading (subleading) jet.}
\end{itemize} \\
\hline
\thadhad (DTT) & \begin{itemize}
\item{Selected events from STT are vetoed as are events containing electrons or muons with `loose' identification criteria and \pt $>$ 7~\GeV.}
\item{Exactly two hadronically decaying $\tau$-leptons with $|\eta| < 2.3$. The leading (subleading) $\tau$-lepton must have \pt $>$ 60 (30)~\GeV.}
\item{The two $\tau$-leptons must have opposite-sign charge. }
\item{At least two jets in the event with \pt $>$ 80 (20)~\GeV\
for the leading (subleading) jet.}
\end{itemize}\\
\hline
\end{tabular}
\end{table}
BDTs are trained to separate the signal from the expected backgrounds, and the BDT score distributions are used as the final discriminant to test for the presence of a signal. The BDTs utilize several input variables, shown in the list below, including those derived from a mass-pairing strategy which extracts the most likely $b\tau$ pairs by minimizing the mass difference between the leptoquark candidates:
\begin{itemize}
\item{$s_\text{T}$: the scalar sum of missing transverse momentum (\met), the \pt\ of any reconstructed $\tau$-lepton(s), the \pt\ of the two highest-\pt\ jets, and the \pt\ of the lepton in \tlhad\ events}
\item{$m_{\tau, \mathrm{jet}}$: the invariant mass between the leading $\tau$-lepton and its mass-paired jet}
\item{$m_{\ell, \mathrm{jet}}$: the invariant mass between the lepton and its matching jet from the mass-pairing strategy (\tlhad\ only)}
\item{$\Delta R (\mathrm{lep}, \mathrm{jet})$: the $\Delta R$ between the electron or muon (leading $\tau$-lepton) and jet in \tlhad\ (\thadhad)}
\item{\MET $\phi$ centrality: quantifies the $\phi$ separation between the \met\ and $\tau$-lepton(s). Full definition is in Ref.~\cite{HIGG-2016-16}}
\item{$p_\text{T}^{\tau}$: the \pt\ of the leading $\tau$-lepton}
\item{$\Delta\phi(\mathrm{lep},\MET)$: the opening angle between the lepton and \met\ (\tlhad\ only)}
\end{itemize}
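As an illustration of the mass-pairing strategy, the following minimal Python sketch (with hypothetical input conventions; it is not the code used in the analysis) selects the assignment of visible leptons ($\tau_\mathrm{had}$ or $e/\mu$) to jets that minimizes the mass difference between the two leptoquark candidates:
\begin{verbatim}
# Illustrative sketch of the mass pairing; 4-vectors are (E, px, py, pz) in GeV.
import itertools
from math import sqrt

def inv_mass(p, q):
    e, px, py, pz = (p[i] + q[i] for i in range(4))
    return sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

def pair_by_mass_difference(leptons, jets):
    """Return the lepton-jet assignment minimizing |m(LQ1) - m(LQ2)|."""
    best, best_dm = None, float("inf")
    for j1, j2 in itertools.permutations(jets, 2):
        m1 = inv_mass(leptons[0], j1)   # leptoquark candidate 1
        m2 = inv_mass(leptons[1], j2)   # leptoquark candidate 2
        if abs(m1 - m2) < best_dm:
            best_dm = abs(m1 - m2)
            best = ((leptons[0], j1, m1), (leptons[1], j2, m2))
    return best
\end{verbatim}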
Kinematic distributions for \lephad\ and \hadhad\ signal regions in 2-tag events are shown in Figure \ref{fig:bdtinput}. Separate BDTs are trained for \tlhad\ and \thadhad\ categories for each mass point of the \lqthreeu\ MC sample and for each $b$-tag signal region.
The signal samples used in the training include events with a small range of leptoquark masses around the given mass point to ensure the BDT is sensitive to signals that have masses between the hypotheses simulated. In the \tlhad\ channel the training is performed against the dominant $t\bar{t}$ background only. BDTs for the \thadhad\ channel are trained against simulated $t\bar{t}$ and $Z\rightarrow \tau\tau$ events and multi-jet events from data.
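As a minimal illustration of such a training, the sketch below uses a generic gradient-boosting classifier from scikit-learn rather than the multivariate toolkit actually employed in the analysis; all array names are hypothetical.
\begin{verbatim}
# Illustrative BDT training on the input variables listed above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_bdt(signal_events, background_events):
    # Rows are events; columns are the input variables (sT, m_tau_jet, ...).
    X = np.vstack([signal_events, background_events])
    y = np.concatenate([np.ones(len(signal_events)),
                        np.zeros(len(background_events))])
    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                     learning_rate=0.1)
    bdt.fit(X, y)
    return bdt   # bdt.predict_proba(X_new)[:, 1] is the discriminant score
\end{verbatim}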
\begin{figure}
\centering
{
\includegraphics[width=0.43\textwidth]{di_higgs/LQplotsFeb/lephadInputVars/800/2tag_sT.pdf}
}
{
\includegraphics[width=0.43\textwidth]{di_higgs/LQplotsFeb/hadhadInputVars/800/2tag_sT.pdf}
}\\
{
\includegraphics[width=0.43\textwidth]{di_higgs/LQplotsFeb/lephadInputVars/800/2tag_MTauJet.pdf}
}
{
\includegraphics[width=0.43\textwidth]{di_higgs/LQplotsFeb/hadhadInputVars/800/2tag_LQ1M.pdf}
}\\
{
\includegraphics[width=0.43\textwidth]{di_higgs/LQplotsFeb/lephadInputVars/800/2tag_dRLepJet.pdf}
}
{
\includegraphics[width=0.43\textwidth]{di_higgs/LQplotsFeb/hadhadInputVars/800/2tag_METCentrality.pdf}
}
\caption{Kinematic distributions for \lephad\ (left) and \hadhad\ (right) signal regions in 2-tag events after performing the combined channel fit. The ratio of the data to the sum of the backgrounds is shown in the lower panel. The hatched bands indicate the combined statistical and systematic uncertainties in the background. Distributions include the scalar sum of transverse momentum of reconstructed objects in the event ($s_\text{T}$), the invariant mass between the leading $\tau$-lepton and its mass-paired jet ($m_{\tau, \mathrm{jet}}$), the invariant mass between the lepton and its matching jet from the mass-pairing strategy ($m_{\ell, \mathrm{jet}}$), the $\Delta R$ between the leading $\tau$-lepton and jet ($\Delta R (\mathrm{lep}, \mathrm{jet})$), and the \MET $\phi$ centrality, which quantifies the $\phi$ separation between the \met and $\tau$-lepton(s). }
\label{fig:bdtinput}
\end{figure}
The background estimation techniques in this analysis are the same as those used in Ref.~\cite{HIGG-2016-16} and are summarized here. In both channels, background processes containing true $\tau$-leptons are taken from simulation. The dominant background processes are $\ttbar$ and $\Ztautau$ produced in association with jets originating from heavy-flavour quarks ($bb,bc,cc$). The $\ttbar$ events are normalized using events with a low BDT output score in the \tlhad\ category. Events from $\Ztautau$ plus heavy-flavour jets are normalized using a control region of events containing two muons with an invariant mass consistent with that of the \Zboson\ boson. Backgrounds in which quark- or gluon-initiated jets are misidentified as hadronically decaying $\tau$-leptons are estimated using data-driven methods. In both channels, $\ttbar$ events in which one or more reconstructed $\tau$-leptons originate from misidentified jets (so-called `fake' $\tau$-leptons) are estimated separately from the other background sources. In the \tlhad\ channel all fake-$\tau$-lepton contributions from $\ttbar$, $W$+jets, and multi-jet processes are estimated using an inclusive fake-factor method, described in Ref.~\cite{HIGG-2016-16}. Theory uncertainties in the modeling of the $\ttbar$ and $Z$+jets backgrounds containing true $\tau_\mathrm{had}$ are assessed by varying the matrix element generator and the parton shower model, and by adjusting the factorization and renormalization scales along with the amount of additional radiation. The resulting variations in the BDT distributions are included as shape uncertainties in the final fit.
In the \thadhad\ channel, the fake-$\tau$-lepton $\ttbar$ component is estimated as follows: the probability for a jet from a hadronic $W$-boson decay to be reconstructed as a hadronically decaying $\tau$-lepton is measured in data. This probability is then used to correct the MC prediction, after subtracting the predicted number of true $\tau$-leptons from the MC. Three control regions are defined for both the \tlhad\ and \thadhad\ channels: 1-$b$-tag and 2-$b$-tag same-sign lepton regions, which are dominated by events with fake $\tau$-leptons, and a $\ttbar$ control region as defined in Ref.~\cite{HIGG-2016-16}. The uncertainty in the modeling is estimated by varying the fake-factors and fake-rates within their statistical uncertainties and by varying the amount of true $\tau_\mathrm{had}$ background subtracted. Systematic uncertainties in the extrapolation of the fake-$\tau_\mathrm{had}$ backgrounds from the control regions, where they are derived, to the signal regions are estimated by varying the control-region definition; an uncertainty due to the difference in the flavour composition of the jets faking the $\tau_\mathrm{had}$ is also assigned based on simulation.
The BDT responses in the 2-tag same-sign and top-quark control regions are shown in Figure \ref{fig:bdtoutput_CR}. The background modeling was checked in these control regions, which validates the prediction in the signal-sensitive region at high BDT score. The 1-tag and 2-tag \lephad\ and \hadhad\ signal regions are shown in Figures \ref{fig:bdtoutput_low} and \ref{fig:bdtoutput_high} for an up-type leptoquark sample with a mass of 400~\GeV\ ($B=1$) and a down-type leptoquark sample with a mass of 800~\GeV\ ($B=1$). The binning differs between the figures because it is determined by an algorithm that depends on the number of background events, and of signal events in the case of the signal regions, in each bin. Event yields for both the 1-tag and 2-tag regions are given in Table \ref{tab:postfitYields}; the quoted numbers are obtained after performing the combined fit described below.
\begin{figure}
\centering
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/lephadCRs/2tagSS_log.pdf}
}
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/lephadCRs/2tagtop_log.pdf}
}\\
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/hadhadCRs/2tagSS_log.pdf}
}
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/hadhadCRs/2tagtop_log.pdf}
}
\caption{BDT score distributions for \lephad\ (top) and \hadhad\ (bottom) 2-tag channels in the same-sign (left) and top-quark (right) control regions after performing the combined fit to all channels. The ratio of the data to the sum of the backgrounds is shown in the lower panel. The hatched bands indicate the combined statistical and systematic uncertainties in the background.}
\label{fig:bdtoutput_CR}
\end{figure}
\begin{figure}
\centering
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_up_400_lephad_1taglog.pdf}
}
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_up_400_lephad_2taglog.pdf}
}\\
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_up_400_hadhad_1taglog.pdf}
}
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_up_400_hadhad_2taglog.pdf}
}
\caption{BDT score distributions for \lephad\ (top) and \hadhad\ (bottom) channels in the 1-tag (left) and 2-tag (right) regions after performing the combined channel fit. The stacked histograms show the various SM background contributions, which are normalized to the expected cross-section. The hatched band indicates the total statistical and systematic uncertainty in the SM background. The error bars on the black data points represent the statistical uncertainty in the data yields. The dashed histogram shows the expected additional yields from a leptoquark signal model for an up-type leptoquark sample with a mass of 400~\GeV\ ($B=1$) added on top of the SM prediction. The ratio of the data to the sum of the backgrounds is shown in the lower panel.}
\label{fig:bdtoutput_low}
\end{figure}
\begin{figure}
\centering
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_down_800_lephad_1taglog.pdf}
}
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_down_800_lephad_2taglog.pdf}
}\\
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_down_800_hadhad_1taglog.pdf}
}
{
\includegraphics[width=0.49\textwidth]{di_higgs/LQplotsFeb/postfit_BDTscore/postfitBDTscore_down_800_hadhad_2taglog.pdf}
}
\caption{BDT score distributions for \lephad\ (top) and \hadhad\ (bottom) channels in the 1-tag (left) and 2-tag (right) regions after performing the combined channel fit. The stacked histograms show the various SM background contributions, which are normalized to the expected cross-section. The hatched band indicates the total statistical and systematic uncertainty in the SM background. The error bars on the black data points represent the statistical uncertainty in the data yields. The dashed histogram shows the expected additional yields from a leptoquark signal model for a down-type leptoquark sample with a mass of 800~\GeV\ ($B=1$) added on top of the SM prediction. The ratio of the data to the sum of the backgrounds is shown in the lower panel.}
\label{fig:bdtoutput_high}
\end{figure}
Systematic uncertainties are considered and propagated through the full analysis. The uncertainties in the luminosity, background modeling, and detector modeling
are calculated as in Ref.~\cite{HIGG-2016-16}. The impact of varying the renormalization/factorization scales and choice of PDF on the
signal acceptance was also investigated.
The fit strategy follows that in Ref.~\cite{HIGG-2016-16}. The BDT output score is the discriminating variable for all channels and signals in a single combined fit of all signal and control regions. A CL$_\mathrm s$ method \cite{Read:2000ru} based on one-sided profile-likelihood test statistics is used to test the signal hypothesis. In the profile-likelihood function, Gaussian constraints are used for shape systematic uncertainties and log-normal constraints for normalization uncertainties. The binning in the BDT output categories at high BDT output score is modified to ensure sufficient statistics. The Standard Model predictions are consistent with the data. Figure~\ref{fig:1d-limits-combined} shows the expected and observed 95\% confidence level (CL) upper limits on the cross-section for scalar up-type and down-type leptoquark pair production as a function of leptoquark mass for the combined \tlhad\ + \thadhad\ channels. The theoretical prediction for the cross-section of scalar leptoquark pair production is shown by the solid line, along with the uncertainties. These limits are used to set upper limits on the leptoquark branching ratio $B(\text{LQ}\rightarrow q \tau)$ as a function of the leptoquark mass. From the data, masses below 1030~\GeV\ and 930~\GeV\ are
excluded for \lqthreeu\ and \lqthreed, respectively, at 95\% CL for the case
of $B$ equal to unity. The expected exclusion ranges are 1030~\GeV\ and
930~\GeV, respectively.
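For reference, the CL$_\mathrm{s}$ criterion~\cite{Read:2000ru} combined with a one-sided profile-likelihood test statistic takes the standard form
\[
q_{\mu} = -2 \ln \frac{L\big(\mu,\hat{\hat{\theta}}(\mu)\big)}{L\big(\hat{\mu},\hat{\theta}\big)} \quad \mbox{for } \hat{\mu} \le \mu , \qquad q_{\mu} = 0 \mbox{ otherwise},
\]
where $\mu$ is the signal-strength parameter and $\theta$ denotes the nuisance parameters, and a signal hypothesis is excluded at 95\% CL if
\[
\mathrm{CL}_\mathrm{s} = \frac{p_{\mathrm{s}+\mathrm{b}}}{1-p_{\mathrm{b}}} < 0.05 .
\]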
\begin{table}
\caption{Post-fit expected numbers of signal and background events, determined from a background-only fit, compared to the observed number of data events after applying the
selection criteria and requiring at least one $b$-tagged jet. Both the up-type and down-type leptoquark samples here use $B=1$. In the \tlhad\ channel, the fake-$\tau$-lepton background includes all processes in which a jet is misidentified as a $\tau$-lepton, while in the \thadhad\ case the fake backgrounds from QCD multi-jet processes and $t\bar{t}$ production are derived separately. The $t\bar{t}$ background includes events with true $\tau_\mathrm{had}$ and the very small contribution from leptons misidentified as $\tau_\mathrm{had}$. The `Other' category includes contributions from $W$+jets, $Z$+jets, and diboson processes. The total background is not identical to the sum of the individual components since the latter are rounded for presentation, while the sum is calculated with the full precision before being rounded. The uncertainty in the total background is smaller than that in the $\ttbar$ and multi-jet backgrounds because these are strongly anti-correlated.
\label{tab:postfitYields}
\begin{center}
\setlength{\tabcolsep}{0.3pc}
\begin{tabular}{ |l|rcl|rcl|rcl|rcl| }
\hline
Sample & \multicolumn{12}{|c|}{Post-fit yield} \\\cline{2-13}
& \multicolumn{6}{|c|}{\tlhad} & \multicolumn{6}{c|}{\hadhad}\\\cline{2-13}
& \multicolumn{3}{c}{1-tag} & \multicolumn{3}{c|}{2-tag} & \multicolumn{3}{c}{1-tag} & \multicolumn{3}{c|}{2-tag}\\
\hline
$t\bar{t}$ & 17800 &$\pm$& 1500 & 14460 &$\pm$& 980 & 285 &$\pm$& 83 & 238 &$\pm$& 69\\
Single top & 2500 &$\pm$& 180 & 863 &$\pm$& 73 & 63 &$\pm$& 8 & 27 &$\pm$& 3\\
QCD fake-$\tau$ & &-& & &-& & 1860 &$\pm$& 110 & 173 &$\pm$& 34\\
$t\bar{t}$ fake-$\tau$ & &-& & &-& & 200 &$\pm$& 110 & 142 &$\pm$& 79\\
Fake-$\tau$ & 13900 &$\pm$& 1700 & 6400 &$\pm$& 1000 & &-& & &-& \\
$Z \to \tau\tau + (bb, bc, cc)$ & 520 &$\pm$& 160 & 285 &$\pm$& 83 & 258 &$\pm$& 64 & 156 &$\pm$& 36\\
Other & 2785 &$\pm$& 270 & 158 &$\pm$& 26 & 817 &$\pm$& 95 & 21 &$\pm$& 4\\
\hline
Total Background & 37510 &$\pm$& 220 & 22120 &$\pm$& 160 & 3482 &$\pm$& 59 & 756 &$\pm$& 27\\
\hline
Data & \multicolumn{3}{c|}{37527} & \multicolumn{3}{c|}{22117} & \multicolumn{3}{c|}{3469} & \multicolumn{3}{c|}{768}\\
\hline
$m(\lqthreeu) = 400$~\GeV\ & 2140 &$\pm$& 140 & 1950 &$\pm$& 160 & 1430 &$\pm$& 190 & 1430 &$\pm$& 200 \\
$m(\lqthreed) = 400$~\GeV\ & 1420 &$\pm$& 170 & 1096 &$\pm$& 82 & 850 &$\pm$& 110 & 672 &$\pm$& 88 \\
$m(\lqthreeu) = 800$~\GeV\ & 39.1 &$\pm$& 2.8 & 25.2 &$\pm$& 2.3 & 25.6 &$\pm$& 3.9 & 16.8 &$\pm$& 2.7\\
$m(\lqthreed) = 800$~\GeV\ & 23 &$\pm$& 2.3 & 16.6 &$\pm$& 1.4 & 17.8 &$\pm$& 2.8 & 12.4 &$\pm$& 2.2\\
$m(\lqthreeu) = 1500$~\GeV\ & 0.25 &$\pm$& 0.02 & 0.08 &$\pm$& 0.01 & 0.16 &$\pm$& 0.03 & 0.05 &$\pm$& 0.01\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\small
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=1.1\textwidth]{di_higgs/LQplotsFeb/limit_up.pdf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=1.1\textwidth]{di_higgs/LQplotsFeb/limit_down.pdf}
\end{minipage}
\caption{Expected and observed 95\% CL upper limits on the cross-section for up-type (left) and down-type (right) scalar leptoquark pair production with $B=1$ as a function of leptoquark mass for the combined \lephad\ and \hadhad\ channels. The observed limit is shown as the solid line. The thickness of the theory curve represents the theoretical uncertainty from PDFs, renormalization and factorization scales, and the strong coupling constant $\alpha_\mathrm{s}$.}
\label{fig:1d-limits-combined}
\end{figure}
\section{The $b b$ plus \ETmiss channel}
The search for direct bottom-squark pair production in either the zero- or one-lepton channel~\cite{SUSY-2016-28} is
reinterpreted in the context of LQ production. The analysis
in the zero-lepton channel targets LQs which both decay into a bottom quark and a neutrino, i.e.\ \lqthreed\ production with $B=0$.
The one-lepton channel, also requiring two $b$-tagged jets and \ETmiss, is expected to provide sensitivity to
\lqthreed\ production at intermediate values of $B$, as well as to small and intermediate values of $B$ for \lqthreeu\ production.
For the zero-lepton SRs (denoted by b0L), events with two high-\pt $b$-tagged jets, zero leptons, and a large amount of \met are selected.
An exclusive jet selection (requiring 2--4 jets) is applied, which prevents sizeable sensitivity to \lqthreeu\ production in this SR.
Events with large \met arising from mismeasured jets are rejected by requiring $\dphijetimet > 0.4$ for the four leading jets ($i = 1,\ldots,4$).
The two $b$-jets are required to be the two leading jets in the event and their invariant mass must be greater than 200~\GeV.
After these baseline selections, three SRs are constructed using increasingly tighter selections on the contransverse mass \mct\ \cite{Tovey:2008ui},
which
targets pair-produced heavy objects that each decay in an identical way into a visible and an invisible particle.
It is the main discriminating variable in the zero-lepton channel, with overlapping selections of $\mct >$ 350, 450, and 550~\GeV\ distinguishing the SRs.
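For reference, the contransverse mass of the two visible objects $v_1$ and $v_2$ (here the two $b$-jets) is defined in Ref.~\cite{Tovey:2008ui} as
\[
m_{\mathrm{CT}}^{2}(v_1,v_2) = \big[E_{\mathrm{T}}(v_1)+E_{\mathrm{T}}(v_2)\big]^{2}-\big[\mathbf{p}_{\mathrm{T}}(v_1)-\mathbf{p}_{\mathrm{T}}(v_2)\big]^{2} ,
\]
which, for massless visible objects, reduces to $2\,p_{\mathrm{T}}(v_1)\,p_{\mathrm{T}}(v_2)\,[1+\cos\Delta\phi(v_1,v_2)]$ and exhibits an endpoint for pair-produced heavy particles that each decay into a visible and an invisible particle.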
For the one-lepton SRs (denoted by b1L), events with two $b$-tagged jets, one lepton ($e/\mu$) with $\pt > 27$\,GeV, and large \met are selected.
Unlike the zero-lepton regions, an inclusive jet selection is used, with any two of the jets being tagged as $b$-jets. A selection of $\dphijetimet > 0.4$ for the four leading jets is used to reject events with large \met from mismeasured jets.
Selections on the minimum invariant mass of the lepton and one of the two $b$-jets, $m_{b,\ell}^{\min}$, and on \amtTwo are used to
reduce the \ttbar\ background, while a selection on \mt is used to reject $W$+jets events.
After applying the previously introduced selections, two overlapping SRs are designed using \meff, the scalar sum of the \pt\ of the jets and the \ETmiss, as the main discriminating variable. The SRs are designed with selections of either $\meff >$ 600 or 750~\GeV.
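A minimal sketch of the \meff-based selection (hypothetical inputs, in \GeV; not the analysis code) is:
\begin{verbatim}
# Effective mass and the two overlapping one-lepton signal regions (inputs in GeV).
def m_eff(jet_pts, met):
    # Scalar sum of the jet transverse momenta and the missing transverse momentum.
    return sum(jet_pts) + met

def b1l_signal_regions(jet_pts, met):
    meff = m_eff(jet_pts, met)
    return {"b1L_SRA600": meff > 600.0, "b1L_SRA750": meff > 750.0}
\end{verbatim}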
The dominant backgrounds in the analysis are dependent upon the lepton multiplicity of the SR under consideration. For the zero-lepton SRs, the main SM background is $Z$-boson production ($Z \rightarrow \nu\nu$) in association with $b$-jets. Other significant sources of background arise from \ttbar\ pair production, single-top $Wt$ production, and $W$-boson production in association with $b$-jets. Control regions are defined to constrain each of the aforementioned SM backgrounds; they are designed to be kinematically close, yet orthogonal, to the SRs and also
mutually
orthogonal to each other.
A two-lepton CR is used to constrain the $Z$+jets process, where the invariant mass of the leptons is required to be near the $Z$-boson mass.
The leptons are removed from the \met calculation to mimic the expected \met from the $Z \rightarrow \nu\nu$ process. The \ttbar, single-top, and $W$+jets processes are constrained in three one-lepton CRs: one CR with an inverted (relative to the SR) \amtTwo selection to create a region dominated by \ttbar; a region with an inverted \mT\ selection to constrain $W$+jets; and a final region with an inverted $m_{b,\ell}^{\min}$ selection to constrain single-top production. For the one-lepton SRs, the main SM backgrounds are \ttbar\ pair production and single-top production. A one-lepton CR is designed to constrain the \ttbar\ background by inverting the SR \amtTwo selection. The single-top background is constrained using the same CR as used for the zero-lepton single-top background. Signal contamination in these CRs is below 10\% in all regions.
\begin{table}
\caption{Number of observed events and background-only fit results in the SRs. The uncertainties contain both the statistical and
systematic uncertainties. Two example LQ signal samples are also shown for comparison, with various assumptions about $B(\mathrm{LQ} \rightarrow q\tau)$.}
\label{tab:sbottom_SRYields}
\small
\begin{center}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lrrrrr}
\noalign{\smallskip}\hline\noalign{\smallskip}
{SR selection} & b0L\_SRA350 & b0L\_SRA450 & b0L\_SRA550 & b1L\_SRA600 & b1L\_SRA750 \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed events & $81$ & $24$ & $10$ & $21$ & $13$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Fitted bkg events & $70.1 \pm 13.0$ & $21.4 \pm 4.5$ & $7.2 \pm 1.5$ & $23.0 \pm 5.4$ & $14.4 \pm 3.6$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\hline
$m_{\mathrm{LQ}}$ = 750~\GeV \\
\hline
$B$(\lqthreed\ $\rightarrow t\tau$) = 1.0 & $ < 0.1 $ & $ < 0.1 $ & $ < 0.1 $ & $ 0.4 \pm 0.2 $ & $ 0.4 \pm 0.2 $ \\
$B$(\lqthreed\ $\rightarrow t\tau$) = 0.5 & $ 28.4 \pm 1.7 $ & $ 18.1 \pm 1.5 $ & $ 7.6 \pm 0.9 $ & $ 5.1 \pm 0.8 $ & $ 5.0 \pm 0.9 $ \\
$B$(\lqthreed\ $\rightarrow t\tau$) = 0.0 & $ 107.1 \pm 6.7 $ & $ 68.3 \pm 5.8 $ & $ 29.6 \pm 3.7 $ & $ 0.3 \pm 0.2 $ & $ 0.3 \pm 0.2 $ \\
\hline
$B$(\lqthreeu\ $\rightarrow b\tau$) = 1.0 & $ 1.3 \pm 0.6 $ & $ 0.8 \pm 0.5 $ & $ 0.2 \pm 0.2 $ & $ 0.6 \pm 0.4 $ & $ 0.6 \pm 0.3 $ \\
$B$(\lqthreeu\ $\rightarrow b\tau$) = 0.5 & $ 2.4 \pm 0.4 $ & $ 1.5 \pm 0.3 $ & $ 0.3 \pm 0.1 $ & $ 10.2 \pm 1.1 $ & $ 9.6 \pm 0.1 $ \\
$B$(\lqthreeu\ $\rightarrow b\tau$) = 0.0 & $ 2.6 \pm 1.0 $ & $ 1.7 \pm 0.6 $ & $ 0.4 \pm 0.3 $ & $ 16.7 \pm 3.3 $ & $ 14.7 \pm 0.3 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\hline
\end{tabular*}
}
\end{center}
\end{table}
The statistical analysis is done in full analogy to the one described in Section~\ref{ttmet-1l}.
The number of observed events in data for each SR and the expected post-fit SM background yields are presented in Table \ref{tab:sbottom_SRYields}.
Two example LQ signal sample yields with \mlq\ = 750~\GeV\ are presented for comparison,
with various assumptions about the branching ratio.
Figure~\ref{fig:sbottom_SRPlots} presents the distributions of four key kinematic variables in the SRs, with two signal samples added on top of the SM background. The top row shows the \mct and \met in the zero-lepton b0L\_SRA350 region. The bottom row shows the \amtTwo and \meff\ in the one-lepton b1L\_SRA600 region.
The combination of the exclusion limits for the SRs is obtained by selecting the signal region with the
best expected limit for each mass point.
The expected and observed exclusion limits on the cross-section for \lqthreed\ and $B = 0$ are shown in Figure~\ref{fig:sbottom_BR_limit}
as a function of the leptoquark mass.
Also shown is the theoretical prediction for the cross-section of scalar leptoquark pair production including the uncertainties.
The expected and observed 95\%~CL lower limits on the mass of a down-type LQ decaying into $b\nu\bar{b}\bar{\nu}$ are 980~\GeV\ and 970~\GeV, respectively.
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{sbottom/figures/Approved/MCT_SRA_bb.pdf}
\includegraphics[width=0.49\linewidth]{sbottom/figures/Approved/ETMiss_SRA_bb.pdf}
\includegraphics[width=0.49\linewidth]{sbottom/figures/Approved/tbMET_amt2.pdf}
\includegraphics[width=0.49\linewidth]{sbottom/figures/Approved/tbMET_Meff_p0.pdf}
\caption{Distributions of key kinematic variables in the zero- and one-lepton SR selections:
the contransverse mass $m_{\mathrm{CT}}$~\cite{Tovey:2008ui} and \met\ in the
zero-lepton SR (top), and $am_{\mathrm{T2}}$~\cite{Konar:2009qr} and $m_{\mathrm{eff}}$ in the one-lepton SRs (bottom).
Two example LQ signal samples are added on top of the SM background, \lqthreed\ (dashed lines, top plots) and \lqthreeu\ (dashed lines, bottom plots). The assumed $B(\mathrm{LQ} \rightarrow q\tau)$ for both signal samples is zero, and $m_{\mathrm{LQ}}$ is 750~\GeV. The expected SM backgrounds are normalized to the values determined in the fit. The last bin contains the overflow events.}
\label{fig:sbottom_SRPlots}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{sbottom/figures/Approved/UpperLimit_BestExpected.pdf}
\caption{
Observed and expected 95\% CL upper limits on the cross-section for
down-type LQ pair production with $B=0$
as a function of the LQ mass.
The $\pm1(2)\sigma$ uncertainty bands around the expected limit represent all sources of statistical and systematic uncertainties.
The thickness of the theory curve represents the theoretical uncertainty from PDFs, renormalization and factorization scales, and the strong coupling constant $\alpha_\mathrm{s}$.
}
\label{fig:sbottom_BR_limit}
\end{figure}
\section{The $t t$ plus \ETmiss channel with zero leptons}
In this section, the LQ reinterpretation of the ATLAS analysis optimized for the search of top-squark pair production~\cite{stop0LMoriond2017}
in final states with zero leptons is discussed.
The signature targeted
is two hadronically decaying top quarks and invisible particles, closely matching the signature where both LQs decay into
a top quark and a neutrino.
Three signal regions (SRA, SRB, SRD) of the top-squark search~\cite{stop0LMoriond2017} have the greatest sensitivity to the LQ signal models.
SRA is optimal for high LQ masses, e.g.\ \mLQ $\approx 1$~\TeV, which typically results in high \met\ and top quarks with a significant boost.
SRB is sensitive to medium LQ masses, which tend to have a softer \met\ spectrum and less-boosted top quarks.
SRD targets a resonance decaying into a $b$-quark and an invisible particle, giving sensitivity to \lqthreed\ $\rightarrow b \nu$ events.
A common preselection is defined for all signal regions. At least four jets are required, of which at least one must be $b$-tagged.
The four leading jets (ordered in \pt) must satisfy \pt\ $>$ 80, 80, 40, 40~\GeV, respectively, due to the tendency
for signal events
to have higher-energy jets than background events. Events containing reconstructed electrons or muons are vetoed. The $\met$ trigger threshold motivates the $\met>250$~\GeV\ requirement, which rejects most of the background from multi-jet and all-hadronic $\ttbar$
events.
Similarly to the \ttmetonel\ analysis in Section~\ref{ttmet-1l}, hadronically decaying top quarks are reconstructed using jet
reclustering.
SRA and SRB require the presence of two reclustered $R=1.2$ jets.
Both SRA and SRB are divided into three orthogonal, one-bin subregions, which are combined for maximal signal sensitivity.
The categorization in subregions is based on the mass of the subleading (ordered in \pt) reclustered jet (\mantikttwelveone).
In all subregions it is required that the leading reclustered $R=1.2$ jet has a mass (\mantikttwelvezero) of at least 120~\GeV.
The subregions are denoted by TT, TW, and T0 corresponding to requirements of $\mantikttwelveone>120$~\GeV, $60<\mantikttwelveone<120$~\GeV, and $\mantikttwelveone<60$~\GeV, respectively.
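A minimal sketch of this categorization (hypothetical inputs; not the analysis code) is:
\begin{verbatim}
# Categorize an event into the TT, TW, or T0 subregion using the masses (in GeV)
# of the leading and subleading reclustered R = 1.2 jets.
def srab_subregion(m_rc_leading, m_rc_subleading):
    if m_rc_leading < 120.0:
        return None            # fails the leading reclustered-jet mass requirement
    if m_rc_subleading > 120.0:
        return "TT"
    if m_rc_subleading > 60.0:
        return "TW"
    return "T0"
\end{verbatim}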
In addition to the $R=1.2$ reclustered jet mass, one of the most discriminating variables in SRA is \met, which is required to be
above a threshold of at least 400~\GeV, depending on the subregion.
In SRB, the \met\ requirement is looser ($\met>250$~\GeV) than in SRA since the signals that SRB is targeting
tend to have softer \met\ spectra.
Two SRD subregions, SRD-low and SRD-high, are defined for which
at least five jets are required, two of which must be $b$-tagged. Requirements are made on the transverse momenta of the jets as well as on the scalar sum of the transverse momenta of the two $b$-tagged jets, which needs to be above 400~\GeV\ for SRD-high and above 300~\GeV\ for SRD-low.
Tight requirements are also applied to the \mt\ calculated from \met\ and the $b$-tagged jet that has the smallest (\mtbmin) and largest (\mtbmax)
$\Delta \phi$ relative to
the \met\ direction.
The dominant background processes are $Z\to\nu\nu$ in association with $b$-jets,
semileptonic \ttbar\
where one of the $W$ bosons decays into $\tau\nu$, and $\ttbar+Z (\to\nu\nu)$.
To estimate the normalization of these backgrounds, control regions are designed to be as close as possible to individual signal regions
while being strongly enhanced in the background of interest.
For the $Z\to\nu\nu$ background, $Z\to\ell\ell$ control regions are used where the leptons are removed
to mimic the \met\ produced by the $Z\to\nu\nu$ process.
To estimate the \ttbar\ background, a set of one-lepton control regions is used.
Finally, the $\ttbar+Z(\to\nu\nu)$ background is estimated using a $\ttbar+\gamma$ control region where the photon \pt\ is used
to approximate \met.
Several normalizations for the subdominant backgrounds,
such as single-top and $W$+jets production, are also estimated using control regions. The normalizations are calculated using a simultaneous binned profile-likelihood fit. Signal contamination in the control regions, specifically regions used to estimate the normalization of \ttbar, single top, and $W$+jets, is negligible.
The statistical analysis is done similarly to the one described in Section~\ref{ttmet-1l}.
Here the signal yields are extracted during a simultaneous fit to all control regions plus SRD or the three subregion categories of either SRA or SRB.
The two subregions of SRD are not orthogonal and are not statistically combined. The combined limits use the best expected limit among all signal regions.
\begin{table}
\begin{center}
\caption{Number of observed events in SRA, SRB, and SRD, together with the number of fitted background events including their total uncertainty, taken from Ref.~\cite{stop0LMoriond2017} (CR background-only fit). Additionally, the expected number of signal events for different branching ratios and LQ masses close to the exclusion limits are given with statistical uncertainties.
}
\label{tab:stop0LYields}
{\small
\begin{tabular}{c||l|c|c|c|c|c}
\hline\hline
{SR} & & {TT} & {TW} & {T0} & low & high \\ \hline \hline
\multirow{4}{*}{{A}} & Observed & 11 & 9 & 18 & \multirow{4}{*}{{-}} & \multirow{4}{*}{{-}} \\ \cline{2-5}
& SM Total & $8.6\pm2.1$ & $9.3\pm2.2$ & $18.7\pm2.7$ & & \\ \cline{2-5}
& $m(\lqthreeu) = 1000$~\GeV, $B = 0$ & $8.5 \pm 0.7$ & $4.8 \pm 0.6$ & ~$\,5.0 \pm 0.7$ & & \\ \cline{2-5}
& $m(\lqthreed) = 800$~\GeV, $B = 0$ & $3.1 \pm 1.1$ & $3.7 \pm 1.2$ & $ 15.5 \pm 2.5$ & & \\ \cline{2-5}
\hline \hline
\multirow{4}{*}{{B}} & Observed & $38$ & $53$ & $206$ &\multirow{4}{*}{{-}} & \multirow{4}{*}{{-}} \\ \cline{2-5}
& SM Total & $39\pm 8\,$~ & $52 \pm 7\,$~ & $179 \pm 26$ & & \\ \cline{2-5}
& $m(\lqthreeu) = 400$~\GeV, $B = 0.7$ & $26 \pm 7\,$~ & $18 \pm 8\,$~ & $27 \pm 9$ & & \\ \cline{2-5}
& $m(\lqthreed) = 400$~\GeV, $B = 0.9$ & $9 \pm 4$ & $18 \pm 9\,$~ & $ 63 \pm 9$ & & \\ \cline{2-5}
\hline \hline
\multirow{3}{*}{{D}} & Observed & \multicolumn{3}{c|}{-} & 27 & 11 \\ \cline{2-7}
& SM Total & \multicolumn{3}{c|}{-} & $25\pm 6\,$~ & $8.5\pm1.5$ \\ \cline{2-7}
& $m(\lqthreed) = 800$~\GeV, $B = 0.9$ & \multicolumn{3}{c|}{-} & $2.87\pm0.35$ & $1.45\pm0.23$ \\ \cline{2-7}
\hline\hline
\end{tabular}
}
\end{center}
\end{table}
Figure~\ref{fig:finalSRABPlots} shows observed and expected \met\ distributions in SRA, \drbjetbjet\ distributions in SRB,
and \mtbmax\ distributions in SRD-low and SRD-high.
In addition, examples of signal distributions with different LQ masses and branching ratios are added on top of the SM prediction.
These examples are chosen such that these signals are close to being excluded.
The expected and observed numbers of events as well as the expected number of events for the example signals are shown in Table~\ref{tab:stop0LYields} for
each region.
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{stop0lepton/plotsSigPlusSM/Met_SRA_TT}
\includegraphics[width=0.49\linewidth]{stop0lepton/plotsSigPlusSM/Met_SRA_T0}
\includegraphics[width=0.49\linewidth]{stop0lepton/plotsSigPlusSM/DRBB_SRB_TT}
\includegraphics[width=0.49\linewidth]{stop0lepton/plotsSigPlusSM/DRBB_SRB_T0}
\includegraphics[width=0.49\linewidth]{stop0lepton/plotsSigPlusSM/MtBMax_SRD_low}
\includegraphics[width=0.49\linewidth]{stop0lepton/plotsSigPlusSM/MtBMax_SRD_high}
\caption{Distributions of \met in SRA (upper panels) and \drbjetbjet\xspace in SRB (middle panels) separately for the individual
top reconstruction categories TT and T0 and the \mtbmax\ distribution in SRD-low and SRD-high (lower panels).
The expected SM backgrounds are normalized to the values determined in the fit. The expected number of signal events for different LQ masses and branching ratios $B$ is
added on top of the SM prediction. The last bin contains the overflow events.
}
\label{fig:finalSRABPlots}
\end{figure}
The expected and observed exclusion limits on the cross-section for \lqthreeu\ and \lqthreed\ for $B = 0$ are shown as a function of the
leptoquark mass in Figure~\ref{fig:stop0LLimit}.
The theoretical prediction for the cross-section of scalar leptoquark pair-production is shown by the solid line along with the uncertainties.
As expected, there is better sensitivity to \lqthreeu\ than to \lqthreed, excluding pair-produced \lqthreeu\
decaying into $t\nu\bar{t}\bar{\nu}$ for
masses smaller than 1000~\GeV\ at 95\%~CL.
The expected limit is \mLQ < 1020~\GeV.
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{stop0lepton/xsec_upperlimit_combined_up}
\includegraphics[width=0.49\linewidth]{stop0lepton/xsec_upperlimit_combined_down}
\caption{
Observed and expected 95\% CL upper limits on the cross-section for
up-type (left panel) and down-type (right panel) LQ pair production with $B=0$
as a function of the LQ mass.
The $\pm1(2)\sigma$ uncertainty bands around the expected limit represent all sources of statistical and systematic uncertainties.
In the right panel
the expected limit has an undulating behaviour
due to the use of different signal regions for
different mass points and the statistical uncertainties of the background MC in SRD.
The thickness of the theory curve represents the theoretical uncertainty from PDFs, renormalization and factorization scales, and the strong coupling constant $\alpha_\mathrm{s}$.
}
\label{fig:stop0LLimit}
\end{figure}
\section{The $t t$ plus \ETmiss channel with one lepton}
\label{ttmet-1l}
In this section, the LQ reinterpretation of a dedicated search for top-squark pair production~\cite{SUSY-2016-16} in final states with one lepton is described.
Events in which both LQs decay into a top quark and a neutrino are targeted, with one top quark decaying hadronically and the other semileptonically.
Events containing one isolated lepton, jets, and missing transverse momentum in the final state
are considered.
Two signal regions (SR) of the top-squark search~\cite{SUSY-2016-16}, tN\_med and tN\_high, have
optimal sensitivity for medium ($\sim$600~\GeV) and high ($\sim$1~\TeV) masses of the
leptoquark.
The tN\_med SR is additionally binned in \met, while the tN\_high SR is taken as a single-bin cut-and-count experiment.
Both signal regions require at least four jets, at least one $b$-tagged jet, exactly one isolated electron or muon, and high \met.
Variables sensitive to the direction of the \met\ are used, e.g.\ the transverse mass of the lepton and \met, denoted by \mt,\footnote{The transverse mass $\mt$ is defined as $\mt = \sqrt{2p_{\text{T}}^{\text{lep}}\met[1-\cos(\Delta\phi)]}$, where $\Delta\phi$ is the azimuthal angle between the lepton and the missing transverse momentum direction
and $p_\text{T}^\text{lep}$ is the transverse momentum of the charged lepton.} and \amtTwo~\cite{Konar:2009qr}, which targets pair-produced heavy objects
that each decay into a different number of measured and unmeasured particles in the detector.
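For illustration, the transverse mass defined in the footnote can be computed as follows (hypothetical inputs; not the analysis code):
\begin{verbatim}
# m_T = sqrt(2 * pT_lep * MET * (1 - cos(dphi))), with dphi the azimuthal angle
# between the lepton and the missing transverse momentum.
from math import sqrt, cos, pi

def transverse_mass(pt_lep, met, phi_lep, phi_met):
    dphi = abs(phi_lep - phi_met)
    if dphi > pi:
        dphi = 2.0 * pi - dphi
    return sqrt(2.0 * pt_lep * met * (1.0 - cos(dphi)))
\end{verbatim}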
The hadronically decaying top quark is reconstructed using jet reclustering,
where several jets are combined using the \antikt algorithm~\cite{Cacciari:2008gp}
with a large radius parameter that is initially set to 3.0 and then adjusted in an iterative process~\cite{SUSY-2016-16}.
The various backgrounds are estimated using simulated events. The dominant background consists of $t\bar{t}$ events which, due to the high \mt\ requirements, arise mainly from dileptonic $t\bar{t}$ decays in which one lepton is not reconstructed ($t\bar{t}$ 2L), even though this decay topology is strongly suppressed by
requiring \amtTwo\ to be above the top-quark mass.
Other major backgrounds are due to the production of a $W$ boson in association with one or more jets ($W$+jets) and
the production of a $t\bar{t}$ pair in association with a vector boson ($t\bar t + V$), where the latter is dominated by contributions
from $t\bar{t}+Z(\rightarrow\nu\nu)$.
For each SR, a set of dedicated single-bin CRs is defined in order to control the background normalization.
The $t\bar{t}$ 2L CR is defined at high \mt and with a veto on hadronically decaying top-quark candidates, while the $W$+jets CR is defined at low \mt and with a veto on hadronically decaying top-quark candidates. The top-quark candidate veto is fulfilled if either no top-quark candidate is found or if the mass is lower than the SR threshold.
Additionally, a CR for semileptonic $t\bar t$ events ($t\bar{t}$ 1L) is defined, as these events contribute strongly to the other CRs.
A CR for single-top events is defined similarly to the $W$+jets CR, but with at least two \bjets.
The $t\bar t + Z$ background is estimated with a three-lepton selection.
The statistical analysis for an SR is based on a simultaneous likelihood fit to the observed events in the CRs and the SR,
where the background samples and a signal sample are included in all regions.
The fit simultaneously determines the background normalization and a potential signal contribution.
For each SR, a set of validation regions is defined in addition to the CRs. The validation regions are not part of the fit
but are used to validate the background normalization in this second set of disjoint regions.
Systematic uncertainties
are included as nuisance parameters in the profile-likelihood estimate.
The test statistic is the profile log-likelihood ratio and its distribution is calculated using the asymptotic approximation~\cite{stat}.
The number of observed events in the data in each SR and the expected number of background events
as calculated in
a fit to only the CRs while neglecting a potential signal contamination are taken from Ref.~\cite{SUSY-2016-16} and shown in Tables \ref{tab:tN_high_number_events} and \ref{tab:tN_med_number_events}.
In addition, the expected number of signal events for different leptoquark masses is shown for each SR.
The contamination
from the leptoquark signal in the CRs is below 10\% in all cases.
Figure~\ref{fig:tN_med_pull} shows the \met distribution in the tN\_med SR.
\begin{table}
\centering
\caption{The number of observed events in the cut-and-count SR tN\_high, together with the expected number of background events including their total uncertainties,
taken from Ref.~\cite{SUSY-2016-16}.
Additionally, the expected numbers of signal events
are given for $B=0$ for up-type LQs of different masses, with statistical uncertainties.
}
\label{tab:tN_high_number_events}
\setlength{\tabcolsep}{0.1pc}
\begin{tabular}{l rcl}
\toprule
Observed events & & $8$ & \\
Total SM & $3.8$& $\pm$&$1.0$ \\
\midrule
$m(\lqthreeu) = 800$ \GeV & 11.9&$\pm$&1.8 \\
$m(\lqthreeu) = 900$ \GeV & 9.5&$\pm$&1.2 \\
$m(\lqthreeu) = 1000$ \GeV & 6.7&$\pm$&0.7 \\
$m(\lqthreeu) = 1100$ \GeV ~~~ & 3.7&$\pm$&0.3 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{The number of observed events in the shape-fit SR tN\_med, together with the expected number of background events
including their total uncertainties, taken from Ref.~\cite{SUSY-2016-16}.
Additionally, the expected numbers of signal events
are given for $B=0$ for up-type LQs of different masses, with statistical uncertainties.
The numbers are given for the four bins characterized by an interval of the \ETmiss variable.
}
\label{tab:tN_med_number_events}
\small
\begin{tabular}{lcccc}
\toprule
\met & [250, 350]\,\GeV & [350, 450]\,\GeV & [450, 600]\,\GeV & >600\,\GeV \\ \midrule
Observed events & $21$ & $17$ & $8$ & $4$ \\
Total SM & $14.6\pm2.8$~ & $11.2\pm2.2$~ & $7.3\pm1.7$ & $3.16\pm0.74$ \\ \midrule
$m(\lqthreeu)$ = 400 \GeV & $166\pm44$~ & ~$\,58\pm32$~ & $11\pm11$ & $5.7\pm5.7$ \\
$m(\lqthreeu)$ = 600 \GeV & $21.0\pm5.6$~ & $49.6\pm8.8$~ & $31.8\pm5.5\,$~ & $1.4\pm2.1$ \\
$m(\lqthreeu)$ = 800 \GeV & ~$\,5.0\pm1.5$~ & $10.6\pm1.7$~ & $11.2\pm2.0\,$~ & $6.3\pm1.4$ \\
$m(\lqthreeu)$ = 1000 \GeV & $\,0.46\pm0.14$ & $\,1.18\pm0.24$ & $2.92\pm0.49$ & $4.61\pm0.64$ \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.55\textwidth]{./stop1lepton/figures/final_plots/tN_med_stack_shape_ratio-800GEV_tnuntnu}
\caption{Observed and expected \met distributions are shown in the tN\_med signal region, as is their ratio.
The error band includes statistical and systematic uncertainties. The expected SM backgrounds are normalized to the values determined in the fit.
The expected number of signal events
for an up-type LQ with $m_{\textnormal{LQ}}=800$~\GeV\ and $B=0$ is
added on top of the SM prediction. The last bin contains the overflow events.
}
\label{fig:tN_med_pull}
\end{figure}
The combination of the exclusion limits for the SRs is obtained by selecting the signal region with the better expected limit for each mass point.
For LQ masses of 950, 1000, and 1100~\GeV, the limit of the tN\_high signal region is selected, otherwise the limit of the tN\_med signal region is selected.
The expected and observed exclusion limits are shown in Figure \ref{fig:mass_limit_beta000}.
The theoretical prediction for the cross-section of scalar leptoquark pair production is shown by the solid line along with the uncertainties. Pair-produced third-generation scalar leptoquarks decaying into $t \nu \bar{t} \bar{\nu}$ are excluded at 95\% CL for \mLQ < 930~\GeV. The expected exclusion range is \mLQ < 1020~\GeV.
\begin{figure}
\centering
\includegraphics[width=0.55\linewidth]{./stop1lepton/figures/final_plots/stop1l_mass_limit_LQ3u_tnutnu}
\caption{
Observed and expected 95\% CL upper limit on the cross-section for
up-type LQ pair production with $B=0$
as a function of the LQ mass.
The $\pm1(2)\sigma$ uncertainty bands around the expected limit represent all sources of statistical and systematic uncertainties.
The thickness of the theory curve represents the theoretical uncertainty from PDFs, renormalization and factorization scales, and the strong coupling constant $\alpha_\mathrm{s}$.
}
\label{fig:mass_limit_beta000}
\end{figure}
\section{The $\tau \tau b$ plus \ETmiss channel}
\label{sec:stopstau}
The ATLAS search for top-squark pair production with decays via $\tau$-sleptons~\cite{SUSY-2016-19}
selects events containing
two $\tau$-leptons,
at least two jets, of which at least one must be $b$-tagged,
and missing transverse momentum.
Its reinterpretation is expected
to have good sensitivity for all $B$
except at very low values, with the maximum sensitivity at intermediate values.
Events are classified according to the decay of the $\tau$-leptons.
Both the \thadhad\ and the \tlhad\ channels are considered.
Two signal selections are defined, one for the \tlhad\ channel (\SRlh) and another one for the \thadhad\ channel (\SRhh).
Both signal selections require the two leptons to have opposite electric charge.
The main discriminating variables
are \met and the stransverse mass \mttwo,
which is a generalization of the transverse mass for final states with two invisible particles~\cite{arxiv:LesterMT2, Barr:2003rg, Lester:2014yga}
and computed from the selected lepton pair and \met.
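For reference, the stransverse mass is defined as
\[
m_{\mathrm{T2}} = \min_{\mathbf{q}_{\mathrm{T},1}+\mathbf{q}_{\mathrm{T},2}=\mathbf{p}_{\mathrm{T}}^{\mathrm{miss}}}\Big\{\max\big[\,m_{\mathrm{T}}(\mathbf{p}_{\mathrm{T},1},\mathbf{q}_{\mathrm{T},1}),\; m_{\mathrm{T}}(\mathbf{p}_{\mathrm{T},2},\mathbf{q}_{\mathrm{T},2})\,\big]\Big\} ,
\]
where $\mathbf{p}_{\mathrm{T},1}$ and $\mathbf{p}_{\mathrm{T},2}$ are the transverse momenta of the two selected leptons, and the minimization runs over all splittings of the missing transverse momentum into two hypothetical invisible momenta $\mathbf{q}_{\mathrm{T},1}$ and $\mathbf{q}_{\mathrm{T},2}$.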
The dominant background process with the targeted final-state signature is pair production of top quarks.
As in the original analysis,
two types of contributions are discriminated,
depending on whether the identified hadronically decaying $\tau$-lepton(s) in the selected event
are real $\tau$-leptons or other particles (typically jets, electrons, or, in rare cases, muons)
that are misidentified as hadronically decaying $\tau$-leptons (fake $\tau$-leptons).
In the \tlhad\ channel,
the contribution of events with fake $\tau$-leptons is estimated using a data-driven method,
which is based on a measurement of the number of hadronically decaying $\tau$-leptons satisfying loose identification criteria
that also pass the tighter analysis selection (fake-factor method).
The contribution of events with true $\tau$-leptons is estimated from simulation,
using a dedicated control-region selection to normalize the overall contribution to the level observed in data.
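A minimal sketch of one common fake-factor convention is shown below (illustrative only; in the analysis the fake factors are measured in bins of kinematic variables, as described in Ref.~\cite{SUSY-2016-19}):
\begin{verbatim}
# Fake-factor method (illustrative): the fake factor is the measured ratio of
# tau candidates passing the tight identification to those passing only the
# loose one; it is applied to the loose-not-tight events of the signal
# selection to predict the fake-tau background.
def fake_factor(n_tight, n_loose_not_tight):
    return n_tight / n_loose_not_tight

def predicted_fake_background(ff, n_loose_not_tight_in_signal_selection):
    return ff * n_loose_not_tight_in_signal_selection
\end{verbatim}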
In the \thadhad\ channel,
both the contributions with fake and with true $\tau$-leptons
are estimated from simulation with data-driven normalization factors obtained from two dedicated control regions.
A requirement on the transverse mass computed from the transverse momentum of the leading $\tau$-lepton and the missing transverse momentum
is used to discriminate between events with true and fake $\tau$-leptons.
The requirement of opposite electric charge is not used for the control region targeting events with fake $\tau$-leptons,
since in that case the charges of the two $\tau$-leptons are not correlated.
Two additional control regions common to both channels are used
to obtain data-driven normalization factors for the background contributions from diboson production
and production of top-quark pairs in association with an additional \Wboson or \Zboson boson.
For these control regions, events that pass a single-lepton trigger and contain at least two signal leptons and two jets are used.
The remaining definitions of these control regions and further details of the background estimation and its validation are in Ref.~\cite{SUSY-2016-19}.
Good agreement between data and predicted background yields is found in validation regions
when the normalization factors derived in the control regions are applied.
\begin{table}
\newcommand{$<0$&\multicolumn{3}{@{}l}{.1}}{$<0$&\multicolumn{3}{@{}l}{.1}}
\centering
\caption{
The expected number of SM background events obtained from the background fit
and the number of observed events in \SRhh and \SRlh,
together with the expected number of signal events for different mass hypotheses $m$, leptoquark types, and branching ratios $B$ into charged leptons.
}
\label{tab:yield_stopstau_SR}
\small
\begin{tabular}{ll r@{}l @{\,$\pm\,$} r@{}l r@{}l @{\,$\pm\,$} r@{}l}
\toprule
& & \multicolumn{4}{c}{SR HH} & \multicolumn{4}{c}{SR LH} \\
\midrule
Observed events & & \multicolumn{4}{c}{$2$} & \multicolumn{4}{c}{$3$} \\
Total SM & & 1&.9&1&.0 & 2&.2&0&.6 \\
\midrule
$ m(\lqthreeu) = 500$ \GeV& $B =0.5$& 10&.8 & 3&.4 & 27 & & 7 & \\
$ m(\lqthreeu) = 750$ \GeV& $B =0 $& $<0$&\multicolumn{3}{@{}l}{.1} & 1& .0 & 0&.3 \\
$ m(\lqthreeu) = 750$ \GeV& $B =0.5$& 2&.6 & 0&.8 & 7&.3 & 1&.5 \\
$ m(\lqthreeu) = 750$ \GeV& $B =1 $& 2&.6 & 0&.9 & 0&.33 & 0&.1 \\
$ m(\lqthreeu) = 1000$ \GeV& $B =0.5$& 0&.3 & 0&.09 & 1&.1 & 0&.3 \\
$ m(\lqthreed) = 500$ \GeV& $B =0.5$& 25 & & 7 & & 49 & & 11 & \\
$ m(\lqthreed) = 750$ \GeV& $B =0 $& $<0$&\multicolumn{3}{@{}l}{.1} & $<0$&\multicolumn{3}{@{}l}{.1} \\
$ m(\lqthreed) = 750$ \GeV& $B =0.5$& 1&.9 & 0&.5 & 6&.2 & 1&.5 \\
$ m(\lqthreed) = 750$ \GeV& $B =1 $& 2&.4 & 1&.1 & 2&.5 & 1&.0 \\
$ m(\lqthreed) = 1000$ \GeV& $B =0.5$& 0&.53 & 0&.16 & 1&.6 & 0&.4 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{stoptostau/mt2_LH_br05_signalOnTop_final}
\caption{
Distribution of the stransverse mass \mttwoLT~\cite{arxiv:LesterMT2, Barr:2003rg, Lester:2014yga} in the signal region of the \tlhad\ channel before applying the selection requirement on \mttwoLT,
which is indicated by the dashed vertical line and arrow.
The stacked histograms show the various SM background contributions, which are normalized to the values determined in the fit.
The hatched band indicates the total statistical and systematic uncertainty in the SM background.
The error bars on the black data points represent the statistical uncertainty in the data yields.
The dashed histogram shows the expected additional yields from a leptoquark signal model LQ$_3^{\textnormal{u}}$
with $m_{\textnormal{LQ}}=750$~\GeV\ and $B=0.5$ added on top of the SM prediction.
The rightmost bin includes the overflow.
}
\label{fig:stopstau_SRLH_mttwo}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{stoptostau/1Dupcomb_v25_br05_final}
\includegraphics[width=0.495\textwidth]{stoptostau/1Ddowncomb_v25_br05_final}
\caption{
Expected cross-section limits,
their combined statistical and systematic uncertainties excluding the theoretical cross-section uncertainty,
and observed cross-section limits
at \percent{95} CL for $B=0.5$ as a function of the leptoquark mass.
The two channels, \thadhad\ and \tlhad, are combined for the LQ$_3^{\textnormal{u}}$ model (left) and LQ$_3^{\textnormal{d}}$ model (right).
Statistical fluctuations are present in the expected limits
because of the low number of simulated signal events passing the comparably tight signal-region selections.
The thickness of the band around the theoretical cross-section curve represents the uncertainty from PDFs,
renormalization and factorization scales, and the strong coupling constant $\alpha_\mathrm{s}$.
}
\label{fig:stopstau_1D_exclusion}
\end{figure}
The statistical analysis is done similarly to the one described in Section~\ref{ttmet-1l}.
However, the signal regions are independent and can therefore be statistically combined in the fit.
The expected number of events from the background-only fit and the number of observed events in the signal regions
are shown in \tab{tab:yield_stopstau_SR} for the two analysis channels,
together with the predicted event yields for a number of leptoquark signals.
The number of observed events agrees with the predicted Standard Model background. Figure~\ref{fig:stopstau_SRLH_mttwo} shows the distribution of \mttwoLT,
one of the main discriminating variables,
after applying all selection requirements of the signal region \SRlh except for the one on \mttwoLT,
which is indicated by the vertical line and arrow instead.
The stacked histograms in the plot are the expected Standard Model backgrounds.
In addition, the stacked dashed histogram shows the predicted distribution of a benchmark signal model
for up-type leptoquarks with a mass of 750~\GeV\ and branching ratio $B=0.5$. As no significant excess in the signal regions is observed in the data,
upper limits are set on the production cross-section.
Figure~\ref{fig:stopstau_1D_exclusion} shows the expected and observed cross-section limits
as a function of the leptoquark mass both for up- and down-type leptoquarks.
Leptoquark masses up to 780~\GeV\ and 800~\GeV\ are excluded for $B=0.5$ at \percent{95} confidence level
for pair-produced up- and down-type leptoquarks, respectively.
For low leptoquark masses and high branching ratios into charged leptons,
a non-negligible number of simulated signal events pass the control-region selections.
This leads to the background normalization factors being biased towards lower values when obtained from the exclusion fits instead of the background-only fits,
and drives the nuisance parameters for the systematic uncertainties away from their nominal values.
Several tests were performed to check the validity of the fit procedure used to derive the exclusion.
Signal-injection tests confirmed that the fit reliably reproduces the input signal yield.
In addition, it was shown that artificially reducing the input signal yield for excluded phase-space regions
with a high signal contamination in the control regions,
thereby reducing the signal contamination,
would still allow the leptoquark signal to be excluded.
{\it Introduction.}--
Homeostatic regulation plays a central role in
all living organisms, as well as in many technical applications.
Biological parameters, like the blood sugar level,
the heart beating frequency or the average firing
rates of neurons need to be maintained within certain
ranges in order to guarantee survival. The same holds
in the technical regime for the rotation speed of engines
and the velocity of airplanes, to give a few examples.
Homeostatic control in the brain goes beyond the regulation
of scalar variables like the concentration of proteins
and ions, involving the functional stability of neural
activity both on the individual as well as on a
network level~\cite{turrigiano99,davis06,marder06}. We use here the
term `polyhomeostasis' for self-regulating processes aimed
at stabilizing a certain target distribution of dynamical
behaviors. Polyhomeostasis is an important concept
used hitherto mostly implicitly and not yet well studied
from the viewpoint of dynamical system theory.
Polyhomeostasis is present whenever the goal of the
autonomous control is the stabilization of a
non-trivial distribution of dynamical states;
polyhomeostatic control hence generalizes the concept
of homeostasis.
The behavior of animals on intermediate
time scales, to give an example, may be regarded
as polyhomeostatic, aiming at optimizing a distribution
of qualitatively different rewards, like food,
water and protection; animals are not just trying to
maximize a single scalar reward quantity. A
concept loosely related to polyhomeostasis is
homeokinesis, proposed in the context
of closed-loop motion learning~\cite{der99}, having the
aim to stabilize non-trivial but steady-state movements
of animals and robots.
Here we study generic properties of dynamical systems governed
by polyhomeostatic self-regulation using a previously proposed
model~\cite{stemmler99,triesch05a} for regulating the
firing-rate distribution of individual neurons based
on information-theoretical principles.
We show that polyhomeostatic regulation, aiming at
stabilizing a specific target distribution of neural activities,
gives rise to non-trivial dynamical states when
recurrent interactions are introduced. We find,
in particular, that the introduction of polyhomeostatic
control to attractor networks leads to the destruction
of all attractors, resulting for large networks, as a
function of the average firing rate, in either
intermittent bursting behavior
or self-organized chaos, with both states being
globally attracting in their respective phase spaces.
{\it Firing-rate distributions.}--
We consider a discrete-time, rate
encoding artificial neuron with
input $x\in(-\infty,\infty)$, output
$y\in[0,1]$ and a transfer function $g(z)$,
\begin{equation}
y(t+1) \ =\ g\big(a(t) x(t)+b(t)\big)~,
\qquad g(z)\ =\ {1\over {\rm e}^{-z}+1}~.
\label{eq_neuron}
\end{equation}
The gain $a(t)$ and the threshold
$-b(t)/a(t)$ in (\ref{eq_neuron})
are slow variables, their
time evolution being determined
by polyhomeostatic considerations.
Information is encoded in the brain through
the firing states of neurons and it is therefore
plausible to postulate~\cite{stemmler99}
that polyhomeostatic adaption
for the internal parameters $a(t)$ and $b(t)$
leads to a distribution $p(y)$ for the firing rate
striving to encode as much information as
possible given the functional form (\ref{eq_neuron})
of the transfer function $g(z)$.
The normalized exponential distribution
\begin{equation}
p_\lambda(y) \ =\
{ \lambda\,{\rm e}^{-\lambda y}\over 1-{\rm e}^{-\lambda}},
\qquad
\mu \ =\ {1\over \lambda}
{{\rm e}^{\lambda }-1-\lambda\over {\rm e}^{\lambda}-1}~,
\label{eq_exponential}
\end{equation}
maximizes the Shannon entropy~\cite{cover06},
viz the information content, on the interval $y\in[0,1]$,
for a given expectation value $\mu$.
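The relation (\ref{eq_exponential}) between $\mu$ and $\lambda$ is most conveniently inverted numerically. A minimal sketch (Python, bisection; the target value $\mu=0.28$ is purely illustrative) could read:
\begin{verbatim}
from math import exp

def mu_of_lambda(lam):
    # mean of the normalized exponential distribution on [0,1]
    if abs(lam) < 1e-12:          # mu -> 1/2 in the limit lambda -> 0
        return 0.5
    return (exp(lam) - 1.0 - lam) / (lam * (exp(lam) - 1.0))

def lambda_of_mu(mu, lo=-50.0, hi=50.0, tol=1e-10):
    # mu(lambda) decreases monotonically from 1 to 0, so bisection applies
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mu_of_lambda(mid) > mu:
            lo = mid              # mean still too large: increase lambda
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(lambda_of_mu(0.28))         # approximately 3.02 (cf. lambda = 3.017 below)
\end{verbatim}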
A measure for the closeness of the two probability
distributions $p(y)$ and $p_\lambda(y)$ is given by
the Kullback-Leibler divergence~\cite{cover06}
\begin{equation}
D_\lambda(a,b)\ =\
\int p(y)\log\left({p(y)\over p_\lambda(y)}\right) dy~,
\label{eq_Kullback_Leibler}
\end{equation}
which is, through (\ref{eq_neuron}), a function of the
internal parameters $a$ and $b$. The Kullback-Leibler
divergence is non-negative and vanishes only when
the two distributions are identical. By minimizing
$D_\lambda(a,b)$ with respect to $a$ and $b$
one obtains~\cite{triesch05a} the stochastic gradient
rules
\begin{equation}
\begin{array}{rcl}
\Delta a &=& \epsilon_a \left({1/a} + x\Delta \tilde b\right) \\
\Delta b &=& \epsilon_b\, \Delta \tilde b, \qquad
\Delta \tilde b= 1-(2+\lambda)y+\lambda y^2
\end{array}
\label{eq_intrinsic_plasticity}
\end{equation}
which have been called `intrinsic plasticities'~\cite{turrigiano99}.
The respective learning rates $\epsilon_a$ and $\epsilon_b$
are assumed to be small, viz the time evolution of the
internal parameters $a$ and $b$ is slow compared to the
evolution of both $x$ and $y$. For any externally
given time series $x(t)$ for the input, the
adaption rules (\ref{eq_intrinsic_plasticity})
will lead to a distribution of the output firing rates
$y(t)$ as close as possible, given the specification
(\ref{eq_neuron}) for the transfer function,
to an exponential with given mean $\mu$.
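As an illustration, a minimal sketch of the update rules (\ref{eq_neuron}) and (\ref{eq_intrinsic_plasticity}) for a single neuron, driven here by a hypothetical Gaussian input stream (all parameter values are illustrative), could read:
\begin{verbatim}
import math, random

def g(z):                                 # sigmoidal transfer function
    return 1.0 / (math.exp(-z) + 1.0)

def run_single_neuron(lam=3.017, eps_a=0.01, eps_b=0.01,
                      steps=100000, seed=1):
    # intrinsic-plasticity adaption of gain a and bias b, driven by an
    # external input stream x(t); here x(t) is Gaussian (an assumption)
    random.seed(seed)
    a, b = 1.0, 0.0
    ys = []
    for _ in range(steps):
        x = random.gauss(0.0, 1.0)
        y = g(a * x + b)
        db = 1.0 - (2.0 + lam) * y + lam * y * y    # Delta b-tilde
        a += eps_a * (1.0 / a + x * db)             # Delta a
        b += eps_b * db                             # Delta b
        ys.append(y)
    return ys

ys = run_single_neuron()
print(sum(ys[-50000:]) / 50000.0)  # should approach the target mean mu ~ 0.28
\end{verbatim}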
\begin{figure}[t]
\centerline{
\includegraphics[width=0.35\textwidth]{stability.eps}
\hspace{4ex}
\includegraphics[width=0.09\textwidth]{oneNeuron.eps}
}
\caption{(color online) Left: As a function of the average firing
rate $\mu$, the region of stability (shaded area) of the
fixpoint $(y^*, b^*)$, see Eq.\ (\ref{eq_fixpoint_yb})
of the one-neuron network in the quasistatic limit.
Right: A self-coupled neuron with internal parameters
$a(t)$ and $b(t)$.
}
\label{fig_stability}
\end{figure}
{\it Attractor relics.}--
In deriving the stochastic gradient
rules (\ref{eq_intrinsic_plasticity}) it has been
assumed that the input $x(t)$ is statistically
independent of the output $y(t)$ (but not vice versa).
This is no longer the case when a set of polyhomeostatically
adapting neurons is mutually interconnected, forming
a recurrent network. Here we will show that networks
based on polyhomeostatic principles generically
show spontaneous and continuously ongoing activity.
In a first step we analyze systematically the smallest
possible network, viz the single-site loop, obtained
by feeding the output back to the input,
see Fig.\ \ref{fig_stability}, a setup which has
been termed in a different context `autapse' \cite{seung00}.
Using the balanced substitution $x\to y-1/2$ in
Eqs.\ (\ref{eq_neuron}) and (\ref{eq_intrinsic_plasticity}),
the complete set of evolution rules for
the dynamical variables $y(t),\, a(t)$ and $b(t)$
is then
\begin{eqnarray}
\nonumber
y(t+1) & =& g\big(a(t)[y(t)-1/2]+b(t)\big) \\
\label{eq_one_site}
b(t+1) &=& b(t)+\epsilon_b\,\Delta \tilde b(t) \\
a(t+1) &=& a(t)+\epsilon_a \left({1/a(t)}
+ [y(t)-1/2]\Delta \tilde b(t)\right)
\nonumber
\end{eqnarray}
with
\begin{equation}
\Delta \tilde b(t)\ =\ 1-(2+\lambda)y(t+1)+\lambda y^2(t+1)~.
\label{eq_Delta_tile_b}
\end{equation}
Note that $\Delta \tilde b(t)$ in (\ref{eq_Delta_tile_b})
depends on $y(t+1)$, and not on $y(t)$, as
one can easily verify when going through the derivation
of the rules (\ref{eq_intrinsic_plasticity}) for the
intrinsic plasticity. The evolution
equations (\ref{eq_one_site}) are invariant
under $y\leftrightarrow (1-y)$,
$b\leftrightarrow (-b)$,
$a\leftrightarrow a$ and
$\lambda \leftrightarrow (-\lambda)$, the latter
corresponding to the interchange
of $\mu\leftrightarrow (1-\mu)$.
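A direct numerical sketch of the one-site system (\ref{eq_one_site})--(\ref{eq_Delta_tile_b}), with illustrative initial conditions, could read:
\begin{verbatim}
import math

def g(z):
    return 1.0 / (math.exp(-z) + 1.0)

def run_autapse(lam=3.017, eps=0.01, steps=200000, y=0.4, a=0.5, b=0.0):
    # balanced one-site loop: the input is x(t) = y(t) - 1/2
    traj = []
    for _ in range(steps):
        y_new = g(a * (y - 0.5) + b)
        db = 1.0 - (2.0 + lam) * y_new + lam * y_new * y_new  # uses y(t+1)
        a += eps * (1.0 / a + (y - 0.5) * db)
        b += eps * db
        y = y_new
        traj.append((y, a, b))
    return traj

print(run_autapse()[-1])  # by then the gain has grown and y(t) spikes
\end{verbatim}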
We first consider the quasistatic limit $\epsilon_a\ll \epsilon_b$,
viz $a(t)\simeq a$ is approximately constant.
The fixpoint $(y^*,b^*)$ in the $(y,b)$ plane
is then determined by
\begin{equation}
\begin{array}{rcl}
0 &=& \lambda \left(y^*\right)^2-(2+\lambda)y^*+1 \\
b^* &=& g^{-1}(y^*)-a[y^*-1/2] \\
&=& \log(y^*/(1-y^*))-a[y^*-1/2]
\end{array}
\label{eq_fixpoint_yb}
\end{equation}
A straightforward linear stability analysis shows
that the fixpoint $(y^*,b^*)$ remains stable for small
gains $a$ and becomes unstable for large gains, see
Fig.\ \ref{fig_stability}. We now go beyond the
quasistatic approximation and consider a small but
finite polyhomeostatic adaption rate $\epsilon_a$ for the gain.
Starting with a small gain $a$ we see, compare
Eq.\ (\ref{eq_one_site}), that the gain necessarily grows
until it hits the boundary towards instability; for
small $\Delta\tilde b$ the growth of the gain
is $a(t)\sim \sqrt{t}$.
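For concreteness, the first equation in (\ref{eq_fixpoint_yb}) can be solved explicitly; the root lying in the unit interval is
\begin{equation}
y^* \ =\ \frac{(2+\lambda)-\sqrt{\lambda^2+4}}{2\lambda}~,
\end{equation}
which for $\lambda=3.017$, viz $\mu=0.28$, evaluates to $y^*\approx 0.23$.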
\begin{figure}[t]
\centerline{
\includegraphics[width=0.40\textwidth]{oneSiteTimeONE.eps}
}
\caption{(color online) The time dependence of $y(t)$ (red stars),
$b(t)$ (solid line), and of $a(t)-4$ (dashed line) for
the balanced one-site problem (\ref{eq_one_site}).
$\epsilon_a=\epsilon_b=\epsilon=0.01$ and
$\mu=0.28$; the horizontal dotted lines are guides to the eye.
The gain $a(t)$ is initially small and the system
relaxes, since $\epsilon\ll1$, fast to a fixpoint
of $y=g(a[y-1/2]+b)$. Once $a(t)$ surpasses a certain threshold,
compare Fig.\ \ref{fig_stability}, the fixpoint
becomes unstable and the system starts to spike spontaneously.
}
\label{fig_oneSiteTime}
\end{figure}
In other words, a finite adaption rate $\epsilon_a$
for the gain turns the fixpoint
attractor $(y^*,b^*)$ into an attractor relic and the
resulting dynamics becomes non-trivial and autonomously
self-sustained. This route towards autonomous neural
dynamics is actually an instance of a more general
principle. Starting with an attractor dynamical system,
viz with a system governed by a finite number of stable
fixpoints, one can quite generally turn these fixpoints
into unstable attractor ruins by coupling locally
to slow degrees of freedom~\cite{gros07,gros09}; in
our case the slow local variables are $a(t)$ and $b(t)$.
Interestingly, there are no saddlepoints present
in attractor relic networks in general,
and in (\ref{eq_one_site}) in particular, and
hence no unstable heteroclines, as in a proposed
alternative route to transient state dynamics
via heteroclinic switching.~\cite{rabinovich01,kirst08}
In Fig.\ \ref{fig_oneSiteTime} the time evolution
is illustrated for $\lambda=3.017$, which corresponds to
$\mu=0.28$, see Eq.\ (\ref{eq_exponential}),
and $\epsilon_a=\epsilon_b=\epsilon=0.01$.
The system remains
in the quasistationary initial regime until the
gain $a$ surpasses a certain threshold. The initial
quasistationary fixpoint becomes therefore unstable
via the same mechanism discussed analytically above,
see Eq.\ (\ref{eq_fixpoint_yb}),
for the regime $\epsilon_a\to 0$, compare
Fig.\ \ref{fig_stability}. The output activity
$y(t)$ oscillates rapidly between two transient
fixpoints of $y=g(a[y-1/2]+b)$, having a high and
a low value, respectively.
This spiking behavior of the neural
activity is driven by spontaneous
oscillations in the threshold
$-b(t)/a(t)$, shifting the intersection of
$g(a[y-1/2]+b)$ with $y$ back and forth.
Evaluating the local Lyapunov exponents we find that
the trajectory is stable against perturbations
for the transient states close to one of the
transient fixpoints of $y=g(a[y-1/2]+b)$ and
sensitive to perturbations and external modulation
during the fast transition periods, an instantiation
of the general notion of transient state
dynamics~\cite{gros07,gros09}.
\begin{figure}[t]
\centerline{
\includegraphics[width=0.40\textwidth]{InverseOneSite.eps}
}
\caption{(color online) The time dependence of
$y(t)$ (red stars), $b(t)+2$ (solid line), and of
$a(t)-4$ (dashed line) for the one-site problem
with inhibitory self-coupling $x\to1/2-y$,
$\epsilon_a=\epsilon_b=\epsilon=0.01$ and
$\mu = 0.28$. A Hopf-bifurcation occurs for the
output $y(t)$ when the initial quasistationary fixpoint
becomes unstable, giving way to a new fixpoint
of period two.
}
\label{fig_inverse}
\end{figure}
{\it Inhibitory self-coupling.}--
So far we discussed, see
(\ref{eq_one_site}) and Fig.\ \ref{fig_oneSiteTime},
a neuron having its output $y(t)$ coupled back
excitatorily to its input via $x\to y-1/2$.
The dynamics changes qualitatively for an
inhibitory self-coupling $x\to 1/2-y$, see
Fig.\ \ref{fig_inverse}. There is now only
a single intersection of $g(a[-y+1/2]+b)$ with $y$.
This intersection corresponds to a stable fixpoint for small
gains $a$. A Hopf-bifurcation~\cite{gros08}
occurs when
$a(t)$ exceeds a certain threshold and a new
fixpoint of period two becomes stable. The
coordinates of this fixpoint of period two
slowly drift, compare Fig.\ \ref{fig_inverse},
due to the residual changes in $a(t)$ and
$b(t)$. Interestingly, the dynamics remains
non-trivial, as a consequence of the continuous
adaption, even in the case of inhibitory
self-coupling.
{\it Self-organized chaos.}--
We have studied numerically fully connected
networks of $i=1,\dots,N$ polyhomeostatically
adapting neurons (\ref{eq_intrinsic_plasticity}),
coupled via
\begin{equation}
x_i(t) \ =\ {\textstyle \sum_{j\ne i}} w_{ij} y_j(t)~.
\label{eq_couplings}
\end{equation}
The synaptic strengths are $w_{ij}=\pm 1/\sqrt{N-1}$,
with inhibition/excitation drawn randomly with
equal probability. The adaption rates are
$\epsilon_a=\epsilon_b=0.01$. We consider
homogeneous networks where all neurons
have the same $\mu$ for the target output
distributions (\ref{eq_exponential}),
with $\mu=0.15,\, 0.28$ and $\mu=0.5$.
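A compact sketch of this network simulation (pure Python, hence small $N$; all parameter values are illustrative) could read:
\begin{verbatim}
import math, random

def g(z):
    return 1.0 / (math.exp(-z) + 1.0)

def run_network(N=50, lam=3.017, eps=0.01, steps=5000, seed=1):
    # fully connected network with couplings w_ij = +/- 1/sqrt(N-1),
    # all neurons adapting towards the same target mean (via lam)
    random.seed(seed)
    w = [[0.0 if i == j else random.choice((-1.0, 1.0)) / math.sqrt(N - 1)
          for j in range(N)] for i in range(N)]
    y = [random.random() for _ in range(N)]
    a = [1.0] * N
    b = [0.0] * N
    for _ in range(steps):
        x = [sum(w[i][j] * y[j] for j in range(N)) for i in range(N)]
        y = [g(a[i] * x[i] + b[i]) for i in range(N)]
        for i in range(N):
            db = 1.0 - (2.0 + lam) * y[i] + lam * y[i] ** 2
            a[i] += eps * (1.0 / a[i] + x[i] * db)
            b[i] += eps * db
    return y

print(sum(run_network()) / 50.0)   # instantaneous network-averaged activity
\end{verbatim}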
The activity patterns presented in the inset of
Fig.\ \ref{fig_outputActi} are chaotic
for the $\mu=0.28$ network and laminar
with intermittent bursting for the network
with $\mu=0.15$. This behavior is typical
for a wide range of network geometries,
distributions of the synaptic weights and
for all initial conditions sampled. The
respective Kullback-Leibler divergences $D_\lambda$
decrease over time, as shown in
Fig.\ \ref{fig_outputActi}, due to the
ongoing adaption process (\ref{eq_intrinsic_plasticity}).
$D_\lambda$ becomes very small, though still finite,
for long times in the self-organized chaotic regime
($\mu=0.28,\, 0.5$), remaining
substantially larger in the intermittent
bursting phase ($\mu=0.15$).
We have evaluated the
global Lyapunov exponent,~\cite{ding07}
finding that two initially close orbits diverge
until their distance in phase space corresponds,
within the available phase space, to the distance of
two randomly selected states, the hallmark of
deterministic chaos.~\cite{gros08} The corresponding
global Lyapunov exponent is about 5\% per time-step
for $\mu=0.5$ and initially increases with the decrease
of $\mu$, until the intermittent-bursting regime emerges,
after which the global Lyapunov exponent declines
with the decrease of $\mu$.
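As a rough numerical illustration, the global Lyapunov exponent can be estimated with a Benettin-type scheme, following two nearby copies of the network and renormalizing their separation at every step; the sketch below uses the full state $(y_i,a_i,b_i)$ for the distance and illustrative parameters only.
\begin{verbatim}
import math, random

def g(z):
    return 1.0 / (math.exp(-z) + 1.0)

def step(y, a, b, w, lam, eps):
    # one synchronous update of the polyhomeostatically adapting network
    N = len(y)
    x = [sum(w[i][j] * y[j] for j in range(N)) for i in range(N)]
    y_new = [g(a[i] * x[i] + b[i]) for i in range(N)]
    for i in range(N):
        db = 1.0 - (2.0 + lam) * y_new[i] + lam * y_new[i] ** 2
        a[i] += eps * (1.0 / a[i] + x[i] * db)
        b[i] += eps * db
    return y_new, a, b

def lyapunov(N=30, lam=3.017, eps=0.01, steps=3000, d0=1e-8, seed=2):
    random.seed(seed)
    w = [[0.0 if i == j else random.choice((-1.0, 1.0)) / math.sqrt(N - 1)
          for j in range(N)] for i in range(N)]
    y1 = [random.random() for _ in range(N)]
    a1, b1 = [1.0] * N, [0.0] * N
    y2, a2, b2 = list(y1), list(a1), list(b1)
    y2[0] += d0                      # tiny perturbation of the initial state
    acc = 0.0
    for _ in range(steps):
        y1, a1, b1 = step(y1, a1, b1, w, lam, eps)
        y2, a2, b2 = step(y2, a2, b2, w, lam, eps)
        diff = [u - v for u, v in zip(y1 + a1 + b1, y2 + a2 + b2)]
        d = math.sqrt(sum(e * e for e in diff)) or d0
        acc += math.log(d / d0)
        s = d0 / d                   # rescale copy 2 back to distance d0
        y2 = [u + (v - u) * s for u, v in zip(y1, y2)]
        a2 = [u + (v - u) * s for u, v in zip(a1, a2)]
        b2 = [u + (v - u) * s for u, v in zip(b1, b2)]
    return acc / steps

print(lyapunov())   # positive values indicate exponential divergence of orbits
\end{verbatim}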
The system enters the chaotic regime, in close
analogy to the one-site problem discussed previously,
whenever the adaptive dynamics
(\ref{eq_intrinsic_plasticity}) has pushed the
individual gains $a_i(t)$, of a sufficient number
of neurons, above their respective
critical values. Hence, the chaotic state
is self-organized. Chaotic dynamics
is also observed in the non-adapting limit, with
$\epsilon_a,\epsilon_b\to0$, whenever the
static values of $a_i$ are above the critical
value,
in agreement with the results of a large-$N$
mean field analysis of an analogous continuous
time Hopfield network.~\cite{sompolinsky88}
Subcritical static $a_i$ lead, on the other hand,
to regular dynamics controlled by point attractors.
In Fig.\ \ref{fig_outdist} we present the
distribution of the output activities for networks
with target mean firing rates $\mu=0.28$
and $\mu=0.5$. Shown are in both cases the
firing rate distributions
$p(y)$ of the two neurons having
respectively the largest and the smallest
Kullback-Leibler divergence (\ref{eq_Kullback_Leibler})
with respect to the target
exponential distributions (\ref{eq_exponential}).
Also shown in Fig.\ \ref{fig_outdist} are the
distributions of the network-averaged output activities.
\begin{figure}[t]
\centerline{
\includegraphics[width=0.40\textwidth]{outputActiKL.eps}
}
\caption{(color online) Mean Kullback-Leibler
divergence $D_\lambda$, for $N=500$ networks,
of the real and target output distributions as
a function of number of iterations $t$. The solid black,
dashed-dotted red and the dashed blue lines
correspond respectively to target firing rates
$\mu=0.15$, $\mu=0.28$, and $\mu=0.5$.
Inset: Output activity of two randomly chosen
neurons for target average firing rates $\mu=0.28$ (top)
and $\mu=0.15$ (bottom).
}
\label{fig_outputActi}
\end{figure}
The data presented in Fig.\ \ref{fig_outdist}
shows that the polyhomeostatically self-organized
state of the network results in firing-rate
distributions close to the target distribution.
This result is quite remarkable. The polyhomeostatic
adaption rules (\ref{eq_intrinsic_plasticity})
are local, viz every neuron adapts its internal
parameters $a_i$ and $b_i$ independently, on its own.
{\it Intermittency.}--
Interestingly, the neural activity presented
in the inset of Fig.\ \ref{fig_outputActi} for
$\mu=0.15$ shows intermittent or bursting behavior.
In the quiet laminar periods the neural activity
$y(t)$ is close to (but slightly below) the target
mean of $0.15$, as one would expect for a
homeostatically regulated system and the local
Lyapunov exponent is negative. The target
distribution of firing rates (\ref{eq_exponential})
contains however also a small but finite weight
for large values of the output $y(t)$, which
the system achieves through intermittent bursts
of activity. In the laminar/bursting periods
the gain $a(t)$ is below/above threshold, acting
as a control parameter~\cite{platt93}.
\begin{figure}[t]
\begin{center}$
\begin{array}{c}
\includegraphics[width=0.4\textwidth]{outDistA.eps} \\
\includegraphics[width=0.4\textwidth]{outDistB.eps}
\end{array}$
\end{center}
\caption{(color online) Output distributions of the two neurons
with highest (blue diamonds) and lowest (red circles)
Kullback-Leibler divergence (\ref{eq_Kullback_Leibler})
compared to the mean output distribution
(dashed green line) and the target exponential
output distribution (full black line). For a
network with $N=500$ neurons and a target output
mean firing rate (A) $\mu = 0.28$ and (B) $\mu = 0.5$
(identical for all neurons).
Insets: For the same parameters the mean output
distribution of the single self-coupled neuron,
see Fig.~\ref{fig_stability}.
}
\label{fig_outdist}
\end{figure}
Both the intermittent and the chaotic regime are
chaotic in the sense that the global Lyapunov exponent is
positive in both regimes. A qualitative difference
can be observed when considering the subspace
of activities $(y_1,\dots,y_N)$. Evaluating
the global Lyapunov exponent in this subspace, viz
the relative time evolution
$(\Delta y_1,\dots,\Delta y_N)$ of
differences in activities, we find this restricted
global Lyapunov exponent to be negative in the intermittent
and positive in the chaotic regime. Details on the
evolution from intermittency to chaos
as a function of $\mu$ and system sizes
will be given elsewhere.
{\it Conclusions.}--It is well known~\cite{arbib02}
that individual spiking neurons, as well as networks of spiking neurons, may
show bursting behavior. Here we showed that networks
of rate encoding neurons are intermittently bursting
for low average firing rates, when polyhomeostatically
adapting, entering a fully chaotic regime for larger
average activity levels. Autonomous and polyhomeostatic
self-control of dynamical systems may hence quite generally
lead to non-trivial dynamical states and novel phenomena.
Polyhomeostatic optimization is of relevance in a range
of fields. Here we have discussed it in the framework
of dynamical system theory. Alternative examples
are resource allocation problems, for which a given
limited resource, like time, needs to be allocated to
a set of uses, with the target distribution being the
percentages of resource allocated to the respective uses.
These ramifications of polyhomeostatic optimization will
be discussed in further studies.
\section*{References}
\section{Introduction}
\subsection{Models and Results}
\noindent Given an integer $d \geq 1$, let $\mathcal{G}^d = ({\mathbb{Z}}^d, \mathbb{E})$ be the complete graph whose vertex set is ${\mathbb{Z}}^d$, that is, $\mathbb{E} = \{\langle x,y \rangle : x \neq y\in {\mathbb{Z}}^d \}$. On this graph, we define a contact process model with two parameters, a positive real number $\lambda$, the rate of infection, and a random variable $N$ taking values on ${\mathbb{Z}}_{+}$, the range variable. Define the following sequences of random variables:
\begin{itemize}
\item $\{R_{x,n} : x \in {\mathbb{Z}}^d, n \in {\mathbb{N}} \}$, i.i.d. with distribution $\exp(1)$;
\item $\{I_{e,n} : e \in \mathbb{E}, n \in {\mathbb{N}}\}$, i.i.d. with distribution $\exp(\lambda)$;
\item $\{T_{x,n} : x \in {\mathbb{Z}}^d, n \in {\mathbb{N}} \}$, i.i.d. with distribution $\exp(1)$;
\item $\{N_{x,n} : x \in {\mathbb{Z}}^d, n \in {\mathbb{N}} \}$, i.i.d. with distribution $N$.
\end{itemize}
We also assume that all these sequences are independent of each other. For each $x \in {\mathbb{Z}}^d$, the Poisson Process $\mathcal{R}_x = \{ \sum_{k=1}^n R_{x,k} : n \in {\mathbb{N}} \}$ represents the {\em recovering times} of vertex $x$; the Poisson Process $\mathcal{T}_x = \{ \sum_{k=1}^n T_{x,k} : n \in {\mathbb{N}} \}$ determines the {\em change times} at which the infection range of vertex $x$ is updated.
Given $x \in {\mathbb{Z}}^d$, define
\begin{equation}\label{eq: S}
S_{x,0} = 0 \quad \text{and} \quad S_{x,n} = \sum_{k=1}^n T_{x,k}, ~n \geq 1
\end{equation}
and
\begin{equation}\label{eq: R}
r_x(t) = N_{x,n}, ~\forall t \in [S_{x,n-1}, S_{x,n} ), \quad n\geq1.
\end{equation}
The random variable $r_x(t)$ is the {\em range of $x$ at time $t$}.
For each bond $e \in \mathbb{E}$, the Poisson Process $\mathcal{I}_e = \{ \sum_{k=1}^n I_{e,k} : n \in {\mathbb{N}} \}$ determines the possible {\em infection times}. At some time $t\in\mathcal{I}_e$, an infected end-vertex of $e$, say $x$, infects the other end-vertex, say $y$, if $|x-y|\leq r_x(t)$ and $y$ is healthy at time $t$.
For each $t \geq 0$, the function $\zeta_t: {\mathbb{Z}}^d \longrightarrow \{0,1\}$, to be defined below, indicates whether a vertex $x \in {\mathbb{Z}}^d$ is \emph{infected} at time $t$, in which case $\zeta_t(x) = 1$, or \emph{healthy}, in which case $\zeta_t(x) = 0$. The set of infected vertices at time $t$ is denoted by $\zeta_t = \{ x \in {\mathbb{Z}}^d : \zeta_t(x) = 1\}$.
Let $o$ be the origin of ${\mathbb{Z}}^d$. At time $t=0$, we consider the initial condition $\zeta_0$, where $\zeta_0(o)=1$ and $\zeta_0(x)=0$ for all $x\in{\mathbb{Z}}^d\setminus\{o\}$, that is, the model starts with a unique infected vertex, $o$.
Given any $t > 0$, the function $\zeta_t : {\mathbb{Z}}^d \longrightarrow \{0,1\}$ is defined such that $\zeta_t(y) = 1$, if and only if, for some $m \in {\mathbb{N}}$, there exist times $0 = t_0 < t_1 < \cdots < t_m \leq t$ and vertices $o = x_0, \dots, x_m = y$, such that for all $k\in\{0, \dots, m-1\}$, setting $e_k = \langle x_k, x_{k+1} \rangle,$ it holds that:
\begin{itemize}
\item $t_{k+1} \in \mathcal{I}_{e_k}$;
\item $|e_k| \leq r_{x_k}(t_{k+1})$;
\item $[t_k, t_{k+1}] \cap \mathcal{R}_{x_k} = \emptyset$;
\item $[t_m, t] \cap \mathcal{R}_y = \emptyset$.
\end{itemize}
In this case, we say that there is an \emph{infection path} between $(o,0)$ and $(y,t)$. Informally, the first item above says that the infection spreads through the vertices $x_0, \dots, x_m$ at times $t_1< \cdots < t_m$; the second one says that along this infection path the spreading is restricted to the range of infection at each vertex; and the last two items say that there are no recovering times along the infection path.
Once the status of each vertex at any time has been defined, the construction of our process is concluded. We refer to this stochastic process, $(\zeta_t)_{t \geq 0}$, as the Contact Process with Dynamical Range (CPDR for short).
Let $P$ be a probability measure under which this process is defined; although $P$ depends on $\lambda$ and $N$, we omit this from the notation. Below, we state our main results; they characterize the survival of the infection in the CPDR on $\mathcal{G}^d$ in terms of the distribution of $N$.
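For illustration purposes only, the graphical construction above can be simulated on a finite box over a finite time horizon; the sketch below (Python, $d=1$, with arbitrary illustrative choices for $\lambda$ and for the law of $N$) samples the recovering, change and infection marks from a single superposed Poisson clock. A faithful simulation on ${\mathbb{Z}}^d$ would of course require a growing spatial window.
\begin{verbatim}
import random

def simulate_cpdr_1d(lam=0.5, range_sampler=lambda: random.randint(1, 3),
                     K=50, T=20.0, seed=0):
    # sites are {-K, ..., K}; rates: recovery 1 and range update 1 per site,
    # infection marks at rate lam per (unordered) pair of sites
    random.seed(seed)
    sites = list(range(-K, K + 1))
    infected = {0}                            # initially only the origin
    rng = {x: range_sampler() for x in sites} # current range r_x(t)
    M = len(sites)
    total_rate = M + M + lam * M * (M - 1) / 2.0
    t = 0.0
    while t < T and infected:
        t += random.expovariate(total_rate)
        u = random.random() * total_rate
        if u < M:                             # recovering mark
            infected.discard(random.choice(sites))
        elif u < 2 * M:                       # change mark: resample the range
            rng[random.choice(sites)] = range_sampler()
        else:                                 # infection mark on a random bond
            x, y = random.sample(sites, 2)
            if x in infected and abs(x - y) <= rng[x]:
                infected.add(y)
            elif y in infected and abs(y - x) <= rng[y]:
                infected.add(x)
    return infected

print(len(simulate_cpdr_1d()))                # infected sites at the final time
\end{verbatim}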
\smallskip
\begin{theorem}\label{cfinito2}
If $E[N^d] < \infty$, then there exists $\lambda_0$ small enough such that, $\forall\ 0<\lambda<\lambda_0$, it holds that:\begin{equation}
P( \zeta_t \neq \emptyset, ~\forall t \geq 0) = 0.
\end{equation}
\end{theorem}
\smallskip
\begin{theorem}\label{cinf2}
If $\limsup_{n \to \infty} n P(N^d \geq n) > 0$, then
\begin{equation}
P( \zeta_t \neq \emptyset, ~\forall t \geq 0) > 0, \quad \forall \lambda>0.
\end{equation}
\end{theorem}
\smallskip
Results analogous to the theorems above can also be established in the context of a long-range oriented percolation, where the range of the bonds starting from a given vertex is bounded by a random variable. More precisely, we will study the following long-range percolation model.
For any $d\geq 2$, let $(\vec{e}_i)_{i=1}^d$ be the canonical basis of ${\mathbb{Z}}^d$ and $G=({\mathbb{Z}}^d,\mathbb{E}_v\cup \mathbb{E}_h)$ be the graph where $\mathbb{E}_h=\cup_{n=1}^\infty \{(x,y)\in{\mathbb{Z}}^d\times{\mathbb{Z}}^d:y-x=n\vec{e}_1\}$ and $\mathbb{E}_v=\{(x,y)\in{\mathbb{Z}}^d\times{\mathbb{Z}}^d:y-x=\vec{e}_i,\ i\in\{2,\dots,d\}\}$. That is, along the lines parallel to the first coordinate axis we have long-range oriented bonds, and along the other directions the graph has only nearest-neighbour oriented bonds. On a random subgraph of $G$, we will define a percolation process as follows. The percolation process has three parameters: $p,q\in[0,1]$ and a non-negative integer-valued random variable $N$.
Given a sequence ${\bf N}=(N_x)_{x\in{\mathbb{Z}}^d}$ of i.i.d. random variables with common distribution $N$, we define the oriented random subgraph of $G$,
\begin{equation*}
G_{\bf N}:=({\mathbb{Z}}^d,\mathbb{E}_v\cup (\cup_{x\in{\mathbb{Z}}^d} \{(x,x+n\vec{e}_1)\in{\mathbb{Z}}^d\times{\mathbb{Z}}^d:n\leq N_x\})).
\end{equation*}
In words, $G_{\bf N}$ is obtained from $G$ by deleting all bonds from $x$ whose length is bigger than $N_x$. On the graph $G_{\bf N}$, we consider an independent bond percolation process where bonds in $\mathbb{E}_h$ and $\mathbb{E}_v$ are {\em open} with probabilities $p$ and $q$, respectively (and {\em closed} with the complementary probability). From now on, we call this model the Anisotropic Percolation with Random Range (APRR for short).
Let $P$ be the underlying probability measure defined by this process (it will be clear from the context whether we are dealing with the CPDR model or the APRR model, so there is no problem in denoting the underlying measure for both models by $P$) and, as usual in percolation, we use the notation $(x\rightarrow y)$ to denote the event that there is an oriented open path from the vertex $x$ to the vertex $y$, and $(x\rightarrow \infty)$ to denote the event that the vertex $x$ is connected by oriented open paths to infinitely many vertices of ${\mathbb{Z}}^d$.
We define the percolation function and the critical curve as $\theta(p,q)=P(o\rightarrow \infty)$ and $q_c(p)=\sup\{q:\theta(p,q)=0\}$, respectively (recall that the probability measure $P$ depends on $p,q$ and $N$, thus $\theta$ and $q_c$ also depend on $N$, but for simplicity we drop these parameters in our notation).
As in the CPDR, our goal is to study how the distribution $N$ influences the behavior of the critical curve $q_c$. In this direction we prove the next two theorems.
\smallskip
\begin{theorem}\label{perc1} If $EN<\infty$, then $q_c(p)>0$ for all $p<1$.
\end{theorem}
\smallskip
In the case $EN=\infty$, we have the following partial result:
\smallskip
\begin{theorem} \label{perc2} If $\limsup_{n\rightarrow\infty} nP(N\geq n)>0$, then $q_c(p)=0$ for all $p>0$.
\end{theorem}
\smallskip
Observe that Theorem~\ref{perc2} raises the question of whether, under the hypothesis
$\limsup_{n\rightarrow\infty} n P(N \geq n)>0$, percolation could occur in the one-dimensional case ($q=0$). The next result shows that, even though $q_c(p)=0$ for all $p>0$, the one-dimensional model may not percolate even if $p=1$.
\smallskip
\begin{theorem}\label{perc3} Consider the case $p=1$ and $q=0$. Let $\beta > 0$ be a constant such that the distribution of $N$ satisfies
\begin{equation*}
P(N\geq n)=1-e^{-{\beta}/{n}}, ~ n \geq 0.
\end{equation*} Then $\theta(1,0)=0$ if $\beta<1$, and $\theta(1,0)>0$ if $\beta >1$.
\end{theorem}
\smallskip
That is, for the one-dimensional model, there is a phase transition in the parameter $\beta$. A natural question that comes up from this theorem is: what happens at the critical point $\beta_c=1$? At this moment, we do not have an answer to this question. Another natural question that arises is: given any $0 < p < 1$, does this one-dimensional model percolate for some large $\beta$? The next result gives an affirmative answer.
\smallskip
\begin{theorem}\label{perc4}
Consider the case $0 < p < 1$ and $q=0$. If $N$ has the distribution of Theorem~\ref{perc3}, then $\theta(p,0) > 0$ if $\beta > p^{-1}$.
\end{theorem}
\smallskip
Therefore, there is a phase transition in the parameter $\beta$, and for each $p \in (0,1]$ we have $\beta_c(p) \in [1,p^{-1}]$.
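As a numerical illustration of Theorem~\ref{perc3}, note that the distribution $P(N\geq n)=1-e^{-\beta/n}$ can be sampled as $N=\lfloor \beta/E \rfloor$ with $E$ a standard exponential random variable, and $\theta(1,0)$ can be approximated by the probability that the origin reaches a large but finite distance $L$. A sketch (Python, illustrative parameters):
\begin{verbatim}
import random

def sample_N(beta):
    # P(N >= n) = 1 - exp(-beta/n)  <=>  N = floor(beta / E), E ~ Exp(1)
    return int(beta / random.expovariate(1.0))

def reaches(beta, L):
    # p = 1, q = 0: from a reached site x one can jump to x+1, ..., x+N_x
    front, x = 0, 0
    while x <= front:
        front = max(front, x + sample_N(beta))
        if front >= L:
            return True
        x += 1
    return False                    # the cluster of the origin stopped before L

def estimate_theta(beta, L=10**4, trials=200, seed=0):
    random.seed(seed)
    return sum(reaches(beta, L) for _ in range(trials)) / trials

for beta in (0.5, 2.0):             # below and above the critical value
    print(beta, estimate_theta(beta))
\end{verbatim}
Of course, reaching a finite distance $L$ only gives an upper bound for $\theta(1,0)$; the qualitative contrast between $\beta<1$ and $\beta>1$ should nevertheless already be visible for moderate $L$.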
\subsection{Related Works}
\noindent The study of long-range models has its origins in the mathematical-physics literature; indeed, even before percolation, long-range Ising models were studied by Dyson~\cite{D1,D2} and Frolich-Spencer~\cite{FS}. Long-range percolation was first studied in one-dimensional models, where each bond $e$ with length $n$ is open, independently of all others, with probability $p_n\sim\beta n^{-s}$. Schulman~\cite{Sc} showed that there is no percolation if $s>2$. A remarkable positive answer was given by Newman-Schulman~\cite{NS}, where it was proved that if $s<2$ there is oriented percolation; moreover, there is also non-oriented percolation in the critical case $s=2$, if $\beta$ is large enough and $p_1$ is close to one. This last result for $s=2$ was improved by Aizenman-Newman~\cite{AN}, where it was shown that there is a critical $\beta$, in the sense that there is no percolation if $\beta\leq 1$, while non-oriented percolation occurs if $\beta>1$ and $p_1$ is close to one. Afterwards, the oriented case for $s=2$ was solved by Marchetti-Sidoravicius-Vares~\cite{MSV}, also proving that $\beta_c=1$.
For long-range percolation in dimensions $d\geq 2$, an issue of the same flavour as the one treated here is the so-called {\em truncation question}: given a long-range percolation model that percolates, is there some large integer $K$ such that percolation still occurs if we delete all bonds whose lengths are bigger than $K$? The first work to tackle this question was Meester-Steif~\cite{MS}, considering the case $p_n\sim e^{-cn}$. The case where $\sum p_n=\infty$ was first studied by Sidoravicius-Surgailis-Vares~\cite{SSV} for $p_n\sim \frac{1}{n\log n}$; Friedli-de Lima~\cite{FL} gave an affirmative answer for the case $d\geq 3$ without any additional hypothesis on $p_n$ (later generalized to oriented percolation and contact processes in \cite{ELV}). The case $d=2$ is still an open problem today and some partial answers have been given; see for example \cite{CL} and the references therein. One interesting negative answer was given by Biskup-Crawford-Chayes~\cite{BCC} for a long-range Potts model with $q=3$.
Long-range percolation models have also proved to be a fruitful tool for modelling social networks, in particular through the study of the graph distance on the long-range percolation cluster, the so-called {\em chemical distance}; see for example~\cite{BB,Bi1,Bi2,DS} and the references therein.
The contact process was introduced by Harris in 1974 \cite{Ha} as a model for the spread of an infection, and it has since become one of the most studied particle systems; see the books \cite{Li1} and \cite{Li2} for an introduction to this subject. The long-range contact process is one of the several variations of this model that have already appeared in the literature; its roots probably go back to Spitzer~\cite{Sp}, and a phase transition theorem was proven by Bramson-Gray~\cite{BG}. Long-range contact process variations are also present in the physics literature; see \cite{GHL} and the references therein.
Closer to our work, we would also like to mention the paper of Can~\cite{Can}, which studies a long-range contact process on a percolation cluster of the one-dimensional complete graph, where each bond $\langle i,j\rangle,\ i,j\in{\mathbb{Z}}$, is open with probability $|i-j|^{-s}$, $s>1$. Contact processes in a dynamical environment are the content of the recent work of Linker-Remenik~\cite{LR}, which deals with a contact process running on a one-dimensional lattice undergoing dynamical percolation, that is, the set of bonds on which the contact process is defined changes over time. Our proposal here is to merge the long-range contact process with a random and dynamical environment, in the sense that the range of infection of each individual changes over time in a random way.
\subsection{Outline of the paper}
\noindent Section~\ref{s2} is dedicated to the CPDR; we prove Theorems~\ref{cfinito2} and \ref{cinf2} in Subsections~\ref{sub21} and \ref{sub22}, respectively.
In Section~\ref{s3}, we consider the APRR model; Theorem~\ref{perc1} is proven in Subsection~\ref{sub31}. The proof of Theorem~\ref{perc2} is similar to that of Theorem~\ref{cinf2}; we point out only the differences between these proofs in Subsection~\ref{sub32}. Theorems~\ref{perc3} and \ref{perc4} are proved in Subsection~\ref{sub33}.
Throughout this text we adopt the convention that if $x\in{\mathbb{Z}}^d$, then $x_i$ denotes its $i$-th coordinate.
\section{Contact processes with dynamical range}\label{s2}
\noindent In this section, we deal with the CPDR with infection rate $\lambda$ and range distribution $N$. The proof of Theorem~\ref{cfinito2} is based on a standard comparison with a subcritical branching process. In the proof of Theorem~\ref{cinf2}, we perform a block renormalization argument whose goal is to show that the CPDR dominates a supercritical, anisotropic, oriented, independent percolation model.
\subsection{Proof of Theorem \ref{cfinito2}} \label{sub21}
\noindent The idea of this proof is to define special subsets of ${\mathbb{Z}}^d\times \mathbb{R}_+$ called \emph{atoms}. These atoms will be a covering for the infection paths starting from $o$, and we will show that for $\lambda$ small enough the creation of new atoms is dominated by a subcritical branching process. This type of argument is standard for exhibiting the occurrence of a subcritical phase in the contact process as well as in percolation; however, as the ranges are unbounded random variables, a more detailed analysis is needed.
For each $x \in {\mathbb{Z}}^d$ and each $s > 0$, we define the stopping times
\begin{align}\label{eq: alphabeta}
\alpha(x,s) &= \inf \{ t > s : t \in \mathcal{R}_x\} \quad \text{and} \nonumber \\
\beta(x,s) &= \inf \{ t > \alpha(x,s) : t \in \mathcal{T}_x\}.
\end{align}
The structure of atoms will be defined inductively. Each atom is an ordered pair $A = (x,I)$, where $x \in {\mathbb{Z}}^d$ and $I \subset [0, \infty)$ is an interval. We set $s_0 = 0, a_0=0$, $x_0 = o$, $b_0 = \beta(x_0, a_0)$, $\Gamma_0 = [a_0, b_0)$ and let $A_0 = (x_0,\Gamma_0)$ be the \emph{root atom}. Observe that every part of the infection path in $\{o\}\times\mathbb{R}_+$ until the first recovering time at $o$ is covered by the root atom.
Suppose that for some $n \in {\mathbb{N}}$ the objects
$x_i\in{\mathbb{Z}}^d$, $s_i, a_i, b_i\in\mathbb{R}_+$, $\Gamma_i=[a_i,b_i)$ and the atom $A_i = (x_i,\Gamma_i)$ had already been defined for all $i \in \{0, \dots, n-1\}$. Define the set $\Delta_n := \bigcup_{i=0}^{n-1} \Gamma_i$ and remark that $\Delta_1=[0,\beta(o, 0))$.
Given $s \in \Delta_n$, we say that the time $s$ is $n$-\emph{auspicious} if there exist $y = y(s) \in {\mathbb{Z}}^d$ and $m = m(s) \in \{0, \dots, n-1\}$ such that
\begin{equation}\label{eq: nbom}
s \in \mathcal{I}_{\langle x_m, y \rangle} \cap \Gamma_m \quad \text{and} \quad |x_m - y| \leq r_{x_m}(s).
\end{equation}
For each $x \in \{ x_0, \dots, x_{n-1}\}$, let
\begin{equation}
\gamma_n(x) = \sup\{b_i : x_i=x, 0 \leq i < n\}.
\end{equation}
We say that an $n$-auspicious time $s$ is $n$-\emph{good} if one of the three situations below occurs
\begin{align}\label{eq: Situations}
\text{S1}&) \quad y \notin \{ x_0, \dots, x_{n-1}\}; \nonumber \\
\text{S2}&) \quad s \geq \gamma_n(y); \nonumber\\
\text{S3}&) \quad s < \gamma_n(y) \text{ ~and~ } [s,\gamma_n(y))\cap \mathcal{R}_{y} = \emptyset.
\end{align}
Define the set $D_n := \{s \in \Delta_n : s \text{ is $n$-good} \}$; if $D_n = \emptyset$, we stop the creation of new atoms. Otherwise, we set $s_n = \inf D_n$ and $x_n = y(s_n)$. According to \eqref{eq: Situations}, we have three possible situations: if either S1 or S2 occurs, we define $a_n = s_n$; if S3 occurs, we define $a_n = \gamma_n(y(s_n))$. To conclude our induction step, in all cases we define
\begin{equation}\label{eq: bn}
b_n = \beta(x_n, a_n), \quad \Gamma_n=[a_n, b_n) \quad \text{and} \quad A_n = (x_n, \Gamma_n).
\end{equation}
At this point, it is interesting to note that if $s_n < a_n$, then $[s_n, a_n) \subset \Gamma_j$ for some $j \in \{0, \dots, n-1\}$ such that $x_j = x_n$. Indeed, let $j$ be such that $b_j = \gamma_n(x_n)=a_n$; as there is at least one recovering time in the interval $(a_j,b_j)$, condition S3 gives $a_j<s_n$, and then $[s_n, a_n) \subset [a_j,b_j) = \Gamma_j$.
Once the structure of atoms has been constructed, we partition it into generations. For each $n \geq 1$ such that $A_n$ is an atom, we say that the atom $A_{m(s_n)}$ \emph{activates} $A_n$. In this case, we denote $A_{m(s_n)} \leadsto A_n$. We define the 0-th generation as the set $\Upsilon_0 = \{A_0\}$ and, for all $k \geq 1$, the $k$-th generation, $\Upsilon_k$, is the set of atoms $A$ such that
\begin{equation}
\exists B \in \Upsilon_{k-1} : B \leadsto A \quad \text{and} \quad A \notin \bigcup_{i=0}^{k-1} \Upsilon_i .
\end{equation}
For each $n \geq 0$ such that $A_n=(x_n,\Gamma_n)$ is an atom, the number of atoms activated by $A_n$, $\left|\{B : A_n \leadsto B \}\right|$, can be bounded by the random variable $X_n$ given by
\begin{equation} \label{branch}
X_n := \sum_{y \in {\mathbb{Z}}^d} \left| \{ t \in \mathcal{I}_{\langle x_n, y \rangle} \cap \Gamma_n : |x_n-y| \leq r_{x_n}(t) \}\right|.
\end{equation}
Due to the lack-of-memory property of the exponential distribution, the sequence of time durations of the atoms, $(b_n - a_n)_n$, is an i.i.d. sequence with the same distribution as $\beta(o,0)$. Moreover, by the definitions in \eqref{eq: alphabeta} and \eqref{eq: bn}, each atom ends with a change time, so the ranges $r_{x_n}(t)$, $t \in \Gamma_n$, are also independent of each other. Thus, the random variables $(X_n)_n$ defined in \eqref{branch} are i.i.d.
Hence, the stochastic sequence $(|\Upsilon_n|)_{n\geq 0}$ is dominated by a standard branching process whose number of offspring has the same distribution as $X := X_0$.
The next step is to show that there exists $\lambda_0>0$ small enough such that $E[X] < 1$, that is, the branching process is subcritical. The random variable $X$ is determined only by the Poisson processes $\mathcal{R}_o, \mathcal{T}_{o}, \mathcal{I}_{\langle o, x\rangle}, x \in {\mathbb{Z}}^d \setminus \{o\}$ and by the random variables $N_{o,n}$, $n \geq 1$. Using the independence among these variables, we can calculate
\begin{equation}
E[X] = E\left[ \left| \{y \in {\mathbb{Z}}^d : |y| \leq N \} \right|\right] \lambda\, E[\beta(o,0)] \leq E[(2N + 1)^d]\,2\lambda,
\end{equation}
since the infection marks on each bond form a Poisson process of rate $\lambda$ and $\beta(o,0)\sim \mbox{Gamma} (2,1)$, so that $E[\beta(o,0)]=2$. Since $E N^d <\infty$, we can take $\lambda_0 = 1/\left(2E[(2N + 1)^d]\right)>0$; it then holds that $P( |\Upsilon_n| > 0, ~\forall n \geq 0) = 0$ for all $0<\lambda<\lambda_0$.
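For instance, if $d=1$ and the range is bounded, say $N\equiv K$ almost surely for some $K\in{\mathbb{N}}$, the bound above gives $\lambda_0 = 1/\big(2(2K+1)\big)$: the branching process, and hence the infection, dies out almost surely for every $\lambda < 1/\big(2(2K+1)\big)$.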
We conclude this proof showing that
$ \{ \zeta_t \neq \emptyset, \forall t \geq 0\} \subset \{| \Upsilon_n | > 0, \forall n \geq 0\}.$
Indeed, we will show that the collection of atoms is a covering for all infection paths, that is, for each $x \in {\mathbb{Z}}^d$, we have $\{t > 0 : \zeta_t(x) = 1\} \subset J_x$, where $J_x = \cup_{n : x_n = x}\Gamma_n$.
Suppose that for some $m \in {\mathbb{N}}$, the sequence of vertices $(o= v_0, \dots, v_m)$ and the sequence of instants $(0 = t_0 < \dots < t_m)$ are such that, for each $k \in \{0, \dots, m-1\}$, it holds that
\begin{equation}
t_{k+1} \in \mathcal{I}_{\langle v_k, v_{k+1}\rangle}, \quad
|v_k - v_{k+1}| \leq r_{v_k}(t_{k+1}) \quad \text{and} \quad
[t_k, t_{k+1}] \cap \mathcal{R}_{v_k} = \emptyset.
\end{equation}
Arguing by induction, we will show that, for each $k \in \{0, \dots, m-1\}$, the interval $[t_k, t_{k+1}]$ is covered by atoms at the vertex $v_k$, that is, $[t_k, t_{k+1}] \subset J_{v_k}$. For $k = 0$, we have that $[t_0, t_1 ] \subset \Gamma_0 \subset J_o$, since there is no recovering time in $[t_0, t_1 ]$. Suppose that for some $k \in \{0, \dots, m-1\}$ we have that $[t_{k-1}, t_{k}] \subset J_{v_{k-1}}$. Thus, there exists $j \geq 0$ such that $t_k \in \Gamma_j$ and $x_j = v_{k-1}$. Let $\ell = \max\{ n \geq 1 : s_{n-1} \leq t_k\}$. It is clear that $t_k \geq a_j \geq s_j$; therefore $j < \ell$. In this way, we have that $t_{k} \in \mathcal{I}_{\langle x_j, v_{k}\rangle} \cap \Gamma_j$ and $|v_k - x_j| \leq r_{x_j}(t_{k})$ with $j < \ell$. Thus, according to \eqref{eq: nbom}, $t_k$ is $\ell$-auspicious.
If $t_k$ were $\ell$-good, we would have a contradiction: recall that $s_{\ell} = \inf\{ s \in \Delta_{\ell} : s \text{ is $\ell$-good} \}$, so that $t_k \geq s_{\ell}$, but $\ell = \max\{ n \geq 1 : s_{n-1} \leq t_k\}$ implies that $t_k < s_{\ell}$. Thus $t_k$ is not $\ell$-good; according to the situations described in \eqref{eq: Situations}, S1, S2 and S3 do not occur, and then $t_{k+1} < \gamma_{\ell}(v_k) = b_i$ for some $i < \ell$ with $x_i = v_k$. Since $i < \ell$, we have $s_i \leq t_k$, and then $[t_k, t_{k+1}] \subset [s_i, b_i)$. If $s_i = a_i$ then $[t_k, t_{k+1}] \subset \Gamma_i$; if $s_i < a_i$, we write $[t_k, t_{k+1}] \subset [s_i, a_i) \cup [a_i, b_i)$ and the covering follows since $ [s_i, a_i)\subset \Gamma_{i^*}$ for some $i^* \in \{0, \dots, i-1\}$ such that $x_{i^*} = x_i$, as observed in the paragraph following \eqref{eq: bn}.
\qed
\subsection{Proof of Theorem \ref {cinf2}}\label{sub22}
\noindent
{\textbf{Definitions and events.}}
Initially, we consider the case $d=1$. Define the sets $V=[0,6L)\cap{\mathbb{Z}}_+$, $\Delta = [0,2H)$ and let $\mathcal{B}:= V\times\Delta$ be a block in ${\mathbb{Z}}_+\times\mathbb{R}_+$, where $L$ and $H$ are integers to be chosen later (as functions of $\lambda$ and $N$). For each $v=(v_1,v_2)\in {\mathbb{Z}}_+^2$, define $\mathcal{B}_{v}:=V_{v}\times\Delta_{v}$, where $V_{v}=6Lv_1+V$ and $\Delta_{v}= 2Hv_2 + v_1+\Delta$.
The partition $(\mathcal{B}_{v})_{v\in{\mathbb{Z}}_+^2}$ on ${\mathbb{Z}}_+\times\mathbb{R}_+$ will induce an independent, oriented, anisotropic percolation model on a {\em renormalized graph}
isomorphic to the first quadrant of the square lattice, ${\mathbb{L}}_+^2$. This induced percolation process will be defined in such a way that, if there is percolation on the renormalized graph, then the infection survives in the CPDR, that is,
$ P( \zeta_t \neq \emptyset, ~\forall t \geq 0) > 0.$
For each $v=(v_1, v_2) \in {\mathbb{Z}}^2_+$ and $k \in [0,H)\cap {\mathbb{Z}}$, define the subsets of $\Delta_v$
\begin{equation}\label{eq: I}
I_v^k := 2Hv_2 + v_1 +[2k,2k + 1) \quad \text{and} \quad J_v^k:= I_v^k + 1,
\end{equation}
and given $(j,k) \in \left( [0,6L) \times [0,H) \right) \cap {\mathbb{Z}}^2$ define the event
\begin{equation}\label{eq: Atil}
\tilde{A}_v(j,k) := \left\{ I_v^k \cap \mathcal{T}_{v^j} \neq \emptyset \right\} \cap \left\{ J_v^k \cap \mathcal{T}_{v^j} = \emptyset \right\},
\end{equation}
where $v^j=6Lv_1+j$. Given that $\tilde{A}_v(j,k)$ occurs, define the random variable
$ m := \max\{n \geq 0 : S_{v^j,n} \in I_v^k\}, $
where $S_{x,n}$ is defined in \eqref{eq: S}, and the event
\begin{equation}\label{eq: A}
A_v(j,k) := \tilde{A}_v(j,k) \cap \left\{N_{v^j,m} \geq 7L \right\} \cap \left\{ \left(I_v^k \cup J_v^k \right)\cap \mathcal{R}_{v^j} = \emptyset \right\}.
\end{equation}
By the definition in \eqref{eq: R}, the event $A_v(j,k)$ implies that the range of infection of vertex $v^j$ becomes at least $7L$ at some time in $I_v^k$ and remains so during the whole interval $J_v^k$. That is,
\begin{equation}\label{eq: P1}
r_{v^j}(t) \geq 7L, ~\forall t \in J^k_v.
\end{equation}
The last event in the rhs of \eqref{eq: A} says that if $v^j$ is infected during the interval $I_v^k$, then the infection persists during the whole interval $J_v^k$. That is,
\begin{equation}\label{eq: P2}
A_v(j,k) \cap \left\{ \exists t \in I^k_v : \zeta_{t}(v^j) = 1 \right\} \subset\left\{ \zeta_{t}(v^j) = 1, ~\forall t \in J_v^k \right\}.
\end{equation}
Due to the lack of memory of the exponential distribution, all the events $\{A_v(j,k) : v \in {\mathbb{Z}}^2_+ ,(j,k) \in \big[[0,6L)\times[0,H)\big]\cap {\mathbb{Z}}^2 \}$ are independent of each other. For each $i \in {\mathbb{N}}$, the event $\tilde{B}_{v}^{i}(j,k)$
is defined as:
\begin{equation}\label{eq: Bitil}
\tilde{B}_{v}^{ i}(j,k) := \left\{ \left(I_v^k - 1\right) \cap \mathcal{T}_{v^j+i} \neq \emptyset \right\} \cap \left\{ I_v^k \cap \mathcal{T}_{v^j + i} = \emptyset \right\}.
\end{equation}
If the event $\tilde{B}^{i}_v(j,k)$ occurs, we define the random variable $m^{\ast}_i := \max\{n \geq 0 : S_{v^j+i,n} \in I_v^k-1\}$ and the event
\begin{equation}\label{eq: Bi}
B^{i}_v(j,k) := \tilde{B}^{i}_v(j,k) \cap \left\{N_{v^j+i,m_i^{\ast}} \geq i \right\}.
\end{equation}
Observe that the events $\{{B}^{i}_v(j,k) : i \geq 1\}$ are independent and that, on the event ${B}^{i}_v(j,k)$, we have that
\begin{equation}\label{eq: P3}
r_{v^j+i}(t) \geq |(v^j + i) - v^j|, ~\forall t \in I_v^k.
\end{equation}
Given a positive integer $M < L$ to be defined later, define the following event
\begin{equation}\label{eq: B}
B_v(j,k) := \left\{ \sum_{i=1}^L \mathbbm{1}_{\{B^{i}_v(j,k)\}} \geq M \right\}.
\end{equation}
Given $v=(v_1, v_2) \in {\mathbb{Z}}^2_+$, the definition of the next events will depend on the parity of $v_2$; the purpose is to avoid dependence in the percolation process to be defined on the renormalized graph. If $v_2$ is even, for all $j \in [L, 2L) \cap {\mathbb{Z}}$ and $k \in [0,H)\cap {\mathbb{Z}}$, we define $C_v(j,k)$ as
\begin{equation}
C_v(j,k) := A_v(j,k) \cap B_v(j,k) \cap \left\{ \sum_{m=0}^{H-1} \mathbbm{1}_{ \{A_v(\ell,m)\} } = 0, ~\forall \ell \in [L,j)\cap {\mathbb{Z}} \right\}.
\end{equation}
The three events above are independent, since $A_v(j,k)$ uses only variables indexed by $v^j$, while $B_v(j,k)$ and $\left\{ \sum_{m=0}^{H-1} \mathbbm{1}_{ \{A_v(\ell,m)\} } = 0, ~\forall \ell \in [L,j)\cap {\mathbb{Z}} \right\}$ use only variables indexed by vertices to the right of $v^j$ and to the left of $v^j$, respectively. If $C_v(j,k)$ occurs, we also define
\begin{equation}\label{eq: V-}
\mathcal{V}_v(j,k) := \{ v^j + i : B_v^i(j,k) \text{ occurs} \}.
\end{equation}
Now, if $v_2$ is odd, for each $j \in [4L, 5L) \cap {\mathbb{Z}}$ and $k \in [0,H)\cap {\mathbb{Z}}$, the set $\mathcal{V}_v(j,k)$ is defined as above and the event $C_v(j,k)$ is analogously defined
\begin{equation}\label{eq: C-}
C_v(j,k) := A_v(j,k) \cap B_v(j,k) \cap \left\{ \sum_{m=0}^{H-1} \mathbbm{1}_{ \{A_v(\ell,m)\} } = 0, ~\forall \ell \in [4L,j)\cap {\mathbb{Z}} \right\},
\end{equation}
as in the case of $v_2$ even, the three events above are also independent.
\vspace{.2cm}
\textbf{The Renormalization Procedure.} Having defined all the events above, we now define the percolation on the renormalized graph. The renormalized graph will be the first quadrant of the square lattice ${\mathbb{L}}^2_+=({\mathbb{Z}}^2_+, \mathbb{E}_h\cup\mathbb{E}_v)$, where $\mathbb{E}_h = \{ (v, v+\vec{e}_1) : v \in {\mathbb{Z}}^2_+\}$ is the set of horizontal bonds, and $\mathbb{E}_v = \{ ( v, v+\vec{e}_2 ): v \in {\mathbb{Z}}^2_+ \}$ is the set of vertical bonds. On the set $\mathbb{E}_h\cup\mathbb{E}_v$, we define the partial order $\prec$ satisfying:
\begin{itemize}
\item $e\prec f,\ \forall e\in\mathbb{E}_h$ and $\forall f\in\mathbb{E}_v$;
\item Given $e=\left((u_1,u_2),(u_1+1,u_2)\right)$ and $f=\left((v_1,v_2),(v_1+1,v_2)\right)\in\mathbb{E}_h$,\\ $e\prec f$ if and only if $u_2<v_2$ or ($u_2=v_2$ and $u_1<v_1$);
\item Given $e=\left((u_1,u_2),(u_1,u_2+1)\right)$ and $f=\left((v_1,v_2),(v_1,v_2+1)\right)\in\mathbb{E}_v$,\\ $e\prec f$ if and only if $u_2<v_2$ or ($u_2=v_2$ and $u_1<v_1$).
\end{itemize}
Now, we are ready to define an exploration algorithm concerning special events associated with the renormalized graph ${\mathbb{L}}^2_+$. Inductively, we build a random sequence $(C_n,E_n)_{n \geq 0}$, where $C_n \subset {\mathbb{Z}}^2_+$ is the cluster of the origin $o\in{\mathbb{L}}^2_+$ and $E_n \subset \mathbb{E}_h \cup \mathbb{E}_v$ is the set of checked bonds up to step $n$. We will also define the functions
\begin{equation}
\tau : \cup_{n\geq 0} C_n \longrightarrow [0,6L) \times [0,H),
\end{equation}
that associates to each vertex $v\in\cup_{n\geq 0} C_n$ coordinates in an appropriate block; these coordinates will label the intervals over which the infection spreads in the CPDR;
\begin{equation}
\sigma: \cup_{n\geq 0} C_n \longrightarrow {\mathbb{Z}}_+
\end{equation}
that indicates the infected vertices in the original lattice $\mathcal{G}^1$; and
\begin{equation}
t^{\ast}: \cup_{n\geq 0} C_n \longrightarrow [0,\infty)
\end{equation}
will be defined in such a way that $\zeta_{t^{\ast}(v)}(\sigma(v)) = 1$.
To initialize the exploration process, let $o_* = (0,0) \in {\mathbb{Z}}^2_+$ be the origin of ${\mathbb{L}}^2_+$. If $A_{o_*}(0,0)$ occurs, define
\begin{equation}
C_0 = \{o_*\}, \quad E_0 = \emptyset, \quad \tau(o_*)=(0,0), \quad \sigma(o_*)=o \quad \text{and} \quad t^{\ast}(o_*)=1,
\end{equation}
and observe that
\begin{equation}
A_v(\tau(v)) \text{ occurs ~~and~~} \zeta_{t^{\ast}(v)}(\sigma(v)) = 1, ~\forall v \in C_0.
\end{equation}
Otherwise, the exploration does not start and we set $(C_n, E_n) = (\emptyset, \emptyset)$ for all $n \geq 0$.
As our induction hypothesis, suppose that for some $n \geq 0$, the sets $C_n, E_n$ are defined and for all $v \in C_n$ are also defined $\tau(v) = (\tau_1(v), \tau_2(v)),$ $\sigma(v), t^{\ast}(v)$, satisfying the conditions
\begin{equation}\label{eq: PInd}
\sigma(v) = v^{\tau_1(v)}, ~ t^{\ast}(v) = \inf J^{\tau_2(v)}_v, ~
A_v(\tau(v)) \text{ occurs}, ~ \zeta_{t^{\ast}(v)}(\sigma(v)) = 1
\end{equation}
and $\tau_1(v) \in [0,2L)\cap {\mathbb{Z}}$ if $v_2$ is even, and $\tau_1(v) \in [3L,5L)\cap {\mathbb{Z}}$ if $v_2$ is odd.
Define the set $F_{n} = \{ ( v,u )\in\mathbb{E}_h\cup\mathbb{E}_v : v \in C_n \text{ and } u \notin C_n \} \cap E_n^c$. If $F_n = \emptyset$, then we stop the algorithm and set $(C_m, E_m) = (C_n, E_n),\ \forall m > n$. Otherwise, let $( v, u ) \in F_n$ be the minimal bond following the order $\prec$, with $v = (v_1, v_2) \in C_n$ and $u = (u_1, u_2) \notin C_n$ and define $E_{n+1} = E_n \cup \{ ( v,u ) \}$. To define $C_{n+1}$ we will consider four different cases.
The first case is when $( v,u ) \in\mathbb{E}_h$ and $v_2$ is even. As $v$ is fixed we write $\tau(v) = (\tau_1, \tau_2)$.
We declare that the bond $( v, u )$ is {\em open} in ${\mathbb{L}}^2_+$ if, for some $j \in [0,L) \cap {\mathbb{Z}}$, the following event occurs:
\begin{equation}\label{eq: aberto1}
A_u(j,\tau_2) \cap \left\{ \mathcal{I}_{\langle \sigma(v), u^j \rangle} \cap J_v^{\tau_2} \neq \emptyset \right\}.
\end{equation}
In this case, we set
\begin{equation}
\tau(u) = (j,\tau_2), \quad \sigma(u) = u^j \quad \text{and} \quad t^{\ast}(u) = \sup J_v^{\tau_2}.
\end{equation}
Observe that if $( v,u )$ is open, it holds that $v_2 = u_2$ and $\tau_2(u) = \tau_2(v) = \tau_2$, keeping the same parity. Note also that $u_1 = v_1 + 1$; thus, by the definitions in \eqref{eq: I}, we have that $ t^{\ast}(v)+ 1 = t^{\ast}(u) = \sup J_v^{\tau_2} = \inf J_u^{\tau_2}$. By definition, $A_u(\tau(u)) = A_u(j,\tau_2)$ occurs. By the induction hypothesis, we have that $A_v(\tau(v))$ occurs and $\zeta_{t^{\ast}(v)}(\sigma(v)) = 1$, thus \eqref{eq: P1} and \eqref{eq: P2} imply that $r_{\sigma(v)}(t) \geq 7L > |\sigma(v) - \sigma(u)|$ and $\zeta_{t}(\sigma(v)) = 1$ for all $t \in J_v^{\tau_2}$. By \eqref{eq: P2} and observing that the event in \eqref{eq: aberto1} occurs, we conclude that $\zeta_{t^{\ast}(u)}(\sigma(u)) = 1$. Summarizing, all properties assumed in \eqref{eq: PInd} are also true for $u$.
The case $( v,u ) \in \mathbb{E}_h$ and $v_2$ odd is treated analogously; we only need to replace $j \in [0,L) \cap {\mathbb{Z}}$ by $j \in [3L,4L) \cap {\mathbb{Z}}$. Clearly, all properties above are kept.
Now, suppose that $( v,u ) \in \mathbb{E}_v$ and $v_2$ is even. According to the order $\prec$, it is possible that, for some time interval $\Delta_u$, the block $\mathcal{B}_u$ has already been analyzed in the CPDR. This occurs if $w := (u_1-1,u_2)\in C_n$; in this case $( w,u )$ is not open and belongs to $E_n$. More precisely, recalling that $u_2 = v_2 + 1$ is odd, the events $A_u(j,\tau_2(w))$, $j = 3L, \dots, 4L-1$, have already been analyzed. As we will see below, to define what it means for the bond $( v,u )$ to be open, we need to analyze only the events concerning the vertices $u^j$, $j=4L, \dots, 6L-1$; this guarantees that the open/closed statuses of the analyzed bonds are independent.
We say that the bond $( v,u )$ is open if, for some $j \in [4L, 5L) \cap {\mathbb{Z}}$ and $k \in [0,H)
\cap {\mathbb{Z}} $, the event $C_u(j,k)$ occurs and for some $x \in \mathcal{V}_u(j,k)$ also occurs the event
\begin{equation}\label{eq: aberto2}
D_x:= \Big\{\left( \Delta_v \cup \Delta_u \right) \cap \mathcal{R}_x = \emptyset \Big\} \cap \left\{\mathcal{I}_{\langle \sigma(v), x \rangle} \cap J_v^{\tau_2} \neq \emptyset \right\} \cap \left\{\mathcal{I}_{\langle x, u^j \rangle} \cap I_u^{k} \neq \emptyset \right\}.
\end{equation}
In this case, we set
\begin{equation}
\tau(u) = (j,k), \quad \sigma(u) = u^j \quad \text{and} \quad t^{\ast}(u) = \inf J_v^{k}.
\end{equation}
Remark that $u_2 = v_2 +1$ is odd and $\tau_1(u) = j \in [3L, 5L)\cap {\mathbb{Z}}$. By definition, all properties in \eqref{eq: PInd}, except $\zeta_{t^{\ast}(u)}(\sigma(u)) = 1$, have already been settled. Take some $s \in \mathcal{I}_{\langle \sigma(v), x \rangle} \cap J_v^{\tau_2}$; the hypotheses assumed for $v \in C_n$ imply that $\zeta_s(\sigma(v)) = 1$ and $r_{\sigma(v)}(s) \geq 7L > |\sigma(v) - x|$, and then $\zeta_s(x) = 1$. The first condition in \eqref{eq: aberto2} implies that $\zeta_t(x) = 1,$ $\forall t \in \left( \Delta_v \cup \Delta_u \right)\cap[s,\infty)$, in particular $\zeta_t(x) = 1,\ \forall t \in I_u^k$. Thus, under the occurrence of $C_u(j,k)$ and the third event in \eqref{eq: aberto2} (remember the definitions given in \eqref{eq: C-} and \eqref{eq: V-}, and the property \eqref{eq: P3}), we have that $r_x(t) \geq |x-u^j|,\ \forall t \in I_u^k$, and then the fact that $\zeta_{t^{\ast}(u)}(\sigma(u)) = \zeta_{t^{\ast}(u)}(u^j) =1$ follows from \eqref{eq: P2}.
The case $( v,u ) \in \mathbb{E}_v$ and $v_2$ odd is analogous, replacing $j \in [4L, 5L) \cap {\mathbb{Z}}$ by $j \in [L,2L) \cap {\mathbb{Z}}$.
We emphasize that both events $\{( v - \vec{e}_2, v ) \text{ is open}\}$ and $\{( v, v+ \vec{e}_2) \text{ is open}\}$, according to the first event in the rhs of \eqref{eq: aberto2}, use variables indexed by vertices in $\mathcal{B}_v$. However, there is no dependence since, due to the parity of $v_2$, the pairs of sets $\mathcal{V}_v(\cdot,\cdot)$ and $\mathcal{V}_{v+\vec{e}_2}(\cdot,\cdot)$ are located in different halves of $\mathcal{B}_v$.
To conclude our inductive construction, define
\begin{equation}\label{eq: Cnum}
C_{n+1} =
\begin{cases} C_n \cup \{u\}, &\text{if } ( v,u ) \text{ is open};\\
C_n, & \text{otherwise}.
\end{cases}
\end{equation}
Denote $C = \cup_{n\geq 0} C_n$. As remarked above, whenever $( v,u )$ is open, it follows that the properties in \eqref{eq: PInd} are also satisfied by $u$. In particular, our induction procedure shows that
\begin{equation}
\zeta_{t^{\ast}(v)}(\sigma(v)) = 1, ~ \forall v \in C.
\end{equation}
Thus, if $\left\vert C\right\vert = \infty$, then for all $a \in \mathbb{R}$ there exists $v = (v_1,v_2) \in C $ such that $v_1 + v_2 \geq a$; as $t^{\ast}(v) > v_1 + v_2 $ and $\zeta_t = \emptyset$ is an absorbing state, the fact that $\zeta_{t^{\ast}(v)}(\sigma(v)) = 1$ implies that $\zeta_t \neq \emptyset$ for all $t \in [0,a]$. As $a \in \mathbb{R}$ was chosen arbitrarily, we have that
\begin{equation}
\left\{ \vert C \vert = \infty \right\} \subset\{ \zeta_t \neq \emptyset, ~\forall t \geq 0\}.
\end{equation}
Thus, to conclude this proof it is enough to show that the constants $L$ and $H$ can be chosen in such a way that $P \left( \vert C \vert = \infty \right) > 0$.
\vspace{.2cm}
\textbf{Probability bounds and conclusion.} Given $v \in {\mathbb{Z}}^2_+$ and $(j,k) \in \left( [0,6L) \times [0,H) \right) \cap {\mathbb{Z}}^2$, by equations \eqref{eq: Atil} and \eqref{eq: A}, it holds that
\begin{equation}\label{eq: ProbA}
\rho_L:=P(A_v(j,k)) = (1 - e^{-1}) e^{-1} P(N \geq 7L) e^{-2}.
\end{equation}
Moreover, for each $i \in {\mathbb{N}}$ and each $j \in \big[[L,2L) \cup [4L,5L)\big]\cap {\mathbb{Z}}$, equations \eqref{eq: Bitil} and \eqref{eq: Bi} imply that
\begin{equation}\label{eq: ProbBi}
P(B_v^{i}(j,k)) = (1-e^{-1})e^{-1}P(N \geq i).
\end{equation}
Thus, the hypothesis $\limsup_{n \to \infty} nP(N \geq n) > 0$ implies that
\begin{equation}
\sum_{i \geq 1} P(B_v^{i}(j,k)) = \infty.
\end{equation}
From now on, fix an arbitrary $\delta> 0$. Recall the definition given in \eqref{eq: B}. The Borel--Cantelli Lemma implies that, given any $M \in {\mathbb{N}}$, there exists $L_0(\delta, M)$ large enough such that
\begin{equation}\label{eq: ProbB}
P(B_v(j,k)) > 1-\delta, ~\forall L \geq L_0.
\end{equation}
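As a purely illustrative aside (and not part of the argument), the following Python sketch estimates the probability that $\vert \mathcal{V}_v(j,k)\vert \geq M$ for a toy heavy-tailed distribution; the tail $P(N \geq n) = 1/n$ and the value $M=5$ are assumptions made only for the sketch, the former chosen because $\sum_n P(N\geq n)$ diverges, as in the hypothesis behind \eqref{eq: ProbB}.
\begin{verbatim}
import math, random

def sample_N():
    # Illustrative heavy tail with P(N >= n) = 1/n, so that
    # sum_n P(N >= n) diverges (assumption for this sketch only).
    return math.floor(1.0 / (1.0 - random.random()))

def count_good(L):
    # #{ i in [1,L] : N_{-i} >= i }, the quantity bounded below
    # by M in the event B_v(j,k).
    return sum(sample_N() >= i for i in range(1, L + 1))

M, trials = 5, 300
for L in (10, 100, 1000, 10000):
    freq = sum(count_good(L) >= M for _ in range(trials)) / trials
    print(L, round(freq, 2))  # increases towards 1, as in (eq: ProbB)
\end{verbatim}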
To analyze the probabilities of events related to vertical connections, we need to distinguish according to the parity of the second coordinate. Let us consider $v=(v_1, v_2) \in {\mathbb{Z}}^2$ with $v_2$ odd (the other case is analogous) and let $A_v$ be the union of independent events
\begin{equation}\label{eq: Av}
A_v:=\bigcup_{j=4L}^{5L-1}\bigcup_{k=0}^{H-1} A_v(j,k).
\end{equation}
Under the occurrence of $A_v$, define
\begin{equation}\label{eq: Jtil}
\tilde{j} := \max \left\{ j ~:~ \sum_{k=0}^{H-1} \mathbbm{1}_{ \{A_v(\ell,k)\} } = 0, ~\forall \ell \in [4L,j)\cap {\mathbb{Z}} \right\}
\end{equation}
and $\tilde{k} := \inf\{k : A_v(\tilde{j}, {k}) \text{ occurs}\}$. By \eqref{eq: C-}, \eqref{eq: ProbA}, \eqref{eq: Jtil} and the bound \eqref{eq: ProbB}, we conclude that
\begin{align}\label{eq: ProbC}
P&\left( \bigcup_{j=4L}^{5L-1}\bigcup_{k=0}^{H-1} C_v({j,k}) \right) \nonumber\\ &\geq \sum_{j=4L}^{5L-1}\sum_{k=0}^{H-1} \left[ P\left(A_v \cap \{(\tilde{j}, \tilde{k}) = (j,k)\} \right) P\left(C_v(j,k) \big\vert A_v \cap \{(\tilde{j}, \tilde{k}) = (j,k)\} \right) \right] \nonumber\\
&= \sum_{j=4L}^{5L-1}\sum_{k=0}^{H-1} \left[ P\left(A_v \cap \{(\tilde{j}, \tilde{k}) = (j,k)\} \right) P\left(B_v(j,k) \right) \right] \nonumber\\
&>(1-\delta) P(A_v)
= (1-\delta) \left[ 1 - (1-\rho_L)^{LH} \right], ~\forall L \geq L_0.
\end{align}
Recall the definition of $\rho_L$ in \eqref{eq: ProbA}, by the hypothesis $\limsup_{n \to \infty} nP(N \geq n) > 0$, there exists $a = a(N) > 0$ such that $\limsup_{L \to \infty} L\rho_L > a$. Fix $H=H(a,\delta)$ large enough, such that if $\rho_L > a/L$ then
\begin{equation}\label{eq: ProbC2}
\left[ 1 - (1-\rho_L)^{LH} \right] > (1-\delta).
\end{equation}
We fix $M=M(H,\lambda, \delta)$ large enough, such that under the occurrence of the event $C_v(j,k)$, we have that
\begin{equation}\label{eq: ProbM}
P\left(\bigcup_{x \in \mathcal{V}_u(j,k)}D_x \right)\geq 1 - \left[1 - \left( e^{-4H} (1-e^{-\lambda})^2 \right) \right]^M > 1-\delta,
\end{equation}
where $D_x$ is the event defined in \eqref{eq: aberto2}.
Remark that in each step of the algorithm, all explored edges are open independently of each other.
Since, for some $j \in [0,L)$ or some $j \in [3L,4L)$ (according to the appropriate parity), the event in \eqref{eq: aberto1} must occur, equation \eqref{eq: ProbA} implies that, if $e$ is an explored horizontal bond, that is $e \in \mathbb{E}_h \cap \left( \cup_{n \geq 0} E_n\right)$, then
\begin{equation}
p_{h,L}:= P(e~\text{is open}) = 1 - (1 - (1-e^{-\lambda})\rho_L)^{L}.
\end{equation}
Therefore, we can choose $\alpha = \alpha(\lambda, a)$ such that if $\rho_L > a/L$, it holds that $p_{h,L} > \alpha$.
Analogously, let $e \in \mathbb{E}_v \cap \left( \cup_{n \geq 0} E_n\right)$ be an explored vertical bond. According to the conditions given in the paragraph of \eqref{eq: aberto2}, the inequalities in \eqref{eq: ProbC} and \eqref{eq: ProbM} imply that, if $\rho_L > a/L$ and $L \geq L_0$, then
\begin{equation}
p_{v,L}:= P(e~\text{is open}) > \left[ 1 - (1-\rho_L)^{LH} \right](1-\delta)^2.
\end{equation}
Combining this last inequality with \eqref{eq: ProbC2}, it holds that
\begin{equation}
p_{v,L} > (1-\delta)^3.
\end{equation}
Thus, for $L$ such that $\rho_L > a/L$ and $L \geq L_0$, the exploration process dominates an oriented, anisotropic and independent percolation model on ${\mathbb{L}}^2_+$, where each horizontal or vertical bond is open with probability $\alpha$ and $(1-\delta)^3$, respectively. By a Peierls-type argument we can choose $\delta=\delta(\alpha)>0$ small enough such that this anisotropic percolation model is supercritical. This concludes the proof for the case $d=1$.
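As a purely illustrative aside (not part of the proof), one can check by simulation that an oriented anisotropic bond percolation on ${\mathbb{L}}^2_+$ with horizontal parameter $\alpha$ and vertical parameter $(1-\delta)^3$ survives with positive probability; the values of $\alpha$ and $\delta$ in the Python sketch below are placeholders, not the constants obtained above.
\begin{verbatim}
import random

def survives(alpha, delta, T=60):
    # Oriented bond percolation on Z_+^2: from v the bond to v + e1 is open
    # with probability alpha and the bond to v + e2 with probability
    # (1 - delta)**3.  A vertex (x, y) can only be reached at generation
    # x + y, so each bond is examined (and sampled) at most once.
    pv = (1.0 - delta) ** 3
    frontier = {(0, 0)}
    for _ in range(T):
        nxt = set()
        for (x, y) in frontier:
            if random.random() < alpha:
                nxt.add((x + 1, y))
            if random.random() < pv:
                nxt.add((x, y + 1))
        if not nxt:
            return False
        frontier = nxt
    return True

# alpha and delta are placeholders, not values derived in the paper.
alpha, delta, trials = 0.5, 0.01, 500
print(sum(survives(alpha, delta) for _ in range(trials)) / trials)
\end{verbatim}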
For $d \geq 2$ the proof is essentially the same. The main difference lies in the partition $(\mathcal{B}_v)_{v \in {\mathbb{Z}}_+^2}$. We set $V=\left[ [0,6L)\times[0,L)^{d-1} \right] \cap {\mathbb{Z}}^{d}$, $\Delta = [0,2H)$ and let $\mathcal{B}= V\times\Delta$ be a block in ${\mathbb{Z}}_+^{d} \times \mathbb{R}_+$. For each $v=(v_1,v_2)\in {\mathbb{Z}}_+^2$, define $\mathcal{B}_{v}=V_{v}\times\Delta_{v}$, where $V_{v}=6Lv_1\vec{e}_1+V$ and $\Delta_{v}= 2Hv_2 + v_1+\Delta$. Analogously, using the partition $(\mathcal{B}_v)_{v \in {\mathbb{Z}}_+^2}$, we construct a coupling between the CPDR on $\mathcal{G}^d$ and a supercritical percolation model on $\mathbb{L}^2_+$.
Remark that the CPDR survives with positive probability on a \emph{slab} of $\mathcal{G}^d$, that is, on the subgraph induced by the set of vertices $\left[ {\mathbb{Z}} \times [0,L)^{d-1} \right] \cap {\mathbb{Z}}^d$.
\qed
\section{Anisotropic percolation with random range}\label{s3}
\noindent The proof of Theorem~\ref{perc1} has some features in common with the proof of Theorem~\ref{cfinito2}, like the domination by a subcritical branching process, but the details are different. The proof of Theorem~\ref{perc2}, although much more involved than that of Theorem~\ref{perc1}, follows the same script as the proof of Theorem~\ref{cinf2} (indeed the former is simpler than the latter).
We remark that the statement of Theorem~\ref{perc2} is easily proved for non-oriented percolation. Indeed, under the hypothesis $\sum_{n\in{\mathbb{N}}} P(N\geq n)= +\infty$, by the Borel--Cantelli Lemma, for each vertex $v$ and for all $p\in (0,1]$, the set
$\{n \in {\mathbb{N}} : \langle -n\vec{e}_1,o\rangle \text{ and } \langle -n \vec{e}_1, v \rangle \text{ are open}\}$
is infinite \emph{a.s.}
The last subsection is dedicated to the one dimensional case and Theorems~\ref{perc3} and~\ref{perc4} will be proved.
\subsection{Proof of Theorem~\ref {perc1}}\label{sub31}
\noindent We will consider $d=2$; the same proof can be carried out with very minor modifications for all values of $d > 2$. Given $l\in{\mathbb{N}}$ consider the partition, $(\mathcal{S}^l_v)_{v\in{\mathbb{Z}}^2}$, of ${\mathbb{Z}}^2$ in one-dimensional boxes $\mathcal{S}^l_{(v_1,v_2)}:= (lv_1,v_2)+ \big[ \{0,1,\dots,l-1\}\times \{0\} \big]$. This partition will induce a percolation process in the renormalized graph $\tilde{G}=({\mathbb{Z}}^2,\mathbb{E}_v\cup \mathbb{E}_h)$ isomorphic to $G$.
We say that a bond $(u,v)$ in the renormalized lattice $\tilde{G}$ (that is, $v$ is $u+\vec{e}_2$ or $u+ n\vec{e}_1$ for some $n\in{\mathbb{N}}$) is {\em good} if and only if the event
\begin{equation}
\bigcup_{x\in \mathcal{S}^l_u}\bigcup_{y\in \mathcal{S}^l_v}\left((x,y)\mbox{ is open in } G\right)
\end{equation}
occurs. This construction guarantees that
\begin{equation}\label{renorm}
\{(0,0)\rightarrow\infty\mbox{ in }G\}\subset\{(0,0)\leadsto\infty\mbox{ in }\tilde{G}\},
\end{equation}
here we are using the notation $\leadsto$ to denote connections by good paths in the renormalized graph.
If $(u,v)$ is a vertical bond of $\tilde{G}$, that is $(u,v)\in\mathbb{E}_v$, then
\begin{equation}\label{cotav}
P((u,v)\mbox{ is good})=1-(1-q)^l.
\end{equation}
Given $p < 1$, if $(u,v)\in\mathbb{E}_h$ with $v=u+\vec{e}_1$, we claim that there exists some constant $c(p)<1$, not depending on $l$ or $q$, such that
\begin{equation}\label{cotah1}
P((u,v)\mbox{ is good})<c(p)
\end{equation}
and from now on we fix $l$ such that
\begin{equation}\label{defl}
\sum_{n\geq l}P(N\geq n)<\frac{1-c}{2}.
\end{equation}
We postpone the demonstration of this claim to the end of this proof. If $(u,v)\in\mathbb{E}_h$ with $v=u+n\vec{e}_1$, $n\geq 2$, it holds that
\begin{align}\label{cotahn}
\nonumber P((u,v)\mbox{ is good}) & \leq\sum_{i=1}^l P(N_{(lu_1+l-i,u_2)}\geq l(n-1)+i)\\
& =\sum_{i=1}^l P(N\geq l(n-1)+i).
\end{align}
We can use the bounds (\ref{cotav}), (\ref{cotah1}) and (\ref{cotahn}) to estimate the expected number of neighbours of any vertex $v$ of $\tilde{G}$ in its good cluster:
\begin{align}\label{cotaviz}
\nonumber E |\{u\in{\mathbb{Z}}^2 :& (v,u) \mbox{ is good} \}|\\
\nonumber &<1-(1-q)^l + c(p) +\sum_{n\geq 2}\sum_{i=1}^l P(N\geq l(n-1)+i)\\
&\leq 1-(1-q)^l + c(p) +\sum_{i\geq l} P(N\geq i).
\end{align}
Due to the inequalities (\ref{defl}) and (\ref{cotaviz}), we can take $q$ close enough to 0 such that
$E |\{u\in{\mathbb{Z}}^2:(v,u)\mbox{ is good} \}|<1$; indeed, the right-hand side of (\ref{cotaviz}) is at most $1-(1-q)^l + c(p) + (1-c)/2$, which is smaller than $1$ as soon as $(1-q)^l > c(p) + (1-c)/2$.
Thus, the percolation process on the renormalized graph $\tilde{G}$ is dominated by a subcritical branching process; then, by the inclusion (\ref{renorm}), we have that $\theta(p,q)=0$, yielding that $q_c(p)>0$ for all $p<1$.
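To make the choice of $l$ and $q$ concrete (again, only as an illustration), the following Python sketch evaluates the bound \eqref{cotaviz} for the toy tail $P(N\geq n)=n^{-2}$ and a placeholder value $c(p)=0.9$; both choices are assumptions made solely for this example, since $c(p)$ is only known to exist and to be smaller than $1$.
\begin{verbatim}
# Illustrative assumptions: P(N >= n) = n**(-2)  (so E[N] < infinity)
# and c(p) = 0.9 (a placeholder for the constant of the claim).
c = 0.9
tail = lambda n: float(n) ** -2
tail_sum = lambda l: sum(tail(n) for n in range(l, 10**5))

# choose l so that sum_{n >= l} P(N >= n) < (1 - c)/2, cf. (defl)
l = 1
while tail_sum(l) >= (1 - c) / 2:
    l += 1

def expected_good_neighbours(q):
    # right-hand side of (cotaviz)
    return (1 - (1 - q) ** l) + c + tail_sum(l)

# take q small enough that the expectation drops below 1
q = 1e-4
while expected_good_neighbours(q) >= 1:
    q /= 2

print(l, q, expected_good_neighbours(q))
\end{verbatim}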
We conclude by proving the claim made above: given any $p<1$ and $l\in{\mathbb{N}}$, there exists a constant $c:=c(p)<1$, not depending on $l$ (indeed $c$ depends only on $p$ and on the distribution of $N$), such that
\begin{equation}
P((u,v)\mbox{ is good}) < c(p),\ \forall (u,v)\mbox{ with } v = u+\vec{e}_1.
\end{equation}
For this, it is enough to prove that $P(A)<1$, where $A$ is defined as follows:
\begin{equation}
A:=\bigcup_{i=1}^\infty\bigcup_{j = 0}^{\infty}\Big((-i\vec{e}_1, j\vec{e}_1)\mbox{ is open in }G\Big).
\end{equation}
Define $i_0:=\inf\{i \geq 0 : P(N = i) > 0\}$. Let $A_0 = \emptyset$ and, for $i_0 \geq 1$, consider the event $A_{i_0}$ defined by
\begin{equation}
A_{i_0}:=\bigcup_{i=1}^{i_0}\bigcup_{j = 0}^{\infty}\Big((-i\vec{e}_1,j \vec{e}_1 )\mbox{ is open in }G\Big).
\end{equation}
Observe that, since $p < 1$, it holds that $P(A_{i_0}) < 1$. Therefore
\begin{align*}
P(A^c) \geq P\left(\bigcap_{i> i_0} \{N_{(-i,0)}<i\}\cap A_{i_0}^c\right)
\geq \prod_{i > i_0} (1-P\{N\geq i\})\cdot P\{A_{i_0}^c\}>0,
\end{align*}
since $E N=\sum_{i=1}^\infty P\{N\geq i\}<+\infty$. This concludes the proof of our claim.
\qed
\begin{remark}
Observe that if $i_0=0$, that is $P(N=0)>0$, we can extend Theorem~\ref{perc1} for $p=1$.
\end{remark}
\subsection{Proof of Theorem~\ref {perc2}}\label{sub32}
\noindent We will prove that $P(o\rightarrow\infty)>0$ for all $p,q>0$. It is enough to show this only for $d=2$.
We follow the same script as in the proof of Theorem~\ref{cinf2}. Since we are considering oriented percolation, there are some changes in the geometry of the renormalized lattice. Now, the partition $(\mathcal{B}_{v})_{v\in{\mathbb{Z}}_+^2}$ of ${\mathbb{Z}}_+^2$ is defined as follows: given integer numbers $L$ and $H$, for each vertex $v = (v_1, v_2)$ of the lattice $\mathbb{L}_+^2$, let $o(v) = (3Lv_2 + 6Lv_1, Hv_2) \in {\mathbb{Z}}^2$ and define $\mathcal{B}_v = o(v) + \mathcal{B}$, where $\mathcal{B} = \big[ [0,6L) \times [0,H) \big] \cap {\mathbb{Z}}^2$.
For each $v \in {\mathbb{Z}}_+^2$ and each $(j,k) \in \mathcal{B}$, define $v(j,k) = o(v) + (j,k)$, the associated vertex in $\mathcal{B}_v$. Fix an integer $M \leq L$; we also define the following events
\begin{align}\label{eq: eventosnovos}
& A_v(j,k) = \{ N_{v(j,k)} \geq 7L \}; \nonumber \\
& B^i_v(j,k) = \{ N_{v(j,k) - i \vec{e_1} } \geq i \}, ~i \geq 1\mbox{ and } \nonumber \\
& B_{v}(j,k) = \{\big\vert \mathcal{V}_v(j,k) \big\vert \geq M\}, \text{ where } \mathcal{V}_v(j,k) = \{ i \in [1,L] : B^i_v(j,k) \text{ occurs} \}.
\end{align}
Moreover, for each $v \in {\mathbb{Z}}_+^2$ if $(j,k) \in \mathcal{B}$ is such that $j \in [2L, 3L)$, define
\begin{equation}
C_v(j,k) = A_v(j,k) \cap B_v(j,k) \cap \left\{ \sum_{m=0}^{H-1} \mathbbm{1}_{ \{A_v(\ell,m)\} } = 0, ~\forall \ell \in (j,L)\cap {\mathbb{Z}} \right\}.
\end{equation}
Now, we define the exploration algorithm on the oriented lattice ${\mathbb{L}}^2_+$. Inductively, we build a random sequence $(C_n,E_n)_{n \geq 0}$, where, we recall, $C_n \subset {\mathbb{Z}}^2_+$ and $E_n \subset \mathbb{E}_h \cup \mathbb{E}_v$ is the set of checked bonds up to step $n$. We will also define the \emph{golden coordinates}
\begin{equation}\label{eq: goldtau}
\tau : \cup_{n\geq 0} C_n \longrightarrow [0,3L) \times [0,H),
\end{equation}
that associates to each vertex $v \in \cup_{n\geq 0} C_n$ the \emph{golden vertex}
\begin{equation}\label{eq: goldv}
v^{\ast} = v(\tau(v)) \in \mathcal{B}_v.
\end{equation}
By construction we will have that, if $v \in \cup_{n \geq 0} C_n$
then $\big(o \to v^{\ast} \big)$ occurs on the APRR model.
If $A_o(0,0)$ occurs, define $C_0 = \{o\}$, $E_0 = \emptyset,$ and $\tau(o)=(0,0)$. Observe that, according to \eqref{eq: eventosnovos}, \eqref{eq: goldtau} and \eqref{eq: goldv} we have
\begin{equation}
N_{v^{\ast}} \geq 7L \text{~~and~~} o \to v^{\ast}, ~\forall v \in C_0.
\end{equation}
Otherwise, the exploration does not start and set $(C_n, E_n) = (\emptyset, \emptyset)$ for all $n \geq 0$.
As our induction hypothesis, suppose that for some $n \geq 0$, the sets $C_n, E_n$ are defined and for all $v \in C_n$ are also defined $\tau(v)$ and $v^{\ast}$ satisfying the conditions
\begin{equation}\label{eq: HI}
N_{v^{\ast}} \geq 7L \text{~~and~~} o \to v^{\ast}, ~\forall v \in C_n.
\end{equation}
Define the set $F_{n} = \{ ( v,u ) : v \in C_n \text{ and } u \notin C_n \} \cap E_n^c$. If $F_n = \emptyset$, then we stop the algorithm and set $(C_m, E_m) = (C_n, E_n),\ \forall m > n$. Otherwise, let $( v, u ) \in F_n$ be the minimal bond following the order $\prec$ (defined in Section~\ref{sub22}), with $v = (v_1, v_2) \in C_n$ and $u = (u_1, u_2) \notin C_n$ and define $E_{n+1} = E_n \cup \{ ( v,u ) \}$. We denote $\tau(v) = (\tau_1, \tau_2)$. To define $C_{n+1}$ we will consider two different cases.
Suppose that $(v,u) \in \mathbb{E}_h$. We say that $(v,u)$ is open if, for some $j \in [0,L) \cap {\mathbb{Z}}$, the following event occurs
\begin{equation}\label{eq: openv}
\left\{\big( v^{\ast} , u(j,\tau_2) \big) \text{ is open in } G_{\textbf{N}} \right\} \cap
A_u(j,\tau_2).
\end{equation}
In this case we set $\tau(u) = (j,\tau_2)$. Note that, since we assume \eqref{eq: HI}, $\big( v^{\ast},u(j,\tau_2) \big)$ is indeed an edge in the random graph $G_{\textbf{N}}$. According to the definition of $A_u(j,k)$ in \eqref{eq: eventosnovos}, combining \eqref{eq: HI} and \eqref{eq: openv}, it holds that $ o \to u^{\ast}$ and $N_{u^{\ast}} \geq 7L$.
Suppose that $(v,u) \in \mathbb{E}_v$. We say that $(v,u)$ is open if, for some $(j,k) \in \big[ [2L,3L)\times [0,H) \big]\cap {\mathbb{Z}}^2$, the event $C_u(j,k)$ occurs and, moreover, there exists $i \in \mathcal{V}_u(j,k)$ such that $D_i$ occurs, where $D_i = \cap_{\ell = 1}^4 D_i^{\ell}$ with
\begin{align}\label{eq: openh}
D_i^1 &:= \left\{\big(v^{\ast} , v^{\ast} + (3L - \tau_1 + j - i) \vec{e}_1 \big) \text{ is open in } G_{\textbf{N}} \right\}; \nonumber \\
D_i^2 &:= \left\{\big(u(j,k) - i\vec{e}_1 ,u(j,k) \big) \text{ is open in } G_{\textbf{N}} \right\}; \nonumber \\
D_i^3 &:=\bigcap_{m=0}^H \left\{\big(v(3L+j-i,m), v(3L+j-i,m)+ \vec{e}_2 \big) \text{ is open in } G_{\textbf{N}} \right\}\mbox{ and} \nonumber \\
D_i^4 &:=\bigcap_{m=0}^{H-1} \left\{\big(u(j-i,m), u(j-i,m)+ \vec{e}_2 \big) \text{ is open in } G_{\textbf{N}} \right\}.
\end{align}
In this case we set $\tau(u) = (j,k)$. First note that $A_u(j,k)$ occurs, thus $N_{u^{\ast}} \geq 7L$. Throughout this paragraph, as an auxiliary notation, let $x = v^{\ast} + (3L + j - \tau_1 - i)\vec{e}_1$ and $y = u(j,k) - i\vec{e}_1$. Assuming condition \eqref{eq: HI}, it holds that $(v^{\ast},x)$ is an edge in the random graph $G_{\textbf{N}}$. We also have that $(y, u(j,k))$ is an edge in the random graph $G_{\textbf{N}}$, since $B^i_u(j,k)$ occurs. Note that $x$ and $y$ have the same first coordinate, and, since $D_i^3$ and $D_i^4$ occur, all edges in the column containing $x$ and $y$ that are contained in $\mathcal{B}_v \cup \mathcal{B}_u$ are open; therefore $x \to y$. Wrapping up, we conclude that $o \to u^{\ast}$ and $N_{u^{\ast}} \geq 7L$.
To finish our inductive procedure, define
$C_{n+1}$ as in \eqref{eq: Cnum}.
We emphasize that, by construction, for each explored edge $e \in \cup_{n \geq 0} E_n$, the event $\{e \text{ is open}\}$ is independent of all the others.
By the hypothesis $\limsup_{n \to \infty}nP(N \geq n) > 0$, there exists $a > 0$ such that $\mathcal{L} = \{ L \in {\mathbb{N}} : P(N \geq 7L) > a/L \}$ is infinite. Let $\alpha= \alpha(N,p)$ be such that
\begin{equation}\label{eq: alpha2}
1- \Big( 1- P(N \geq 7L)p \Big)^L \geq \alpha, ~\forall L \in \mathcal{L}.
\end{equation}
Fix an arbitrary $\delta > 0$. Let $H = H(N,\delta)$ be such that
\begin{equation}
1- \Big( 1- P(N \geq 7L) \Big)^{LH} \geq (1-\delta), ~\forall L \in \mathcal{L}.
\end{equation}
Fix $M = M(H,p,q, \delta)$ such that
\begin{equation}
1- \Big( 1- p^2 q^{4H-1} \Big)^{M} \geq (1-\delta), ~\forall L \in \mathcal{L}.
\end{equation}
Since $\sum_{n \geq 1} P(N \geq n) = \infty$, there exists $L_0 = L_0(N,M,\delta)$ such that
\begin{equation}\label{eq: Lzero}
P(B_v(j,k)) \geq (1-\delta), ~\forall L \geq L_0.
\end{equation}
Thus, according to the conditions given in the paragraphs of \eqref{eq: openv} and \eqref{eq: openh}, if $L \in \mathcal{L}$ and $L \geq L_0$, the inequalities \eqref{eq: alpha2} -- \eqref{eq: Lzero} ensure that, for explored edges, $P(e \text{ is open}) \geq \alpha$ if $e \in \mathbb{E}_h$, and (arguing as in the paragraph of \eqref{eq: ProbC}) $P(e \text{ is open}) \geq (1 - \delta)^3$ if $e \in \mathbb{E}_v$. As $\delta$ was fixed arbitrarily, the theorem follows by taking $\delta$ sufficiently small.
\qed
\subsection{One dimensional percolation with random range}\label{sub33}
\noindent As mentioned in the introduction, Theorem~\ref{perc2} suggests that, under the hypothesis $\limsup_{n\rightarrow +\infty}nP(N\geq n)>0$, percolation could occur in the one-dimensional model (that is, $q=0$ in the $d$-dimensional model), since $q_c(p)=0$ for all $p\in[0,1)$. In fact this is not always true, even if $p=1$. For example, consider the random variable $N$ with distribution such that
\begin{equation*}
P(N\geq n)=1-e^{-\frac{\beta}{n}}
\end{equation*}
Theorem~\ref{perc3} shows that there is a phase transition in the parameter $\beta$, in the sense that the origin percolates with positive probability if $\beta>1$ and with null probability if $\beta\in (0,1)$.
Remark that throughout this section we deal with the graph $G:=({\mathbb{Z}},\mathbb{E})$, where $\mathbb{E}:=\{(i,j)\in{\mathbb{Z}}\times{\mathbb{Z}}:i<j\}$, and, given the sequence ${\bf N} := (N_i)_{i\in{\mathbb{Z}}}$ of i.i.d.\ random variables with common distribution $N$, we consider a standard Bernoulli bond percolation with parameter $p$ on the random graph
\begin{equation*}
G_{\bf N}:=({\mathbb{Z}},\cup_{i\in{\mathbb{Z}}} \{(i,i+n)\in\mathbb{E}:n\leq N_i\}).
\end{equation*}
\subsubsection*{Proof of Theorem~\ref{perc3}}
\noindent In the case $p=1$, the cluster of the origin $C_0$ is a random subset of ${\mathbb{Z}}_+$, measurable with respect to ${\bf N}=\{N_i\}_{i \geq 0}$.
We define the sequence $(i_n)_{n \in{\mathbb{N}}}$, recursively, as follows: $i_0 = -1$, $i_1 = 0$ and $\forall n \geq 1$, if $i_{n} \neq i_{n-1}$ set
$
i_{n+1} = \max \{ i+N_i : i \in (i_{n-1}, i_n]\},
$
otherwise, set $i_{n+1} = i_n$. Define also
\begin{equation}\label{eq: Xn}
X_n := i_{n+1} - i_n, ~n \geq 1.
\end{equation}
Observe that $(X_n)_{n \geq 1}$ is a homogeneous Markov chain and $|C_0| = \infty \Longleftrightarrow X_n \neq 0, \forall n \geq 1$.
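Before analysing $(X_n)_{n\geq 1}$ rigorously, it may help to simulate the chain for the distribution $P(N\geq n)=1-e^{-\beta/n}$; the Python sketch below (for $p=1$) is only a heuristic illustration of the dichotomy around $\beta=1$, and the stopping horizon is an arbitrary choice made for the sketch.
\begin{verbatim}
import random

def sample_N(beta):
    # P(N >= n) = 1 - exp(-beta/n) for n >= 1, via N = floor(beta / E),
    # E ~ Exp(1): P(E <= beta/n) = 1 - exp(-beta/n).
    e = max(random.expovariate(1.0), 1e-300)   # guard against e == 0.0
    return int(beta / e)

def survives(beta, horizon=10**4):
    # The recursion i_{n+1} = max{ i + N_i : i in (i_{n-1}, i_n] } (p = 1);
    # the cluster of the origin is infinite iff X_n = i_{n+1} - i_n
    # never hits 0.
    i_prev, i_cur = -1, 0
    while i_cur < horizon:
        i_next = max(i + sample_N(beta)
                     for i in range(i_prev + 1, i_cur + 1))
        if i_next == i_cur:      # X_n = 0: a cutting point appeared
            return False
        i_prev, i_cur = i_cur, i_next
    return True                  # no cutting point up to the horizon

for beta in (0.5, 2.0):
    trials = 30
    print(beta, sum(survives(beta) for _ in range(trials)) / trials)
\end{verbatim}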
To study the asymptotic behaviour of the chain $(X_n)_{n \geq 1}$, we analyse the ratio $W_n:=X_{n+1}/X_n$, $n \geq 1$. For $\beta < 1$, where there is no percolation, the idea will be to define a random variable $\tilde{Y}$ such that the sequence $(W_n)_n$ is dominated by an i.i.d.~sequence with the same distribution as $\tilde{Y}$. For this purpose, consider the auxiliary function $f:(0,\infty) \longrightarrow {\mathbb{N}}$ defined by $f(t) = \lfloor t+1 \rfloor = \max\{n \in {\mathbb{N}} : n \leq t+1\}$.
Given $t \in (0,\infty)$ and $m \in {\mathbb{N}}$, we have that
\begin{align}
P( W_n \leq t \vert X_n = m) &= P(X_{n+1} \leq mt \vert X_n = m) \nonumber \\
&= \prod_{i = i_n + 1}^{i_{n+1}} P(i + N_i - i_{n+1} < f(mt) \vert X_n = m) \nonumber \\
&=\prod_{j=0}^{m-1} P(N < f(mt)+j)
= \exp\left\{ -\beta \sum_{j=0}^{m-1} \frac{1}{f(mt)+j} \right\}.
\end{align}
Denoting
\begin{equation}\label{eq: r}
r(m,t) := \exp\left\{ -\beta\left[ \sum_{j=0}^{m-1} \frac{1}{f(mt)+j} - \int_{0}^{m} \frac{1}{f(mt)+s} ds \right] \right\},
\end{equation}
our expression for $P( W_n \leq t \vert X_n = m)$ can be written as
\begin{align}\label{eq: expressao}
P( W_n \leq t \vert X_n = m) &=\exp\left\{ -\beta \int_{0}^{m} \frac{1}{f(mt)+s} ds \right\} r(m,t) \nonumber \\
&= \left( \frac{f(mt)}{f(mt) + m}\right)^{\beta}r(m,t)
.
\end{align}
Taking the limit when $m\rightarrow +\infty$ in the equation above, it holds for any $t \in (0,\infty)$ that
\begin{equation}\label{eq: convergencia}
\lim_{m \to \infty} P( W_n \leq t \vert X_n = m) = \lim_{m \to \infty} \left[ \left( \frac{f(mt)}{f(mt) + m}\right)^{\beta}r(m,t) \right]
= \left( \frac{t}{t+1}\right)^{\beta}.
\end{equation}
Thus, as $m$ goes to infinity, the conditional distribution of $ W_n \vert X_n = m$ converges weakly to a random variable $Y$ with distribution function given by
\begin{equation}\label{eq: FY}
F_Y(t) := P(Y \leq t) = \begin{cases}
0, &\text{ if } t \leq 0, \\
\left(\frac{t}{t+1}\right)^{\beta}, &\text{ if } t > 0.
\end{cases}
\end{equation}
Given any $t \geq 0$, observe that
$P(\log Y \geq t) = P(Y \geq e^t) = 1- [e^t/(1+ e^t)]^{\beta}$
and
$P(\log Y \leq -t) = P(Y \leq e^{-t})= [e^{-t}/(1+e^{-t})]^{\beta} = 1/(1+ e^t)^{\beta}$.
Thus, $P(\log Y \geq t) = P(\log Y \leq -t)$ if $\beta=1$, which implies that $E[\log Y] = 0$, since $\log Y$ is integrable. Note that $P(\log Y \geq t)$ is an increasing function of $\beta$, whilst $P(\log Y \leq -t)$ is a decreasing one. Then,
\begin{equation}\label{eq : ElogY}
E[\log Y] \begin{cases}
< 0, &\text{ if } \beta < 1, \\
> 0, &\text{ if } \beta > 1.
\end{cases}
\end{equation}
We split the proof, considering the cases $0 < \beta < 1$ and $\beta>1$ separately.
\vspace{0.4cm}
\noindent{\bf \underline{Case $0 < \beta < 1$:}}
\vspace{0.4cm}
The idea to obtain the mentioned domination by i.i.d~variables is to follow the method developed in Section 3.1 of~\cite{FGS}. By Equation~\eqref{eq : ElogY}, it holds that $E[\log Y] < 0$ and, for $ t > 0$,
\begin{equation}
E\left[Y^t\right] = \int_{0}^{\infty} P\left(Y \geq s^{1/t}\right) ds = \int_{0}^{\infty} \left[ 1 - \left( \frac{s^{1/t}}{1+s^{1/t}}\right)^{\beta}\right] ds.
\end{equation}
Then, $E[Y^t] < \infty$ for all $0\leq t < 1$. Moreover, we can write
\begin{align}
E\left[Y^{-t}\right] = \int_{0}^{\infty} P\left(Y \leq s^{-1/t}\right) ds &= \int_{0}^{\infty} \left( \frac{s^{-1/t}}{1+s^{-1/t}}\right)^{\beta} ds \nonumber \\ &= \int_{0}^{\infty} \left( \frac{1}{1+s^{1/t}}\right)^{\beta} ds,
\end{align}
concluding that $E[Y^{-t}] < \infty$ for all $0 \leq t < \beta$. This allows us to define the differentiable function $\phi : (-\beta, 1) \longrightarrow \mathbb{R}$ given by
\begin{equation}
\phi(t) := E\left[Y^t\right] = E\left[e^{t\log Y}\right].
\end{equation}
The function $\phi$ satisfies $\phi'(0) = E[\log Y] < 0$ and $\phi(0) = 1$, then there exists $\theta \in (0, 1)$ such that $\phi(\theta) < 1$, that is, $E[Y^{\theta}] < 1$.
Given any $K \in {\mathbb{N}}$, define $M = 2^K$ and the sequence $(a_j)_{j\geq 0}$ given by
\begin{equation}\label{eq: a}
a_j := \begin{cases}
j/M, &j=0, \dots, M^2, \\
2^{K + (j-M^2)}, &j > M^2.
\end{cases}
\end{equation}
For each $j \geq 1$, denote $I_j := (a_{j-1}, a_j]$ and let $Y_M$ be the random variable defined as
\begin{equation}\label{eq: alpha}
Y_M := \sum_{j=1}^{M^2} a_j\mathbbm{1}_{Y \in I_j}.
\end{equation}
Fix some real number $\alpha$ such that $E[Y^{\theta}] < \alpha < 1$. By the Dominated Convergence Theorem, we have that
\begin{equation}
E\left[Y_M^{\theta}\right] < \alpha < 1,
\end{equation}
for all $M$ large enough.
Given any $\delta > 0$, we define the weights
\begin{equation}
p_j(M,\delta) := \begin{cases}
P(Y \in I_j) + \delta, &\text{ if } j = 1, \dots, M^2,\\
1 - \left[ F_Y(a_{j-1})(1 - \beta/a_{j-1} )\right], &\text{ if } j > M^2,
\end{cases}
\end{equation}
and the quantity $C(M,\delta) := \sum_{j\geq 1} p_j(M, \delta)$.
Let $\tilde{Y}$ be a discrete random variable taking values on the set $\{ a_j : j\geq 1\}$ in such a way that
\begin{equation}\label{eq: tildeY}
P\left(\tilde{Y} = a_j\right) = \frac{p_j(M,\delta)}{C(M,\delta)}, ~j \geq 1.
\end{equation}
Then,
$
E[ \tilde{Y}^{\theta} ] = C(M,\delta)^{-1} \sum_{j \geq 1} a_j^{\theta}p_j(M,\delta).
$
Recalling the definitions of $F_Y$ and $(a_j)_{j\geq 0}$ given in \eqref{eq: FY} and \eqref{eq: a}, respectively, we have that
\begin{equation}
\sum_{j > M^2} a_j^{\theta}p_j(M,\delta) = \sum_{\ell \geq 0} \left\{ 2^{(K + \ell + 1)\theta} \left[ 1 - \left( \frac{2^{K+ \ell}}{1 + 2^{K+\ell}}\right)^{\beta} \left(1 - \frac{\beta}{2^{K+\ell}} \right) \right] \right\}.
\end{equation}
Thus, there exists a constant $c(\beta) > 0$ such that the series above can be bounded above by $c(\beta) \sum_{\ell \geq 0} 2^{(\theta - 1)(K+\ell)}$ for all $K$ large enough. Since $\theta < 1$, we conclude that $\sum_{j > M^2} a_j^{\theta}p_j(M,\delta)$ tends to $0$ as $K\rightarrow +\infty$.
From now on, we fix $K$ (hence $M = 2^K$) large enough and $\delta(M)$ small enough such that, by the definition of $\alpha$, it holds that
\begin{align}\label{eq: desigualdade_alpha}
E\left[ \tilde{Y}^{\theta} \right] &= \frac{1}{C(M,\delta)} \sum_{j \geq 1} a_j^{\theta}p_j(M,\delta) \nonumber \\ &= \frac{1}{C(M,\delta)} \left[ E\left[Y_M^{\theta} \right] + \delta \sum_{j = 1}^{M^2} a_j^{\theta} + \sum_{j > M^2} a_j^{\theta}p_j(M,\delta) \right] \leq \frac{\alpha}{C(M, \delta)}.
\end{align}
For shortness, in the rest of this proof we denote $C(M,\delta)$ and $p_j(M,\delta)$ only by $C$ and $p_j$, respectively.
Due the convergence in \eqref{eq: convergencia}, for all $j = 1, \dots, M^2$, it holds that $P(W_n \leq a_j \vert X_n = m) \to F_Y(a_j)$ when $m \to \infty$. Thus, there exists $m_0 \in {\mathbb{N}}$ such that
\begin{equation}\label{eq: d1}
P(W_n \in I_j \vert X_n \geq m_0) \leq P(Y \in I_j) + \delta = p_j\ \forall j = 1, \dots, M^2.
\end{equation}
Given the definition of $r(m,t)$ in \eqref{eq: r}, it holds for all $t > 0$ that we have the lower bound $r(m,t) \geq e^{-\beta/f(mt)} \geq 1- \beta/f(mt) \geq 1 - \beta/t$, uniformly in $m \in {\mathbb{N}}$. Then, by Equation~\eqref{eq: expressao}, for all $j > M^2$ it holds that
\begin{align}\label{eq: d2}
P(W_n \in I_j \vert X_n \geq 1) &\leq P(W_n > a_{j-1} \vert X_n \geq 1) \nonumber \\
&\leq 1 - F_Y(a_{j-1}) r(X_n,a_{j-1}) \nonumber \\
&\leq 1 - F_Y(a_{j-1})(1- \beta/a_{j-1}) = p_j.
\end{align}
With the inequalities obtained in \eqref{eq: d1} and \eqref{eq: d2} at hand, we are almost ready to obtain the domination of $X_{n+1} = X_1\prod_{j=1}^{n} W_j$ by a product of i.i.d.\ copies of $\tilde{Y}$.
For each $k \in {\mathbb{N}}$, consider the set $\Lambda_k$ defined as follows:
\begin{equation}\label{eq: Lambdak}
\Lambda_k := \left\{ (\lambda_1, \dots, \lambda_k ) \in {\mathbb{N}}^k: \prod_{j=1}^{\ell} a_{\lambda_j} \geq 1, ~\forall \ell = 1, \dots, k \right\}.
\end{equation}
By equations \eqref{eq: d1} and \eqref{eq: d2}, it holds for all $n \in {\mathbb{N}}$ that,
\begin{align}\label{eq: ex1}
P&\left( X_{n+ \ell} \geq X_n, \forall \ell = 1, \dots, k \vert X_n \geq m_0 \right)
= \nonumber \\
&P\left( \prod_{j=1}^{\ell} W_{n+j} \geq 1, \forall \ell = 1, \dots, k \vert X_n \geq m_0 \right) \leq \nonumber \\
&\sum_{\Lambda_k} P\left( W_{n+ j} \in I_{\lambda_j}, j = 1, \dots, k \vert X_n \geq m_0 \right) \leq \sum_{\Lambda_k}\left[ \prod_{j=1}^k p_{\lambda_j}\right].
\end{align}
Let $\tilde{Y}_1, \dots, \tilde{Y}_k$ be i.i.d.~copies of $\tilde{Y}$, as defined in \eqref{eq: tildeY}. For shortness, we denote the event $\left\{ (\tilde{Y}_1, \dots, \tilde{Y}_k) = (a_{\lambda_1}, \dots, a_{\lambda_k}) : (\lambda_1, \dots, \lambda_k) \in \Lambda_k \right\}$ simply by $(\tilde{Y}_1, \dots, \tilde{Y}_k) \sim \Lambda_k$; thus
\begin{equation}\label{eq: ex2}
\sum_{\Lambda_k}\left[ \prod_{j=1}^k p_{\lambda_j}\right] = C^k \sum_{\Lambda_k}\left[ \prod_{j=1}^k \frac{p_{\lambda_j}}{C}\right]
= C^k P\left((\tilde{Y}_1, \dots, \tilde{Y}_k) \sim \Lambda_k \right).
\end{equation}
According to the definition of $\Lambda_k$ given in \eqref{eq: Lambdak}, by inequality \eqref{eq: desigualdade_alpha}, we have that
\begin{align}\label{eq: ex3}
P\left((\tilde{Y}_1, \dots, \tilde{Y}_k) \sim \Lambda_k \right) \leq P\left(\prod_{j=1}^k \tilde{Y}_j \geq 1 \right) &= P\left(\prod_{j=1}^k \tilde{Y}_j^{\theta} \geq 1 \right) \nonumber \\
&\leq E\left[ \tilde{Y}^{\theta} \right]^k \leq \left(\frac{\alpha}{C}\right)^k.
\end{align}
Thus, combining the equations \eqref{eq: ex1}, \eqref{eq: ex2} and \eqref{eq: ex3}, we conclude that
\begin{equation}
P\left( X_{n+ \ell} \geq X_n, \forall \ell = 1, \dots, k \vert X_n \geq m_0 \right) \leq \alpha^k,
\end{equation}
yielding that
$P(\exists n \in {\mathbb{N}} : X_{n+\ell} \geq X_n \geq m_0, \forall \ell \geq 1) = 0,$
since $\alpha<1$. In particular, $P(\lim_{n \to \infty} X_n = \infty) = 0$.
Noting that
$P\left( \left\{X_n \neq 0, \forall n \geq 1 \right\} \cap \left\{ \lim_{n \to \infty} X_n = \infty \right\}^{c} \right) = 0,$
we obtain $P(X_n \neq 0, \forall n \geq 1)=0$. This concludes the case $0<\beta<1$.
\vspace{0.4cm}
\noindent{\bf \underline{Case $\beta >1$:}}
\vspace{0.4cm}
We say that a vertex $i \in {\mathbb{Z}}$, $i > 0$, is a \emph{cutting point} if $j + N_j < i$ for all $j=0,\dots,i-1$. Let $A_i$ be the event that $i$ is a cutting point; then
\begin{equation}
P(A_i) = \prod_{\ell = 1}^{i}P(N < \ell) = \exp\left\{-\beta \sum_{\ell =1}^i \frac{1}{\ell} \right\}.
\end{equation}
Observe that $\sum_{i \geq 1} P(A_i) < \infty$, since $\beta>1$ and $\sum_{\ell=1}^i 1/\ell \geq \log i$ yield $P(A_i) \leq i^{-\beta}$. Then, by the Borel--Cantelli Lemma, it holds that $P(A_i ~ i.o.) = 0$. Remark that we are dealing with the case $p =1$; then
\begin{equation}\label{eq: existeAi}
\{ |C_0| < \infty\} = \bigcup_{i \geq 1} A_i.
\end{equation}
By the strong Markov property, if $P(\cup_{i \geq 1} A_i) = 1$ then $P(A_i ~i.o.) = 1$. Thus $P(\cup_{i \geq 1} A_i) < 1$ and the result follows from \eqref{eq: existeAi}.
\qed
\bigskip
The last argument above can be adapted to show that, if $0 < p < 1$, then there is percolation for all $\beta > 1/p$, establishing the existence of a critical parameter $\beta_c(p) \in [1, p^{-1}]$. This is the content of Theorem~\ref{perc4}.
\subsubsection*{Proof of Theorem~\ref{perc4}}
\noindent We define recursively the sequence $(j_n)_{n\geq 0}$ as follows: set $j_0 = o$ and $u_0 = N_0$. For any $n \geq 0$ and $i \in (j_{n}, u_{n}]$, define
$B_i = \{(j_{n}, i) \text{ is open}\}$,
$\tilde{N}_i = N_i \mathbbm{1}_{B_i}$ and $a_{n+1} = \max\{ i + \tilde{N}_i : i \in (j_{n}, u_{n}]\}$. If $a_{n+1} = u_n$, we say that $u_n$ is a cutting point and restart the recursion, defining $j_{n+1} = u_n + 1$ and $u_{n+1} = u_n + N_{u_n} $. Otherwise, we set $u_{n+1} = a_{n+1}$ and define $j_{n+1} \in (j_{n}, u_{n}]$ in such a way that $j_{n+1} + \tilde{N}_{j_{n+1}} = u_{n+1}$.
As before, let $A_i$ be the event that the vertex $i$ is a cutting point. Define $\tilde{N} = XN$, where $X$ is a Bernoulli random variable with parameter $p$. Given any $\delta > 0$, we can find some large constant $C>0$ such that, for all $i$ large enough, it holds that
\begin{equation}
P(A_i) = \prod_{\ell = 1}^{i}P(\tilde{N} < \ell) \leq C \exp\left\{ -p\beta(1-\delta) \sum_{\ell = i_0}^i \frac{1}{\ell}\right\}.
\end{equation}
Thus, if $\beta > 1/p$, we can take $\delta$ small enough such that $\sum_{i\geq 0} P(A_i) < \infty$. The proof is concluded using the same argument as in the end of the proof of Theorem~\ref{perc3}.
\qed
\section*{Acknowledgements}
The research of P.A.G. is supported by S\~ao Paulo Research Foundation (FAPESP) grants 2020/02636-3 and 2017/10555-0. The research of B.N.B.L.\ was supported in part by CNPq grant 305811/2018-5, FAPEMIG (Programa Pesquisador Mineiro) and FAPERJ (Pronex E-26/010.001269/2016).
\section{Introduction}
Historically, research on patterned magnetic materials first focused on dense arrays of non-interacting and hence independent dots, mainly for magnetic memory purposes. For this reason, the magnetic samples often consisted of arrays of disks and even rings, in the vortex state, or, in general, of systems with some anisotropy but vanishing stray fields \cite{Demokritov, Hubert, Novosad, Hillebrands, Bailleul, Hertel, Vogel, Schult, JAP2008}. More recently, however, the interaction among dots in an array is seen as an opportunity rather than a limitation, and the possibility of exploiting the collective spin excitations ("magnons") as information carriers has been appreciated, in analogy to light in photonic crystals \cite{Kruglyak, Lenk}. For this reason, interacting nano-magnets are called "magnonic crystals". They can be tailored to obtain magnonic band diagrams with the possible occurrence of magnonic bandgaps. The research includes any other magnetic "meta-material", namely a material with periodic magnetic properties: antidot arrays \cite{Zivieri} and bicomponent continuous films \cite{Ma, Gubbiotti-2012} are other important examples. These systems are promising candidates for building novel versatile magnetic devices, which can operate as waveguides or memories, but also as tunable filters, depending on the amplitude of an external magnetic field \cite{Demidov, Chumak, Chumak2}. Furthermore, the possibility of modifying the propagation properties of the information carriers is important for building magnonic- and spin-logic devices \cite{Ding, Lenk, Khitun}.
Besides the experimental investigation (fabrication and characterization of a magnonic crystal as well as the measurement of its dynamical properties), great effort has been devoted to the theoretical understanding of the properties of spin waves in such systems, in particular to developing computational tools (either analytical or micromagnetic) suitable to simulate and predict their dynamic behavior \cite{Sukhostavets, yu, Puszkarski, Puszkarski2, Kraw, Kostylev, Verba, Montoncello-2012, Montoncello-2013, Giovannini, NMAG, OOMMF, Micromagus}. Generally speaking, for a given equilibrium magnetic configuration, there are two main approaches in this context. The first is the simulation of an infinite, periodic 1-D or 2-D system by applying periodic boundary conditions on a unit cell. In the case of the dynamical matrix method \cite{Giovannini}, the Landau-Lifshitz equation is cast in a matrix form, which depends on the given Bloch wave vector. The diagonalization of this matrix leads to the frequencies and spatial profiles of all possible modes in the spectrum at the given Bloch wave vector. In the second approach, a finite, though large, area of the magnetic system, excited by a pulse field with a given symmetry profile, is simulated. After collecting the time evolution of the magnetic response throughout the sample, the frequency and wave vector spectrum of the response is obtained through a temporal and spatial Fourier analysis \cite{Venkat, Krug}. Here, the excited mode type depends on the symmetry of the excitation pulse, while the minimum frequency and the frequency resolution depend on the pulse duration and the simulation time window. Comparing the two techniques, the latter has been widely preferred by experimentalists, especially for the possibility of controlling the input/output frequency bandwidth through the duration of the time pulse.
The approach we present in this paper belongs to the second category, but is innovative because the excitation field continuously feeds the system at one single frequency. Consequently, only the modes beating at that frequency are excited. Contrary to a pulse excitation, only a spatial Fourier analysis is required to extract the wave vector spectra at the considered frequency. By combining multiple simulations at different excitation frequencies, one can determine the dispersion relations in the frequency range of interest with any desired frequency resolution. Likewise, one can directly visualize the dynamic magnetization existing at a given frequency. Moreover, by studying the dispersion graphs in the time domain, one can directly extract the temporal phase information of the magnonic waves with respect to the microwave excitation. We will show that this point is crucial in understanding the real symmetry of the excited propagating mode. Finally, the continuous excitation allows spin waves to propagate over longer distances, overcoming damping problems typical for large arrays \cite{Vivas, NeusserDamp, KrugDamp}, and providing a larger resolution in wave vector space.
In the next section we illustrate the method, highlighting the differences with the classically adopted pulse excitation. We then apply the method to 2D magnonic crystals of different kinds: a square array of interacting magnetic dots, a continuous film with an antidot lattice, and a square bicomponent chessboard. In this way we both show the versatility of the method and validate it with (published) experimental data.
The purpose of this work is twofold: not only to present a "new" way of finding mode dispersions with a higher resolution, but also to make a "proposal" for experimentalists to continuously feed the system with some excitation. We will see that, as a consequence, odd spin wave modes that are classically hardly observable can in principle be detected by Brillouin light scattering. Furthermore, the continuous excitation also helps to overcome the damping criticality in magnonic crystals. In this respect, we should remark that collective spin wave properties can be exploited only in large magnonic crystals. Here, damping of the spin wave intensity is so critical that a signal, usually delivered by a pulse at the input side of the magnonic crystal, can hardly be detected at the output side, compromising the applicability of the technology. Conversely, when reverting to smaller magnonic crystals to overcome the complete signal damping, the collective propagation properties are suppressed by stationary modes of the lattice as a whole. Therefore, in the context of magnonic- and spin-logic devices, a continuous excitation of the spin waves is a promising alternative to address both the collective properties of spin modes and low signal damping.
\section{Methodology}
\subsection{General excitation}
Let us consider a general field $H^{exc}(t)$ which excites spin waves in a magnetic system.
The excited spin waves will depend on the frequency spectrum present in the excitation
\begin{equation}
\begin{split}
H^{exc}(t) &= \int C^{exc}(f) e^{\imath 2\pi f t}\,\mathrm{d}f\\
&= \int \left\{A^{exc}(f) + \imath B^{exc}(f)\right\}e^{\imath 2\pi f t}\,\mathrm{d}f\\
&=\int |C^{exc}(f)|\cos\{2\pi f t + \phi^{exc}(f)\}\,\mathrm{d}f \\
&\quad + \imath\int |C^{exc}(f)|\sin\{2\pi f t + \phi^{exc}(f)\}\,\mathrm{d}f\\
&=\int |C^{exc}(f)| e^{\imath[2\pi f t + \phi^{exc}(f)]}\,\mathrm{d}f.
\end{split}\label{FFT_exc}
\end{equation}
Indeed, a general excitation contains a distribution of frequencies with corresponding amplitudes
\begin{equation}
|C^{exc}(f)| = \sqrt{\left[A^{exc}(f)\right]^2 + \left[B^{exc}(f)\right]^2} \label{ampl_exc}
\end{equation}
and corresponding phases
\begin{equation}
\phi^{exc}(f)=\arctan\frac{B^{exc}(f)}{A^{exc}(f)}. \label{psi_exc}
\end{equation}
In terms of phasors rotating in frequency space, $|C^{exc}(f)|$ is the amplitude of the phasor, while $\phi^{exc}(f)$ is its initial phase angle.
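As a practical illustration, the amplitude (\ref{ampl_exc}) and phase (\ref{psi_exc}) of a sampled excitation field can be obtained with a discrete Fourier transform; the Python sketch below uses an illustrative sinc-shaped pulse and arbitrary sampling parameters, which are assumptions made only for this example and are not taken from any simulation discussed here.
\begin{verbatim}
import numpy as np

dt = 5e-12                                 # time step (s), illustrative
t = np.arange(4096) * dt
f0 = 10e9                                  # example cut-off frequency
h_exc = np.sinc(2.0 * f0 * (t - t[2048]))  # example H^exc(t)

C = np.fft.rfft(h_exc) * dt                # C^exc(f) = A^exc + i B^exc
f = np.fft.rfftfreq(t.size, dt)
amplitude = np.abs(C)                      # eq. (ampl_exc)
phase = np.arctan2(C.imag, C.real)         # eq. (psi_exc)
\end{verbatim}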
Due to the excitation, the magnetization $\mathbf{M}(\mathbf{r}, t)$ varies in time and space. We now study the spin waves propagating along the $x$-axis by analyzing the $z$-component of the magnetization along this axis. This is done by extracting the spin wave modes defined by a characteristic wave vector $k_x$ and frequency $f$ by means of a spatial and a temporal Fourier transform of $M_z(x,t)$ respectively.
\begin{equation}
\begin{split}
M_z&(x,t) = \iint C^{M_z}(k_x,f) e^{\imath 2\pi f t} e^{\imath 2\pi k_x x}\,\mathrm{d}f\,\mathrm{d}k_x\\
&= \iint \left\{A^{M_z}(k_x,f) + \imath B^{M_z}(k_x,f)\right\}\\
&\qquad\qquad\qquad\qquad \times e^{\imath 2\pi (ft+k_xx)}\,\mathrm{d}f\,\mathrm{d}k_x\\
& = \iint |C^{M_z}(k_x,f)| \,\times \\
&e^{\imath \left[ 2\pi(ft+k_xx) + \phi^{exc}(f) + \phi^{M_z}(k_x,f)\right]}\,\mathrm{d}f\mathrm{d}k_x.\label{FFT_Mz}
\end{split}
\end{equation}
Expression (\ref{FFT_Mz}) describes the dispersion relation of the different spin wave modes with given ($k_x,f$). The amplitude
\begin{equation}
|C^{M_z}(k_x,f)| = \sqrt{\left[A^{M_z}(k_x,f)\right]^2 + \left[B^{M_z}(k_x,f)\right]^2},\label{ampl_Mz}
\end{equation}
depends on the amplitude of the excitation field at that frequency $|C^{exc}(f)|$ as well as on the geometrical and material properties of the sample supporting this mode. Hence, since one is only interested in the latter, one should correct expression (\ref{ampl_Mz}) for the excitation field amplitude (\ref{ampl_exc}).
The phase has different contributions
\begin{equation}
\begin{split}
\phi^{exc}(f) + \phi^{M_z}&(k_x,f)\\&=\arctan\frac{B^{M_z}(k_x,f)}{A^{M_z}(k_x,f)} \label{psi_Mz}.
\end{split}
\end{equation}
It indeed depends on the phase of the considered frequency component in the excitation $\phi^{exc}(f)$. Moreover it contains an additional phase difference $\phi^{M_z}(k_x, f)$. The latter can be interpreted as a temporal phase difference between the mode and the excitation ---a phase $\phi^{M_z}(k_x,f)=0$ then stands for a mode beating in phase with the excitation--- or as a spatial phase difference between spin wave modes. Due to the propagating character of the spin waves, both interpretations are equivalent.
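In practice, the decomposition (\ref{FFT_Mz}) amounts to a two-dimensional discrete Fourier transform of the sampled magnetization; a minimal sketch is given below, where the grid sizes, steps and the random placeholder array are assumptions standing in for real micromagnetic output.
\begin{verbatim}
import numpy as np

Nx, Nt = 512, 2048
dx, dt = 5e-9, 5e-12                          # cell size (m), time step (s)
Mz = np.random.randn(Nt, Nx)                  # placeholder for M_z(x, t)

C = np.fft.fftshift(np.fft.fft2(Mz))          # C^{M_z}(k_x, f)
f  = np.fft.fftshift(np.fft.fftfreq(Nt, dt))  # frequency axis (rows)
kx = np.fft.fftshift(np.fft.fftfreq(Nx, dx))  # wave-vector axis (columns)

dispersion = np.abs(C)                        # |C^{M_z}(k_x, f)|, eq. (ampl_Mz)
total_phase = np.arctan2(C.imag, C.real)      # phi^exc + phi^{M_z}, eq. (psi_Mz)
# For a continuous single-frequency excitation only the row closest to the
# drive frequency carries signal, so a purely spatial FFT at fixed time
# already gives the wave-vector spectrum at that frequency.
\end{verbatim}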
\subsection{Pulse excitation}
In micromagnetic simulations, a Gaussian pulse profile $e^{-at^2}$ ($a$ real, positive) is often used to excite the spin modes. This pulse has a spectrum
\begin{equation}
C^{exc}(f) = \sqrt{\frac{\pi}{a}}e^{-\pi^2f^2/a},
\end{equation}
yielding a constant phase angle $\phi^{exc}(f) = 0$. In most cases, the pulse is applied on a restricted area of the sample. After applying the pulse, the magnetization dynamics is simulated for some time $T_{sim}$, sampling the magnetization data at $N_t$ time instants. This leads to a frequency resolution $\Delta f = 1/T_{sim}$
\begin{equation}
f_n = \frac{n}{T_{sim}} \quad n=0\ldots N_t/2-1. \label{frequencies}
\end{equation}
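As a simple numerical example, with an (assumed) simulation window these relations give:
\begin{verbatim}
# Assumed time window T_sim = 20 ns sampled at N_t = 4000 instants:
T_sim, N_t = 20e-9, 4000
df = 1.0 / T_sim                 # frequency resolution: 5.0e7 Hz = 50 MHz
f_max = (N_t / 2 - 1) / T_sim    # highest resolved frequency, about 100 GHz
print(df, f_max)
\end{verbatim}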
Only $N_t
_{+sc}+C_{Qsc} z_{+sc}}{\sqrt{c_{2} z_{+}^2}}\log{\left[ \frac{16 c_{2sc}^2 z_{+sc}^2 b_{sc}''}{c_{1}'^2 2b_{sc}(z_{+sc}-1)^2}\right]}
+\frac{r_{sc}^3}{a^2+Q^2}\frac{C_{Qsc}}{\sqrt{c_{2sc}}}\log{\left[ \frac{16 c_{2sc}^2 b_{sc}''}{c_{1sc}'^2 2 b_{sc}}\right] } \, ,
\end{split}
\end{equation}
from which we can read off the coefficients $\bar a$ and $b_D$ as follows
\begin{equation}\label{abar_kn}
\begin{split}
\bar{a}=&\frac{r_{sc}^3}{\sqrt{c_{2sc}}}\left[ \frac{C_{-sc}}{r_{sc} r_--(a^2+Q^2)}+\frac{C_{+sc}}{r_{sc} r_{+}-(a^2+Q^2)}\right]
\end{split}
\end{equation}
\begin{equation}\label{b_D_kn}
\begin{split}
b_D=&\bar{a} \log{\left[ \frac{8c_{2sc}^2 b_{sc}''}{c_{1sc}'^2 b_{sc}}\right]}\\
&+\frac{2r_{sc}^3}{\sqrt{c_{2sc}}} \left[ \frac{ C_{-sc}}{r_{sc} r_{-}-(a^2+Q^2)}\log{\left( 1-\frac{a^2+Q^2}{r_{sc} r_{-}}\right)}+ \frac{C_{+sc}+ C_{Qsc} z_{+sc} }{r_{sc} r_{+} -(a^2+Q^2)}\log{\left(1-\frac{a^2+Q^2}{r_{sc} r_{+}}\right)} \right]\, .
\end{split}
\end{equation}
They reduce to their counterparts in (\ref{abar_k}) and (\ref{bD}) respectively as $Q\to 0$.
As for the remaining contributions to the regular part, and in the SDL, we have
\begin{equation}\label{b_R_kn}
\begin{split}
& b_R=I_R(r_{sc})=\int_0^1 f_R(z,r_{sc}) dz\\
&=\frac{2r_{0}^3}{a^2+Q^2}\frac{C_-}{\sqrt{c_2}z_-}\log{\left(\frac{z_-}{z_--1} \frac{2c_2+c_3+2\sqrt{c_2+c_3+c_4}\sqrt{c_2}}{4c_2} \right)}\\
&+\frac{2r_{0}^3}{a^2+Q^2}\frac{C_-}{z_-\sqrt{c_2+c_3 z_-+c_4z^2_-}}\log{\left( \frac{z_--1}{z_-}\frac{\left(\sqrt{c_2+c_3 z_-+c_4 z_-^2}+ \sqrt{c_2}\right)^2-c_4 z_-^2}{\left(\sqrt{c_2+c_3 z_-+c_4 z_-^2}+ \sqrt{c_2+c_3+c_4}\right)^2-c_4 (z_--1)^2}\right)}\\
&+\frac{2r_{0}^3}{a^2+Q^2}\frac{C_+}{\sqrt{c_2}z_+}\log{\left(\frac{z_+}{z_+-1} \frac{2c_2+c_3+2\sqrt{c_2+c_3+c_4}\sqrt{c_2}}{4c_2} \right)}\\
&+\frac{2r_{0}^3}{a^2+Q^2}\frac{C_++C_Qz_+}{z_+\sqrt{{c_2+c_3 z_++c_4z^2_+}}}\log{\left( \frac{z_+-1}{z_+}\frac{\left(\sqrt{c_2+c_3 z_++c_4 z_+^2}+ \sqrt{c_2}\right)^2-c_4 z_+^2}{\left(\sqrt{c_2+c_3 z_++c_4 z_+^2}+ \sqrt{c_2+c_3+c_4}\right)^2-c_4 (z_+-1)^2}\right)}\\
&+\frac{2r_{0}^3}{a^2+Q^2}\frac{C_Q}{\sqrt{c_2}}\log{\left(\frac{z_+}{z_+-1}\right)}\Big\vert_{r_0=r_{sc}} \, .
\end{split}
\end{equation}
In the limit $Q\to 0$, where $C_Q$ and $c_4$ go to zero, the above expression of $b_R$ reduces to (\ref{bR_k}) in the Kerr case after some straightforward algebra. The coefficient $\bar{b}$ is obtained using (\ref{b_D_kn}) and (\ref{b_R_kn}) as
\begin{equation}\label{bbar_kn}
\begin{split}
\bar{b}=&-\pi+\bar{a} \log{\left[\frac{36}{4(1-c_{4sc}/c_{2sc})^2+4\sqrt{3}(1-c_{4sc}/c_{2sc})^{3/2}+3(1-c_{4sc}/c_{2sc})} \frac{8 c_{2sc}^2 b_{sc}''}{c_{1sc}'^2 b_{sc}}\right]}\\
&+\frac{r_{sc}^3}{\sqrt{c_{2sc}}}\frac{2(a^2+Q^2) C_{-sc}}{(a^2+Q^2-r_{sc}r_-)}\frac{\sqrt{3}}{{P_-}}\\
&\quad\quad \times \log\left[ \frac{-r_{sc}r_-}{a^2+Q^2-r_{sc}r_-} \frac{\left({P_-}+\sqrt{3} ({a^2+Q^2}) \right)^2-3 \left({a^2+Q^2}-{r_{sc} r_-} \right)^2 ({c_{4sc}}/{c_{2sc}})}{\left(P_-+(a^2+Q^2) (1-c_{4sc}/c_{2sc})^{1/2}\right)^2-3 {r^2_{sc} r^2_-} ({c_{4sc}}/{c_{2sc}})}\right]\\
&+\frac{r_{sc}^3}{\sqrt{c_{2sc}}}\frac{2 [(a^2+Q^2)C_{+sc} +(a^2+Q^2-r_{sc}r_+)C_{Qsc}]}{(a^2+Q^2-r_{sc}r_+)}\frac{\sqrt{3}}{{P_+}}\\
&\quad\quad \times \log{\left[ \frac{-r_{sc}r_+}{a^2+Q^2-r_{sc}r_+} \frac{\left({P_+}+\sqrt{3} ({a^2+Q^2}) \right)^2-3 \left({a^2+Q^2}-{r_{sc} r_+} \right)^2 ({c_{4sc}}/{c_{2sc}})}{\left(P_++(a^2+Q^2) (1-c_{4sc}/c_{2sc})^{1/2}\right)^2-3 {r^2_{sc} r^2_+} ({c_{4sc}}/{c_{2sc}})}\right]} \, .
\end{split}
\end{equation}
In the equation above, we have replaced $c_{3sc} $ by the linear combination of $c_{2sc}$ and $c_{4sc}$ in (\ref{cs_kn}), given by
\begin{equation}
\begin{split}
c_{3sc}=&-\frac{2}{3}c_{2sc}-\frac{4}{3}c_{4sc} \, .
\end{split}
\end{equation}
We also have
\begin{equation}
\begin{split}
P^2_{\pm}&=(a^2+Q^2+2r_{sc}r_{\pm})(a^2+Q^2)-(a^2+Q^2+r_{sc}r_{\pm})(a^2+Q^2-r_{sc}r_{\pm})(c_{4sc}/c_{2sc})\, .
\end{split}
\end{equation}
Combining (\ref{br0_kn}), (\ref{rsc_kn}) and (\ref{bsc_kn}), the coefficients $\bar a$ and $\bar b$ in (\ref{abar_kn}) and (\ref{bbar_kn}) can be analytically expressed as functions of the black hole's parameters in the SDL. Our results are plotted in Fig.(\ref{ab_kn_f}).
Again, notice that $\bar a>0$ but $\bar b<0$ for the parameters in the figure. Since the bending angle of light rays due to the charged black hole is suppressed as compared with that of the neutral black hole with the same impact parameter $b$, both $\bar a $ and $\vert \bar b \vert $ are found to increase with the charge $Q$.
\begin{figure}[h]
\centering
\includegraphics[width=0.88\columnwidth]{fig3.pdf}
\caption{
{
{The coefficients $\bar a$ and $\bar b$ as a function of the spin parameter $a/M$ for the Kerr black hole with $Q/M=0$ and the Kerr-Newman black hole with $Q/M=0.6$: (a) the coefficient $\bar a$, (b) the coefficient $\bar b$. The display of the plot follows the convention in Fig.(\ref{bsc_a}).}}}
\label{ab_k_f}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.88\columnwidth]{fig4.pdf}
\caption{
{The coefficients $\bar a$ and $\bar b$ as a function of the charge $Q/M$ for the Schwarzschild black hole with ${a/M}=0$, ${Q/M}=0$, the Reissner-Nordstr\"om black hole with ${a/M}=0$, and the Kerr-Newman black hole with ${a/M}=0.6$: (a) the coefficient $\bar a$, (b) the coefficient $\bar b$. The display of the plot follows the convention in Fig.(\ref{bsc_a}). }}
\label{ab_kn_f}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.63\columnwidth]{fig5.pdf}
\caption{
{ The SDL deflection angle (dotted lines) and the exact one (solid lines): (a)${a/M}=0.5$ and ${Q/M}=0.6$, (b) Error between them defined by $( \hat{\alpha}_{exact}-\hat{\alpha} )/ \hat{\alpha}_{exact} \times 100\%$; (c) ${a/M}=0.5$ and ${Q/M}=0.3$,(d) Error; (e) ${a/M}=0.9$ and ${Q/M}=0.3$, (f) Error; (g) ${a/M}=0.5$ and ${Q/M}=0.8$, (h) Error.}}
\label{alpha_com}
\end{figure}
It is then quite straightforward to check that the coefficients $\bar a$ and $\bar b$ in the Kerr-Newman case reduce to those in (\ref{abar_k}) and (\ref{bbar_k}) in the Kerr case by setting $c_4 \to 0$ in the limit of $Q \to 0$, which also leads to $P_{\pm} \to a \sqrt{a^2+2r_{sc}r_{\pm}}$.
To compare with the Reissner-Nordstr\"om black hole in \cite{Tsuka1,Tsuka2}, it is known that the impact parameter $b$ as a function of $r_0$ is
\begin{equation}\label{br0_Q}
b(r_0)=\frac{r_0^2}{\sqrt{Q^2-2Mr_0+r_0^2}}
\end{equation}
and the circular motion of light rays forms the photon sphere with the radius
\begin{equation}
r_{c}=\frac{3M+\sqrt{9M^2-8Q^2}}{2}\;.
\end{equation}
The critical impact parameter as a function of $r_{c}$ is given by
\begin{equation}
b_{c}=\frac{r_{c}^2}{\sqrt{Mr_{c}-Q^2}}\, .
\end{equation}
Notice the subscript is changed from $sc$ to $c$ since the same critical impact parameters are obtained for light rays in direct orbits and retrograde orbits in the case of the nonspinning black holes.
In the limit of $a \to 0$, we have $C_{-sc} \to 0$ in (\ref{CpmQ_kn}), using the definition of $r_-$ in (\ref{rpm_kn}). Thus, the coefficient $\bar a$ in (\ref{abar_kn}) can be further simplified using (\ref{CpmQ_kn}), (\ref{rpm_kn}) and $c_{1sc}=0$, giving
\begin{equation}
\bar{a}=\frac{r_{c}}{\sqrt{3M r_{c}-4 Q^2}}\, ,
\end{equation}
which reproduces the expression in \cite{Tsuka1,Tsuka2}.
As for the coefficient $\bar b$, in the limit of $a\to 0$, apart from $C_{-sc} \to 0$, $(a^2+Q^2)C_{+ sc} +(a^2+Q^2-r_{sc}r_+)C_{Qsc} \to 0$ as well.
So, the coefficient $\bar{b}$ in (\ref{bbar_kn}) receives a contribution only from the term proportional to $\bar a$. After substituting (\ref{br0_Q}) and (\ref{cs_kn}) in the limit of $a\to 0$ into (\ref{bbar_kn}) and going through nontrivial algebra, we indeed recover the compact analytical expression in \cite{Tsuka1,Tsuka2}:
\begin{equation}
\bar{b}= -\pi + \bar{a}\log{\left[ \frac{8(3Mr_{c}-4Q^2)^3}{M^2r_{c}^2(Mr_{c}-Q^2)^2} \left( 2\sqrt{Mr_{c}-Q^2}-\sqrt{3Mr_{c}-4Q^2} \right)^2 \right]}\, .
\end{equation}
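As a quick numerical cross-check (not part of the derivation), one can evaluate the recovered expressions in the Schwarzschild limit $Q\to 0$, where the well-known SDL coefficients $\bar a = 1$ and $\bar b = -\pi + \log\left[216(7-4\sqrt{3})\right] \simeq -0.40$ should be obtained; the short Python sketch below does exactly this, with $M=1$ assumed for convenience.
\begin{verbatim}
import math

M, Q = 1.0, 0.0
r_c = (3 * M + math.sqrt(9 * M**2 - 8 * Q**2)) / 2      # photon-sphere radius
abar = r_c / math.sqrt(3 * M * r_c - 4 * Q**2)
arg = (8 * (3 * M * r_c - 4 * Q**2)**3
       / (M**2 * r_c**2 * (M * r_c - Q**2)**2)
       * (2 * math.sqrt(M * r_c - Q**2)
          - math.sqrt(3 * M * r_c - 4 * Q**2))**2)
bbar = -math.pi + abar * math.log(arg)
print(abar, bbar)   # expected: 1.0 and about -0.4002
\end{verbatim}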
Figure \ref{alpha_com} shows good agreement between the obtained SDL expression and the exact one in \cite{Hsiao} computed numerically when $b$ approaches $b_{c}$ for some values of $a$ and $Q$.
In conclusion, we have obtained analytical expressions for the coefficients $\bar a $ and $\bar b$ in the form (\ref{hatalpha_as}) of the SDL deflection angle due to spherically nonsymmetric black holes, although they are not as simple as in the cases of the spherically symmetric black holes.
Additionally, the obtained expressions can reproduce the respective ones due to the Kerr, Reissner-Nordstr\"om black holes and also due to the Schwarzschild black hole by taking the appropriate limits of the black hole's parameters.
\section{Relativistic Images of Gravitational lens and applications to supermassive galactic black holes}
We consider the cases of the planar light rays with the lens diagram shown in Fig.(\ref{fig:arch_02}), where $d_L$ and $d_S$ are the distances of the lens (black hole) and the light source from the observer, and also $d_{LS}$ represents the distance between the lens and the source. The line joining the observer and the lens is considered as a reference optical axis.
The angular positions of the source and the image are measured from the optical axis, and are denoted by $\beta$ and $\theta$, respectively. The lens equation is given by
\begin{equation} \label{leneq}
\tan s\beta=\tan\theta- \frac{d_{LS}}{d_S}[\tan\theta+\tan(\hat{\alpha}-\theta)]\;,
\end{equation}
where $\hat{\alpha}$ is the deflection angle of light rays obtained from (\ref{alpha_I}) that can be expressed in terms of the impact parameter $b$ as the light rays approach to the black holes.
{In \cite{Eiroa}, it is mentioned that the lens equations apply for an observer and a source immersed in an asymptotically flat spacetime, and the Kerr and Kerr-Newman black holes indeed have asymptotically flat metrics.
Also, in the small $\beta$ and $\theta$ limits, we will see that the approximate lens equations found below are the same as those in \cite{Bozza_2003}, in which Kerr black holes are considered.}
In the SDL of our interest, when the light rays wind around the black hole $n$ times, the deflection angle $\hat \alpha$ can be approximated by (\ref{hatalpha_as}). The angle appearing in the lens equation should be within $2\pi$ and can be obtained from the deflection angle $\hat \alpha$ by subtracting $ 2n\pi$.
Together with the relation between the impact parameter $b$ and the angular position of the image given by
\begin{equation} \label{b_theta}
b=d_L \sin\theta\;,
\end{equation}
in Fig.(\ref{fig:arch_02}), we can solve the lens equation (\ref{leneq}) with a given angular position of the source $\beta$ for the angular position of the observed image $\theta$.
In the SDL, when the angular position of the source is small, $\theta$ is also expected to be small, together with the impact parameter $b$. Then the lens equation (\ref{leneq}) can be further simplified to
\begin{equation}\label{leneq_app}
s\beta\simeq \theta-\frac{d_{LS}}{d_{S}}[\hat{\alpha}(\theta)-2n\pi] \,
\end{equation}
and (\ref{b_theta}) can be approximated by $b\simeq d_L\theta$.
{This reduces to the lens equations in \cite{Bozza_2003}, in which the small angle limits are considered.}
According to \cite{Bozza3}, the zeroth order solution $\theta_{sn}^0$ is obtained from $\hat{\alpha}(\theta_{sn}^0)=2n\pi$. Using the SDL deflection angle in (\ref{hatalpha_as}) we have then
\begin{equation}\label{theta0}
\theta_{sn}^0=\frac{\vert b_{sc}\vert}{d_L}\left( 1+e^{\frac{\bar{b}-2n\pi}{\bar{a}}} \right)
\end{equation}
for $n=1,2,\cdots$.
The angular positions $\theta_{s n}$ decrease in $n$ and approach the asymptotic angular position $\theta_{s\infty}= \vert b_{sc}\vert/{d_L}$ as $n\to \infty$.
With the zeroth order solution (\ref{theta0}), the expansion of $\hat{\alpha}(\theta)$ around $\theta=\theta_{sn}^0$ is written explicitly as
\begin{equation}
\hat{\alpha}(\theta)=\hat{\alpha}(\theta_{sn}^0)-\frac{ \bar a}{ e^{(\bar{b}-2n\pi)/\bar{a}}} \frac{d_L d_{LS} } {\vert b_{sc}\vert d_S} (\theta-\theta_{sn}^0)+{O}(\theta-\theta_{s n}^0)^2\, .
\end{equation}
Then the approximate lens equation (\ref{leneq_app}) to the order $(\theta-\theta_{sn}^0)$ becomes
\begin{equation}
s\beta\simeq\theta_{sn}^0 +\left(1+\frac{ \bar a}{ e^{(\bar{b}-2n\pi)/\bar{a}}} \frac{d_L d_{LS}} {\vert b_{sc}\vert d_S}\right)(\theta-\theta_{sn}^0) \, .
\end{equation}
Solving for $\theta$, by keeping the lowest order term in $\vert b_{sc}\vert / d_L \ll 1 $, we find the angular position of the image as \cite{Bozza2}
\begin{equation}\label{theta_sn}
\begin{split}
\theta_{s n}\simeq\theta_{sn}^0+\frac{ e^{(\bar{b}-2n\pi)/\bar{a}}}{ \bar a} \frac{\vert b_{sc}\vert d_S}{d_{LS}d_L } (s \beta -\theta_{sn}^0) \;.
\end{split}
\end{equation}
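For concreteness, the short numerical sketch below illustrates how (\ref{theta0}) and (\ref{theta_sn}) are evaluated in practice. The values of $\bar a$, $\bar b$, $b_{sc}$ and $M/d_L$ used here are placeholders for illustration only, not the actual Kerr or Kerr-Newman coefficients derived in the previous section.
\begin{verbatim}
# Minimal sketch of evaluating eqs. (theta0) and (theta_sn) in the SDL.
# abar, bbar, b_sc and M_over_dL are placeholder values, not the actual
# Kerr/Kerr-Newman results of the previous section.
import numpy as np

abar, bbar   = 1.0, -0.40       # placeholder SDL coefficients
b_sc         = 5.2              # critical impact parameter, in units of M
M_over_dL    = 1.0e-10          # placeholder ratio GM/(c^2 d_L)
dLS_over_dS  = 0.5              # ratio d_LS/d_S used in the text
s, beta      = +1, 1.0e-10      # direct orbit, source angle in radians

def theta0(n):
    # zeroth order image position, eq. (theta0)
    return abs(b_sc) * M_over_dL * (1.0 + np.exp((bbar - 2*n*np.pi)/abar))

def theta_sn(n):
    # first order corrected image position, eq. (theta_sn)
    t0 = theta0(n)
    slope = np.exp((bbar - 2*n*np.pi)/abar) / abar \
            * abs(b_sc) * M_over_dL / dLS_over_dS
    return t0 + slope * (s*beta - t0)

for n in (1, 2, 3):
    print(n, theta_sn(n))       # approaches |b_sc|/d_L as n grows
\end{verbatim}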
We assume that the Kerr or Kerr-Newman black hole rotates clockwise, as shown in Fig.(\ref{fig:arch_02}).
The light rays emitted from the source circle around the black hole multiple times in the SDL, either along a direct orbit (red line) with $s=+1$, where the image and the source end up on the same side of the optical axis with the angular position $\theta_{+n}$, and/or along a retrograde orbit (blue line) with $s=-1$, where the image and the source are on opposite sides with the angular position $\theta_{-n}$.
We also define the angular position difference between the outermost image $\theta_{s1}$ and the asymptotic one near the optical axis as
\begin{equation}
\Delta \theta_{s}=\theta_{s1}-\theta_{s\infty} \,\, ,
\end{equation}
which is the quantity to compare with the observational resolution needed to distinguish the outermost relativistic image from the remaining ones.
\begin{comment}
The magnifications of the images $\mu$ defined to be the ratio of the flux of the image to the flux of the source can then be obtained from the ratio of the solid angles of the images and the sources, and are given and further approximated in the SDL as
\begin{equation}
\mu=\left( \frac{\sin{\beta}}{\sin{\theta}}\frac{d \beta }{d \theta} \right)^{-1} \approx \left( \frac{ \beta }{\theta}\frac{d \beta}{d \theta} \right)^{-1}\, .
\end{equation}
For the image with the angular position $\theta_{sn}$, the magnification $\mu_{sn}$ is found to be \cite{Bozza2}
\begin{equation} \label{mu_sn}
\mu_{sn}\simeq \frac{ e^{(\bar{b}-2n\pi)/\bar{a}}}{ \bar a} \frac{\vert b_{sc}\vert d_S}{d_L d_{LS} }\frac{\theta^0_{sn}}{ \beta }\,.
\end{equation}
Apart from very small $\beta$, where the source and the lens are extremely aligned, $\mu$ normally is very small due to $\vert b_{sc}\vert \ll d_L$. It implies the demagnification of the images and thus causes difficulties in observing the relativistic images.
\end{comment}
We now compute the angular positions of the relativistic images of the sources for $n=1$ (the outermost image) due to either Kerr or Kerr-Newman black holes, taking as an example the mass $M=4.1\times 10^6 M_{\odot}$ and the distance $d_L=26000\;{\rm ly}$ of the supermassive black hole Sagittarius A* at the center of our Galaxy.
We also take the ratio $d_{LS}/d_S=1/2$. In Table 1 (2), we consider the image and the source on the same (opposite) side of the optical axis, where the light rays travel along direct (retrograde) orbits, and choose $\beta \sim \theta_{\pm 1}$.
The angular positions of the relativistic images are computed by (\ref{theta_sn}). In the case of $\vert b_{sc}\vert \ll d_L$, $\theta_{sn}$ is not sensitive to $\beta$ but mainly determined by $\theta_{sn}^0$ in (\ref{theta0}).
For $\bar a$ and $\vert \bar b\vert$ of the magnitudes shown in Figs.(\ref{ab_k_f}) and (\ref{ab_kn_f}), we have $e^{-\frac{\vert \bar{b}\vert+2n\pi}{\bar{a}}} \ll 1$. The behavior of $\theta_{sn}$ thus depends mainly on $\vert b_{sc}\vert$ as a function of the angular momentum $a$ and the charge $Q$ of the black holes.
As discussed in the previous section, the angular momentum of the black hole induces a more repulsive effect on direct orbits than on retrograde orbits, as is clearly seen in the effective potential $ W_\text{eff}$ (\ref{eq:Weff_k}); the resulting $b_{+c} < \vert b_{-c}\vert $
yields asymmetric values $\theta_{+1} <\theta_{-1}$ for the same $a$ and $Q$. These features are shown in Tables 1 and 2.
Additionally, we notice that $\theta_{+1}$ ($\theta_{-1}$) decreases (increases) with $a$ for fixed $Q$, resulting from the decrease (increase) of $ b_{+c}$ ($\vert b_{-c} \vert$) as $a$ increases.
As for $\Delta \theta$, for the same $Q$, $\Delta \theta_+$ increases with $a$ whereas $\Delta \theta_-$ decreases with $a$.
In particular, $\Delta \theta_+$ can be increased from about $10^{-2}\,\mu{\rm as}$ for $a/M\sim 10^{-3}$ and $Q/M=10^{-3}$ to a value as high as $0.6\,\mu{\rm as}$ for $a/M =0.9$ and $Q/M=10^{-3}$, which certainly increases the observability by current very long baseline interferometry (VLBI) \cite{Ulv,Johnson_2020}.
As for the effects of finite $Q$, which also acts repulsively on the light rays as seen in the effective potential (\ref{eq:Weff_kn}), both $\theta_{+1}$ and $\theta_{-1}$ decrease with $Q$ for fixed $a$, resulting in a slight increase of $\Delta \theta_{\pm}$ as $Q$ increases.
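As a quick consistency check of the numbers quoted in Table 1, the asymptotic position $\theta_{+\infty}=\vert b_{+c}\vert/d_L$ can be reproduced from the tabulated $b/M$ together with the mass and distance of Sagittarius A* given above; the sketch below, using standard values of the physical constants, recovers about $26.4\,\mu$as from $b/M=5.2007$ of the first row.
\begin{verbatim}
# Consistency check of the theta_{+infty} column in Table 1:
# theta_infty = |b_sc| / d_L, with b_sc read off in units of M.
# b/M = 5.2007 is taken from the first row (a/M = Q/M = 1e-3).
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units
ly = 9.461e15                                # meters per light year
M  = 4.1e6 * Msun                            # Sgr A* mass used in the text
dL = 26000 * ly                              # lens distance used in the text
rg = G*M/c**2                                # length corresponding to "M"
theta_inf_rad  = 5.2007 * rg / dL            # small-angle approximation
theta_inf_muas = theta_inf_rad * (180/math.pi) * 3600 * 1e6
print(theta_inf_muas)                        # about 26.4 micro-arcseconds
\end{verbatim}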
\begin{table}
\begin{tabular}{ccccccc}
\hline
\hline
$a/{M}$ ~ & $Q/{M}$ ~ &$\theta_{+1}$ ($\mu$as) &$\hat{\alpha}$ &$b/M$ &$\theta_{+\infty}$ ($\mu$as) &$\Delta \theta_{+}$ ($\mu$as)\\
\hline
$10^{\tiny -3}$ & $10^{\tiny -3}$ &26.4231 & $2\pi+32.8135$ ($\mu$as) & $5.2007$ &26.3900 &0.0331\\
$ $ & $0.3$ &26.0217 & $2\pi+32.0563$ ($\mu$as) & $5.1217$ &25.9866 &0.0351\\
$ $ & $0.6$ &24.7179 & $2\pi+29.4336$ ($\mu$as) & $4.8651$ &24.6747 &0.0432\\
$ $ & $0.8$ &23.1445 & $2\pi+26.2837$ ($\mu$as) & $4.5554$ &23.0849 &0.0596\\
\hline
$0.5$ & $10^{\tiny -3}$ &20.9290 & $2\pi+21.8561$ ($\mu$as) & $4.1193$ &20.8119 &0.1171\\
$ $ & $0.3$ &20.4085 & $2\pi+20.8203$ ($\mu$as) & $4.0169$ &20.2758 &0.1327\\
$ $ & $0.6$ &18.6189 & $2\pi+17.2398$ ($\mu$as) & $3.6646$ &18.4049 &0.2140\\
$ $ & $0.8$ &16.0922 & $2\pi+12.1835$ ($\mu$as) & $3.1673$ &15.5372 &0.5550\\
\hline
$0.9$ & $10^{\tiny -3}$ &15.1170 & $2\pi+10.2354$ ($\mu$as) & $2.9754$ &14.4517 &0.6653\\
$ $ & $0.3$ &14.1818 & $2\pi+8.36638$ ($\mu$as) & $2.7913$ &13.2701 &0.9117\\
$ $ & $0.6$ &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ \\
$ $ & $0.8$ &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ \\
\hline
\hline
\end{tabular}
\caption{\label{tab:table-I}Relativistic images on the same side as the source, with the source angular position $\beta=10\;\mu$as, for light rays traveling along direct orbits as shown in Fig.(\ref{fig:arch_02}).}
\end{table}
\begin{table}
\begin{tabular}{ccccccc}
\hline
\hline
$a/{M}$ ~ & $Q/{M}$ ~ &$\theta_{-1}$ ($\mu$as) &$\hat{\alpha}$ &$b/M$ &$\theta_{-\infty}$ ($\mu$as) & $\Delta \theta_- $ ($\mu$as)\\
\hline
$10^{\tiny -3}$ & $10^{\tiny -3}$ &26.4433 & $2\pi+72.8286$ ($\mu$as) & $5.20464$ &26.4103 &0.0330\\
$ $ & $0.3$ &26.0422 & $2\pi+72.0654$ ($\mu$as) & $5.12569$ &26.0073 &0.0349\\
$ $ & $0.6$ &24.7395 & $2\pi+69.4844$ ($\mu$as) & $4.86931$ &24.6966 &0.0429\\
$ $ & $0.8$ &23.1680 & $2\pi+66.3165$ ($\mu$as) & $4.56000$ &23.1088 &0.0592\\
\hline
$0.5$ & $10^{\tiny -3}$ &31.1994 & $2\pi+82.4458$ ($\mu$as) & $6.14075$ &31.1862 &0.0132\\
$ $ & $0.3$ &30.8561 & $2\pi+81.6482$ ($\mu$as) & $6.07318$ &30.8422 &0.0139\\
$ $ & $0.6$ &29.7638 & $2\pi+79.5352$ ($\mu$as) & $5.85820$ &29.7479 &0.0159\\
$ $ & $0.8$ &28.5058 & $2\pi+76.9916$ ($\mu$as) & $5.61059$ &28.4866 &0.0192\\
\hline
$0.9$ & $10^{\tiny -3}$ &34.7203 & $2\pi+89.4397 $($\mu$as) & $6.83374 $ &34.7130 &0.0073\\
$ $ & $0.3$ &34.4063 & $2\pi+88.5984$ ($\mu$as) & $6.77195 $ &34.3988 &0.0075\\
$ $ & $0.6$ &$\cdots$ & $\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ \\
$ $ & $0.8$ &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ \\
\hline
\hline
\end{tabular}
\caption{\label{tab:table-II}Relativistic images on the opposite side of the source, with the source angular position $\beta=10\;\mu$as, for light rays traveling along retrograde orbits as shown in Fig.(\ref{fig:arch_02}).}
\end{table}
{Another application of the analytical expression for the deflection angle on the equatorial plane is the quasiequatorial gravitational lensing considered in \cite{Bozza_2003,Gyu_2007}. In this situation, the polar angle $\theta$ is set slightly away from $\theta=\frac{\pi}{2}$ and now becomes time dependent.
In the SDL, the deflection angle of light rays with this additional initial declination can also be cast into the form (\ref{hatalpha_as}), with the coefficients replaced by $\hat a$ and $\hat b$.
In particular, the coefficient $\hat a$, obtained for light rays slightly off the equatorial plane, can be related to the coefficient $\bar a$ on the equatorial plane through the $\omega$ function as
\begin{equation} \label{hata}
\hat a=\omega (r_{sc}) \, \bar a\;,
\end{equation}
where $\omega$ depends on $r$, and in turn depends on the deflection angle $\phi(r)$.
Notice that the above relation (\ref{hata}) involves $\omega$, which is evaluated at $r=r_{sc}$. In the case of the Kerr black hole, it is found that \cite{Bozza_2003} }
\begin{equation} \label{omega_k}
\omega(r_{sc})=\frac{(r_{sc}^2+a^2-2M r_{sc}) \sqrt{b_{sc}^2-a^2}}{2M a r_{sc}+b_{sc}(r_{sc}^2-2 M r_{sc})}\, ,
\end{equation}
{and thus in the Schwarzschild limit $a\rightarrow 0$ we have $\omega \rightarrow 1$.
Then, substituting (\ref{rsc_k}) and (\ref{bsc_k}) into (\ref{omega_k}), together with the expression for $\bar a$ in (\ref{abar_k}), relation (\ref{hata}) gives $\hat a=1$ in the Kerr case.
For the Kerr-Newman black hole, a straightforward calculation shows that the relation (\ref{hata}) still holds true.
The detailed derivations will appear in a future publication. Thus, the coefficient $\hat a$ is given analytically by the coefficient $\bar a$ in (\ref{abar_kn}), together with the $\omega$ function in the Kerr-Newman case given below,
\begin{equation}\label{omega_kn}
\omega (r_{sc})=\frac{(r_{sc}^2+a^2+Q^2-2 M r_{sc}) \sqrt{b_{sc}^2-a^2}}{- a (Q^2-2M r_{sc})+b_{sc}(r_{sc}^2+Q^2-2 M r_{sc})}\, .
\end{equation}
As $Q\rightarrow 0$, (\ref{omega_kn}) reduces to (\ref{omega_k}) in the Kerr case.
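A small symbolic check of these limits can be carried out directly from (\ref{omega_k}) and (\ref{omega_kn}); the sketch below (with $r$ and $b$ standing for $r_{sc}$ and $b_{sc}$, kept symbolic) verifies that the Kerr-Newman $\omega$ reduces to the Kerr one as $Q\rightarrow 0$ and that $\omega\rightarrow 1$ as $a\rightarrow 0$.
\begin{verbatim}
# Symbolic sanity checks of the omega function, eqs. (omega_k)/(omega_kn).
import sympy as sp

r, b, a, Q, M = sp.symbols('r b a Q M', positive=True)

omega_k  = (r**2 + a**2 - 2*M*r) * sp.sqrt(b**2 - a**2) \
           / (2*M*a*r + b*(r**2 - 2*M*r))
omega_kn = (r**2 + a**2 + Q**2 - 2*M*r) * sp.sqrt(b**2 - a**2) \
           / (-a*(Q**2 - 2*M*r) + b*(r**2 + Q**2 - 2*M*r))

print(sp.simplify(omega_kn.subs(Q, 0) - omega_k))   # expected: 0
print(sp.simplify(omega_k.subs(a, 0)))              # expected: 1
\end{verbatim}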
The behavior of $\hat a$ as a function of the charge $Q$ with the choices of the angular momentum $a$ for direct and retrograde orbits is displayed in Fig.(\ref{Fig6}). The value of $\hat a$ ($\hat a>1$) increases with $Q$ for both direct and retrograde orbits.
According to \cite{Bozza_2003,Gyu_2007}, the magnification of relativistic images may formally diverge when the angular position of the source is at a caustic point. The corresponding magnifying power close to the caustic points, for light rays winding around the black hole $n$ times, is denoted by $\bar \mu_n$, and the ratio between the values associated with two neighboring caustic points is
\begin{equation}
\frac{\bar \mu_{n+1}}{\bar \mu_n} \propto e^{-\pi/{\hat a}} \,
\end{equation}
depending only on $\hat a$.
In the Kerr case with $\hat a=1$, this ratio is independent of the black hole angular momentum $a$, whereas in the Kerr-Newman case with $\hat a >1$, shown in Fig.(\ref{Fig6}), the ratio decreases with $Q$ for both direct and retrograde orbits \cite{Gyu_2007}.
Here we only sketch some of the effects of the charge $Q$ of the black hole on the magnification of relativistic images. Obtaining the full picture of the caustic points and the magnification of relativistic images deserves an extensive study in which not only $\hat a$ but also $\hat b$ is computed, following \cite{Bozza_2003,Gyu_2007}. The further extension from the quasiequatorial plane to the full sky is also of great interest \cite{Gralla_2020a,Gralla_2020b,Johnson_2020}. }
%
\begin{figure}[h]
\centering
\includegraphics[width=0.88\columnwidth]{fig6.pdf}
\caption{
{
{The coefficient $\hat a$ as a function of the black hole charge $Q/M$ for the direct (retrograde) orbits with (a) $a/M=0.3$, (b) $a/M=0.6$. The display of the plot follows the convention in Fig.(\ref{bsc_a}).}}}
\label{Fig6}
\end{figure}
\begin{comment}
{The magnifications of the relativistic images at the angular position $\theta_{sn}$ can be calculated from (\ref{mu_sn}), and their dependence of the black hole's parameters is determined by the coefficients $\bar a$ and $\bar b$, mainly through $\mu \propto e^{-2n\pi/\bar a}$ since typically $\vert \bar b \vert <2\pi$.
In the case of $n=1$, the increase (decrease) of $\bar a$ for direct (retrograde) orbits as $a$ increases by fixing $Q$ drives the increase (decrease) in the magnification $\mu_{+1}$ ($\mu_{-1}$), as shown in Tables 1 and 2 respectively.
In particular, $\mu_{+1}$ for direct orbits can be increased from $10^{-13}$ with $a/M=10^{-3}$ and $Q/M=10^{-3}$ to $10^{-12}$ with $a/M =0.9 $ and $Q/M =10^{-3}$.
Similar discussions are also applied to the charge $Q$ of the black hole with fixed angular momentum $a$. The increase of $\bar a$ as $Q$ increases for both direct and retrograde orbits results in the increase of $\mu_{\pm}$ also shown in the Tables.
For example, $\mu_{\pm1}$ can also be increased from $10^{-13}$ with $a/M=10^{-3}$ and $Q/M=10^{-3}$ to $10^{-12}$ with $a/M =10^{-3}$ and $ Q/M=0.8$. The effects from angular momentum $a$ and charge $Q$ of the black holes with the enhanced $\mu$ values may increase the visibility of the created images.}
\end{comment}
\section{Summary and outlook}
In summary, the dynamics of light rays traveling around the Kerr black hole and the Kerr-Newman black hole, respectively, has been studied, with detailed derivations of the analytical expressions for $\bar a$ and $\bar b$ in the approximate form of the deflection angle in the SDL.
Various known results are recovered by taking the proper limits of the black hole parameters. The analytical expressions are then applied to compute the angular positions of the relativistic images due to supermassive galactic black holes.
We find that the effects of the angular momentum $a$ for direct orbits of light rays, and of the charge $Q$ for both direct and retrograde orbits, increase the angular separation of the outermost image from the others.
Although the observation of relativistic images is a very difficult task \cite{Ulv}, our studies show a potential increase in the observability of the relativistic images due to the effects of the angular momentum and the charge of the black holes.
Hopefully, relativistic images will be observed in the near future. Through the analytical results presented in this work, one can then reconstruct the black hole parameters that give rise to the observed strong lensing effects.
{For light rays traveling on the quasiequatorial plane, our analytical results on the equatorial plane can also be applied to roughly estimate the relative magnifications of relativistic images with the source near one of the caustic points, by taking into account the dynamics of the light rays in the polar angle. The work of investigating the structure of the caustic points arising from the effects of the charge $Q$ of the Kerr-Newman black holes, as well as the magnification of relativistic images, is in progress.}
Also, inspired by the recent advent of horizon-scale observations of astrophysical black holes, the properties of null geodesics become of great relevance to astronomy.
The recent works \cite{Gralla_2020a,Gralla_2020b,Johnson_2020} provide an extensive analysis of Kerr black holes. We also plan to extend the analysis of null geodesics to Kerr-Newman black holes, focusing on the effects of the charge of the black holes.
\begin{acknowledgments}
This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant No.109-2112-M-259-003.
\end{acknowledgments}
\section{Introduction}
The Kramers--Smoluchowski equation describes
the time evolution of the probability density of a particle undergoing a Brownian motion under the influence of a chemical potential -- see \cite{Berg} for
the background and references.
Mathematical treatments in the low temperature regime have been provided
by Peletier et al \cite{Pele} using $ \Gamma$-convergence, by
Herrmann--Niethammer \cite{herr} using {W}asserstein gradient flows and
by Evans--Tabrizian \cite{EvTa16}.
The purpose of this note is to explain how precise quantitative results can be
obtained using semiclassical methods developed by, among others, Bovier, Gayrard, Helffer, H\'erau, Hitrik, Klein, Nier and Sj\"ostrand \cite{BoGaKl05_01,He88_01,HeKlNi04_01,HeSj85_01,HeHiSj11_01} for the study of spectral asymptotics for Witten Laplacians \cite{Wi82}
and for Fokker--Planck operators. The semiclassical parameter $ h$ is
the (low) temperature. This approach is much closer in spirit to the heuristic
arguments in the physics literature \cite{Ey,Kra} and the main point is that
the Kramers--Smoluchowski equation {\em is} the heat equation for the Witten
Laplacian acting on functions.
Here
we give a self-contained presentation of the one dimensional case
and explain how the recent paper by the
first author \cite{Mi16} can be used to obtain results in higher dimensions.
Let $\varphi:\mathbb R^d\rightarrow \mathbb R$ be a smooth function.
Consider the corresponding Kramers-Smoluchowski equation:
\begin{equation}\label{eq:KS}
\left\{\begin{array}{c}
\partial_t\rho= \partial_x \cdot (\partial_x \rho+\epsilon^{-2}\rho \partial_x\varphi)\\
\rho_{\vert t=0}=\rho_0\phantom{********}
\end{array}
\right.
\end{equation}
where $\epsilon\in (0,1]$ denotes the temperature of the system and will be the small asymptotic parameter. Assume that
there exists $C>0$ and a compact $K\subset\mathbb R^d$ such that for all $x\in\mathbb R^d\setminus K$, we have
\begin{equation}\label{eq:hyp-croissancephi}
\vert\partial \varphi(x)\vert\geq\frac 1 C, \ \ \ \ \vert \partial^2_{x_i x_j}\varphi\vert\leq C\vert\partial\varphi\vert^2, \ \ \ \ \varphi(x)\geq C\vert x\vert.
\end{equation}
Suppose additionally that $\varphi$ is a Morse function, that is,
$ \varphi $ has isolated and non-degenerate critical points. Then, thanks to the above assumptions, the set ${\mathcal U}$ of critical points of $\varphi$ is finite.
For $p=0,\ldots,d$, we denote by ${\mathcal U}^{(p)}$ the set of critical points of index $p$.
Denote
\begin{equation}\label{eq:defm0m1}
\varphi_0:=\inf_{x \in \mathbb R^d}\varphi(x)=\inf_{\mathbf{m}\in{\mathcal U}^{(0)}}\varphi(\mathbf{m}) \ \ \text{ and } \ \ \sigma_1:=\sup_{\mathbf{s}\in{\mathcal U}^{(1)}}\varphi(\mathbf{s}).
\end{equation}
Thanks to \eqref{eq:hyp-croissancephi}, the sublevel set of $ \sigma_1 $
decomposes into finitely many connected components $E_1,\ldots,E_N$:
\begin{equation}
\label{eq:En}
\{x\in\mathbb R^d,\,\varphi(x)<\sigma_1\} = \bigsqcup_{n=1}^N E_n .
\end{equation}
We assume that
\begin{equation}\label{eq:hyptopol}
\inf_{x\in E_n}\varphi(x)=\varphi_0, \ \ \ \forall n=1,\ldots, N, \ \ \text{ and }\ \
\varphi (\mathbf{s} ) = \sigma_1 , \ \ \forall\mathbf{s} \in \mathcal U^{(1)} .
\end{equation}
which corresponds to the situation where $\varphi$ admits $N$ wells of the same height.
In order to avoid heavy notation, we also assume that for
$n=1,\ldots,N$ the minimum of $\varphi$ on $E_n$ is attained in a single point that we denote by $\mathbf{m}_n$.
The associated {\em Arrhenius number},
$S=\sigma_1-\varphi_0$, governs the long time dynamics of \eqref{eq:KS}. This is made quantitative in Theorem \ref{th1} below. More general assumptions
can be made as will be clear from the proofs. We restrict ourselves to the
case in which the asymptotics are cleanest.
\begin{figure}
\center
\scalebox{0.6}{ \input{figure1.pdf_tex}}
\caption{A one dimensional potential with interesting Kramers--Smoluchowski
dynamics.}
\label{fig1}
\end{figure}
To state the simplest result let us assume that $ d = 1 $ and that the second derivative of $\varphi$ is constant on the sets
${\mathcal U}^{(0)}$ and ${\mathcal U}^{(1)}$:
\begin{equation}\label{hyp:phisec-const}
\varphi''(\mathbf{m})=\mu,\ \ \ \forall \mathbf{m}\in {\mathcal U}^{(0)}\ \ \ \text{ and }\ \ \ \varphi''(\mathbf{s})=-\nu,\ \ \ \forall \mathbf{s}\in {\mathcal U}^{(1)}
\end{equation}
for some $\mu,\nu>0$.
The potential then looks like the one
shown in Fig.\ref{fig1}. We introduce the matrix
\begin{equation}
\label{eq:simpleA0} A_0 =
\frac \kappa \pi
\begin{pmatrix} 1&\!\! \! - 1 &0&0&\ldots&\ldots&\ldots&0\\
\!\! \! - 1 &2&\!\! \! - 1 &0&\ldots&\ldots&\ldots&0\\
0&\!\! \! - 1 &2&\!\! \! - 1 &0&\ldots&\ldots&0\\
\vdots&0&\!\! \! - 1 &2&\ddots&\ddots&\ldots&0\\
\vdots&\vdots&\ddots&\ddots&\ddots&\ddots&\ddots&0\\
\vdots&\vdots&\ddots&\ddots&\ddots&\ddots&\!\! \! - 1 &0\\
0&\vdots&\ddots&\ddots&\ddots&\!\! \! - 1 &2&\!\! \! - 1 \\
0&0&\ldots&\ldots&\ldots&0&\!\! \! - 1 &1
\end{pmatrix}.
\end{equation}
with $ \kappa=\sqrt{\mu \nu} $.
This matrix is positive semi-definite with a simple eigenvalue at $ 0 $.
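These spectral properties are easy to check numerically; the following sketch (an illustration only, for an arbitrary choice of $N$ and $\kappa$) builds the matrix of \eqref{eq:simpleA0} and verifies that it is positive semi-definite with a one-dimensional kernel spanned by the constant vector.
\begin{verbatim}
# Build A_0 of (eq:simpleA0) and check its spectral properties.
import numpy as np

def A0_simple(N, kappa):
    A = 2.0*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    A[0, 0] = A[-1, -1] = 1.0            # end corrections as in (eq:simpleA0)
    return (kappa/np.pi) * A

A = A0_simple(6, kappa=1.0)
print(np.linalg.eigvalsh(A))             # 0 = w_1 < w_2 <= ... (simple zero)
print(np.allclose(A @ np.ones(6), 0.0))  # constant vector spans the kernel
\end{verbatim}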
\begin{theorem}\label{th1}
Suppose that $ d = 1 $ and $ \varphi $ satisfies \eqref{eq:hyp-croissancephi},
\eqref{eq:hyptopol} and \eqref{hyp:phisec-const}.
Suppose that
\begin{equation}
\label{eq:rho0}
\rho_0=\left(\frac \mu{2\pi\epsilon^2}\right)^{\frac 12} \left( \sum_{n=1}^N \beta_n
\operatorname{1\negthinspace l}_{E_n} + \, r_\epsilon \right) e^{-\varphi/\epsilon^2} , \ \ \
\lim_{ \epsilon \to 0 } \Vert r_\epsilon\Vert_{L^\infty} = 0, \ \ \beta\in
\mathbb R^N ,
\end{equation}
then the solution to \eqref{eq:KS} satisfies, uniformly for $ \tau \geq 0 $,\begin{equation}\label{eq:weakconv}
\rho( 2\epsilon^2e^{S/\epsilon^2}\tau,x)\ \rightarrow \ \sum_{n=1}^N\alpha_n(\tau)\delta_{\mathbf{m}_n} (x) ,
\ \ \ \epsilon \to 0,
\end{equation}
in the sense of distributions in $ x $, where $ S = \sigma_1 - \varphi_0 $ and
where
$\alpha(\tau)=(\alpha_1,\ldots,\alpha_N)(\tau)$ solves
\begin{equation}
\label{eq:dotal} \partial_\tau \alpha= - A_0\alpha , \ \ \ \alpha ( 0 ) = \beta ,
\end{equation}
with $ A_0 $ given by \eqref{eq:simpleA0}.
\end{theorem}
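The limiting dynamics \eqref{eq:dotal} is a linear ODE and can be integrated by a matrix exponential; the sketch below (with placeholder initial weights $\beta$) illustrates that the total mass $\sum_n\alpha_n(\tau)$ is conserved and that $\alpha(\tau)$ relaxes to the projection of $\beta$ onto the kernel of $A_0$.
\begin{verbatim}
# Sketch of the limiting well dynamics (eq:dotal): alpha(tau) = exp(-tau A_0) beta.
# The initial weights beta are arbitrary placeholders.
import numpy as np
from scipy.linalg import expm

N, kappa = 4, 1.0
A0 = (kappa/np.pi) * (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
A0[0, 0] = A0[-1, -1] = kappa/np.pi      # end corrections as in (eq:simpleA0)
beta = np.array([1.0, 0.0, 0.0, 0.0])    # all mass initially in the first well
for tau in (0.0, 1.0, 10.0, 100.0):
    alpha = expm(-tau * A0) @ beta
    print(tau, alpha, alpha.sum())       # sum(alpha) is conserved
# As tau -> infinity, alpha tends to (sum(beta)/N)*(1,...,1),
# the projection of beta onto the kernel of A_0.
\end{verbatim}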
The above result is a generalization of Theorem 2.5 in \cite{EvTa16} where the case of
a double-well is considered and estimates are uniform on compact time intervals only.
We remark that the equation considered in \cite{EvTa16} also has an additional transverse variable (varying slowly).
A development of the methods presented in this note would also allow for
such variables. Since our goal is to explain
general ideas in a simple setting, we do not address this issue here.
A higher dimensional version of Theorem \ref{th1} is given in Theorem \ref{t:higher}
in \S \ref{s:get_high}. In this higher dimensional setting, the matrix $ A_0 $ becomes a graph Laplacian for a graph
obtained by taking minima as vertices and saddle points as edges. The same graph Laplacian was used by Landim et al \cite{Lamit} in
the context of a discrete model of the Kramers--Smoluchowski equation.
Using methods of \cite{EvTa16} and \cite{BoGaKl05_01},
Theorem \ref{t:higher} was also proved by Seo--Tabrizian \cite{Into}, but, as in the other
papers mentioned above, without uniformity in time (that is, with convergence uniform only for $ t
\in [ 0 , T ] $).
Here, Theorem \ref{th1} is a consequence of a more precise asymptotic formula
given in Theorem \ref{th:DynamWitten} formulated using the Witten Laplacian.
Provided that certain topological assumptions are satisfied (see \cite[\S 1.1, \S 1.2]{Mi16}) an analogue of Theorem \ref{th1} in higher dimensions is immediate --
see \S \ref{s:get_high} for geometrically interesting examples.
The need for the new results of \cite{Mi16} comes from the fact that
in the papers on the low-lying eigenvalues of the Witten Laplacian \cite{BoGaKl05_01,He88_01,HeKlNi04_01,HeSj85_01,HeHiSj11_01} the authors make assumptions on the relative positions of minima and of
saddle points. These assumptions mean that
the Arrhenius numbers are distinct and hence potentials for which
the Kramers--Smoluchowski dynamics \eqref{eq:weakconv} is interesting are excluded.
With this motivation the general case was studied in \cite{Mi16} and to explain
how the results of that paper can be used in higher dimensions we give
a self-contained presentation in dimension one.
We remark that we need specially prepared initial data \eqref{eq:rho0} to
obtain results valid for all times. Also, the sets $ E_n $ in the
statement can be replaced by any interval in $ E_n $ containing the minimum $\mathbf{m}_n$.
Theorem \ref{th:DynamWitten} also shows that
a weaker result is valid for any $ L^2 $ data: suppose that
$ \rho_0 \in L^2_\varphi:= L^2(e^{\varphi(x)/\epsilon^2}dx)$ and that
\[ \beta_n :=\left(\frac \mu{2\pi\epsilon^2}\right)^{\frac 14}
\int_{E_n } \rho_0 (x ) dx . \]
Then, uniformly for $ \tau \geq 0 $,
\begin{equation}
\label{eq:rhot}
\begin{gathered} \rho ( t, x ) = \left(\frac \mu{2\pi\epsilon^2}\right)^{\frac 14}
\sum_{n=1}^N \alpha_n ((2\epsilon^2)^{-1}e^{-S/\epsilon^2} t ) \operatorname{1\negthinspace l}_{E_n } ( x ) e^{-\varphi/\epsilon^2}+
r_\epsilon(t,x) ,\\
\Vert r_\epsilon(t)\Vert_{L^1(dx)}\leq C(\epsilon^{\frac 52}+\epsilon^{\frac 12}e^{-t\epsilon^2})\Vert\rho_0\Vert_{L^2_\varphi} , \ \ \
L^2 _\varphi := L^2 (\mathbb R , e^{\varphi(x)/\epsilon^2}dx) .
\end{gathered}
\end{equation}
where $ \alpha $ solves \eqref{eq:dotal}.
The proof of \eqref{eq:rhot}
is given at the end of \S \ref{s:pft}.
\medskip
\noindent
{\sc Acknowledgements.} We would like to thank Craig Evans and
Peyam Tabrizian for introducing us to the Kramers--Smoluchowski equation, and
Insuk Seo for informing us of reference \cite{Lamit}.
The research of LM was partially supported by
the European Research Council, ERC-2012-ADG, project number 320845 and by the France Berkeley Fund. MZ acknowledges partial support under the National Science Foundation grant DMS-1500852.
\section{Dimension one}
In this section we assume that the dimension is equal to $d=1$. This allows us to present self-contained proofs which indicate the
strategy for higher dimensions.
Ordering the sets $E_n$ so that
$\mathbf{m}_1<\mathbf{m}_2<\ldots<\mathbf{m}_N$, it follows that for all $n=1,\ldots, N-1$ the intersection $\bar E_n\cap \bar E_{n+1}=\{\mathbf{s}_n\}$ consists of a single local maximum. We assume additionally that there exist $\mu_n,\nu_k>0$ such that for
$ n=1,\ldots,N$ and $ k=1,\ldots,N-1 $,
\begin{equation}\label{eq:nothessphi}
\varphi''(\mathbf{m}_n)=\mu_n \ \ \text{ and } \ \ \varphi''(\mathbf{s}_k)=-\nu_k.
\end{equation}
Using this notation we define a symmetric $N\times N$ matrix:
$ A_0= (a_{ij} )_{1 \leq i,j \leq N } $, where (with the convention that
$ \nu_0 = \nu_N = 0 $)
\begin{equation}\label{eq:defA0}
\begin{split}
a_{ii} & = \pi^{-1}\mu_i^{\frac12} (\nu_{i-1}^{\frac12} + \nu_i^{\frac12} ) ,
\ \ \ a_{i,i+1}
= -\pi^{-1} \nu_i^{\frac12} \mu_{i}^{\frac14} \mu_{i+1}^{\frac14}, \ \
1 \leq i \leq N-1 ,
\end{split}
\end{equation}
and $ a_{i,i+k} = 0$ , for $k > 1$, $ a_{ij} = a_{ji}$.
The matrix $A_0$ is symmetric and positive semi-definite, and the eigenvalue $0$ has multiplicity $1$.
When $ \mu_j $'s and $ \nu_j $'s are all equal our matrix takes the particularly simple form \eqref{eq:simpleA0}.
First, observe that we can assume without loss of generality that $\varphi_0=0$.
Define the operator appearing on the right hand side of \eqref{eq:KS} by
\[ P: =\partial_x \cdot (\partial_x+\epsilon^{-2}\partial_x\varphi)\]
and denote
\[ h=2\epsilon^2 . \]
Then, considering $ e^{\pm \varphi/h } $ as a multiplication operator,
$$P=\partial_x\circ(\partial_x+2h^{-1}\partial_x\varphi)=\partial_x \circ e^{-2\varphi/h}\circ\partial_x\circ e^{2\varphi/h}$$
and
$$e^{\varphi/h}\circ P\circ e^{-\varphi/h}=-h^{-2}\Delta_\varphi, \ \ \
\Delta_\varphi : =-h^2\Delta+\vert\partial_x\varphi\vert^2-h\Delta\varphi .$$
Hence, $\rho$ is a solution of \eqref{eq:KS} if
$u(t,x):=e^{\varphi(x)/h}\rho(h^2t,x)$ is a solution of
\begin{equation}\label{eq:heatwitten}
\partial_tu =-\Delta_\varphi u, \ \ \
u_{\vert t=0}=u_0:=\rho_0e^{\varphi/h} .
\end{equation}
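As a small sanity check, the conjugation identity $e^{\varphi/h}\circ P\circ e^{-\varphi/h}=-h^{-2}\Delta_\varphi$ can be verified symbolically in dimension one; the sketch below is only an illustration of the algebra, with a generic smooth $\varphi$.
\begin{verbatim}
# Symbolic check (d=1) of e^{phi/h} P e^{-phi/h} = -h^{-2} Delta_phi,
# where P f = d/dx ( f' + (2/h) phi' f )  (recall eps^{-2} = 2/h) and
# Delta_phi u = -h^2 u'' + (phi')^2 u - h phi'' u.
import sympy as sp

x, h = sp.symbols('x h', positive=True)
phi = sp.Function('phi')(x)
u = sp.Function('u')(x)

def P(f):
    return sp.diff(sp.diff(f, x) + (2/h)*sp.diff(phi, x)*f, x)

lhs = sp.exp(phi/h) * P(sp.exp(-phi/h) * u)
Delta_phi = -h**2*sp.diff(u, x, 2) + sp.diff(phi, x)**2*u - h*sp.diff(phi, x, 2)*u
print(sp.simplify(sp.expand(lhs + Delta_phi/h**2)))   # expected: 0
\end{verbatim}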
In order to state our result for this equation, we denote
\begin{equation}\label{eq:defpsin}
\psi_n(x):=c_n(h)h^{-\frac 1 4}\operatorname{1\negthinspace l}_{E_n}(x)e^{-(\varphi-\varphi_0)(x)/h}, \ \ \ \forall n=1,\ldots,N ,
\end{equation}
where $c_n(h)$ is a normalization constant such that $\Vert \psi_n\Vert_{L^2}=1$. The method of steepest descent shows that
\begin{equation}\label{eq:asympcn}
c_n(h) \sim \sum_{k=0}^\infty c_{n,k}h^k, \ \
c_{n,0}=(\mu_n/\pi)^{\frac 14} , \ \ \forall n=1,\ldots N.
\end{equation}
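As an illustration of \eqref{eq:asympcn}, for a single quadratic well $\varphi(x)=\mu x^2/2$ (so that $\varphi''=\mu$ at the minimum) one can check numerically that the normalization constant approaches $(\mu/\pi)^{1/4}$ as $h\rightarrow 0$; the interval $[-1,1]$ below is a stand-in for $E_n$.
\begin{verbatim}
# Numerical illustration of (eq:asympcn) via the Laplace approximation.
import numpy as np
from scipy.integrate import quad

mu = 3.0
for h in (1e-1, 1e-2, 1e-3):
    integral, _ = quad(lambda x: np.exp(-mu*x**2/h), -1.0, 1.0)
    c_n = (h**(-0.5) * integral) ** (-0.5)   # from ||psi_n||_{L^2} = 1
    print(h, c_n, (mu/np.pi)**0.25)          # c_n -> (mu/pi)^{1/4}
\end{verbatim}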
We then define a map $\Psi:\mathbb R^N\rightarrow L^2$ by
\begin{equation}\label{eq:defPsi}
\Psi(\beta):=\sum_{n=1}^N\beta_n\psi_n, \ \ \ \forall \beta = ( \beta_1 , \ldots , \beta_N )
\in\mathbb R^N.
\end{equation}
The following theorem describes the dynamics of the above equation as $h\rightarrow 0$.
\begin{theorem}\label{th:DynamWitten}
There exist $C>0$ and $h_0>0$ such that for all $\beta\in\mathbb R^N$ and all $0 < h < h_0 $, we have
\begin{equation}\label{eq:DynamWitten1}
\Vert e^{-t\Delta_\varphi}\Psi(\beta)-\Psi(e^{-t\nu_h A}\beta)\Vert_{L^2}\leq Ce^{-\frac 1{Ch}} | \beta | , \ \ \ \forall t\geq 0,
\end{equation}
where $\nu_h=he^{-2S/h}$, $ S = \sigma_1 - \varphi_0 $, and $A=A(h)$ is a real symmetric positive semi-definite matrix having a classical expansion
$
A\sim \sum_{k=0}^\infty h^k A_k
$
with
$A_0$ given by \eqref{eq:defA0}. In addition,
\begin{equation}\label{eq:DynamWitten2}
\Vert e^{-t\Delta_\varphi}\Psi(\beta)-\Psi(e^{-t\nu_h A_0}\beta)\Vert_{L^2}\leq Ch | \beta |
\end{equation}
uniformly with respect to $t\geq 0$.
\end{theorem}
We first show how
\begin{proof}[Theorem \ref{th:DynamWitten} implies Theorem \ref{th1}]
First recall that we assume here $\mu_n=\mu$ for all $n=1,\dots N$ and $\nu_k=\nu$ for all $k=1,\ldots N-1$.
Suppose that $\rho$ is the solution to \eqref{eq:KS} with
$\rho_0$ as in Theorem \ref{th1}. Then $u(t,x):=e^{\varphi(x)/h}\rho(h^2t,x)$ is a solution of \eqref{eq:heatwitten}, that is
$u(t)=e^{-t\Delta_\varphi}u_0$ with
\begin{equation}
\begin{split}
u_0&=\rho_0e^{\varphi/2\epsilon^2}
=\left(\frac \mu{2\pi\epsilon^2}\right)^{\frac 12} \left(\sum_{n=1}^N\beta_n\operatorname{1\negthinspace l}_{E_n} +r_\epsilon\right)e^{-\varphi/2\epsilon^2}\\
&=\left(\frac \mu{\pi h}\right)^{\frac 12}\left( \sum_{n=1}^N\beta_n \operatorname{1\negthinspace l}_{E_n}
+ r_h\right) e^{-\varphi/h}
\end{split}
\end{equation}
Since, $c_n(h)=(\mu/\pi)^{\frac 14}+\ooo(h)$ it follows that
$$
u_0=(\mu/\pi h)^{\frac 14} \Psi(\beta)+\tilde r_h, \ \ \
\tilde r_h=
\left(\ooo(h^{\frac 12})+h^{-\frac 12}r_h \right)e^{-\varphi/h} . $$
Since $h^{-\frac 12}e^{-\varphi/h}=\ooo_{L^1}(1)$,
we have
$\tilde r_h\rightarrow 0$ in $L^1$ when $h\rightarrow 0$.
Hence, it follows from \eqref{eq:DynamWitten2} (Theorem \ref{th:DynamWitten}) that
\begin{equation*}
\begin{split}
\rho(h^2t,x)&=e^{-\varphi(x)/h}u(t,x)=e^{-\varphi(x)/h}e^{-t\Delta_\varphi}\left((\mu/\pi h)^{\frac 14} \Psi(\beta)+\tilde r_h\right)\\
&=e^{-\varphi(x)/h}\left((\mu/\pi h)^{\frac 14} \Psi(e^{-t\nu_h A_0}\beta)+e^{-t\Delta_\varphi}\tilde r_h+\ooo_{L^2}(h)\right)
\end{split}
\end{equation*}
With the new time variable $s=t \nu_h$, we obtain
\begin{equation}\label{eq:limKS1}
\rho(she^{2S/h},x)=e^{-\varphi(x)/h}\left((\mu/\pi h)^{\frac 14}\Psi(e^{-s A_0}\beta)+e^{-t\Delta_\varphi}\tilde r_h+\ooo_{L^2}(h)\right)
\end{equation}
and denoting $\alpha(s)=e^{-s A_0}\beta$, we get
$$
e^{-\varphi(x)/h}(\mu/\pi h)^{\frac 14}\Psi(e^{-s A_0}\beta)= (\mu/\pi )^{\frac 14}\sum_{n=1}^N \alpha_n(s)h^{-\frac 12}c_n(h)\operatorname{1\negthinspace l}_{E_n}(x)e^{-2\varphi(x)/h}.
$$
On the other hand,
$
h^{-\frac 12}\operatorname{1\negthinspace l}_{E_n}(x)e^{-2\varphi(x)/h}\longrightarrow({\pi}/\mu)^{\frac 12}\delta_{\mathbf{m}_n}
$, as $ h \to 0 $,
in the sense of distributions.
Since $c_n(h)=(\mu/\pi)^{\frac 14}+\ooo(h)$, it follows that
\begin{equation}\label{eq:cvdistrib1}
e^{-\varphi/h}(\mu/{\pi h})^{\frac 14}\Psi(e^{-s A_0}\beta)\longrightarrow \sum_{n=1}^N\alpha_n(s)\delta_{\mathbf{m}_n}
\end{equation}
when $h\rightarrow0$.
Moreover, since $e^{-t\Delta_\varphi}$ is bounded by $1$ on $L^2$, we have
$$\Vert h^{-\frac 12} e^{-\varphi/h}e^{-t\Delta_\varphi}(r_h e^{-\varphi/h})\Vert_{L^1}
\leq \Vert r_h\Vert_{L^\infty}\Vert h^{-\frac 14} e^{-\varphi/h}\Vert_{L^2}^2\leq C\Vert r_h\Vert_{L^\infty}
$$
and recalling that $ r_h\rightarrow 0$ in $L^\infty$,
we see that
\begin{equation}\label{eq:cvdistrib2}
e^{-\varphi(x)/h}(e^{-t\Delta_\varphi}\tilde r_h+\ooo_{L^2}(h))\longrightarrow 0
\end{equation}
in the sense of distributions.
Inserting \eqref{eq:cvdistrib1} and \eqref{eq:cvdistrib2} into \eqref{eq:limKS1} and recalling that $h=2\epsilon^2$, we obtain \eqref{eq:weakconv}.
\end{proof}
\subsection{Witten Laplacian in dimension one}
The Witten Laplacian is particularly simple
in dimension one, but one can already observe features which play a crucial role
in the general study.
For more information we refer to
\cite[\S 11.1]{CyFrKiSi87_01} and \cite{He88_01}.
We first consider $\Delta_\varphi$ acting on $ C_c^\infty (\mathbb R ) $ and recall a supersymmetric structure which is the starting point of our analysis:
\begin{equation}\label{eq:susy1D}
\Delta_\varphi=d_\varphi^*\circ d_\varphi
\end{equation}
with $d_\varphi=e^{-\varphi/h}\circ h\partial_x\circ e^{\varphi/h}=h\partial_x+\partial_x\varphi$ and
$d_\varphi^*=-h\partial_x+\partial_x\varphi=-d_{-\varphi}$.
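A quick symbolic check of the factorization \eqref{eq:susy1D} with these expressions for $d_\varphi$ and $d_\varphi^*$ (again only an illustration, for a generic smooth $\varphi$ in dimension one):
\begin{verbatim}
# Check d_phi^* d_phi u = -h^2 u'' + (phi')^2 u - h phi'' u on a test function.
import sympy as sp

x, h = sp.symbols('x h', positive=True)
phi, u = sp.Function('phi')(x), sp.Function('u')(x)

d_phi      = lambda f:  h*sp.diff(f, x) + sp.diff(phi, x)*f
d_phi_star = lambda f: -h*sp.diff(f, x) + sp.diff(phi, x)*f

delta_phi = -h**2*sp.diff(u, x, 2) + sp.diff(phi, x)**2*u - h*sp.diff(phi, x, 2)*u
print(sp.simplify(d_phi_star(d_phi(u)) - delta_phi))   # expected: 0
\end{verbatim}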
From this square structure, it is clear that $\Delta_\varphi$ is non negative
and that we can use the Friedrichs extension to define a self-adjoint operator $ \Delta_\varphi $ with
domain denoted $D(\Delta_\varphi) $. Moreover, it follows from \eqref{eq:hyp-croissancephi} that there exists $c_0,h_0>0$ such that for $ 0 < h < h_0 $,
\begin{equation}\label{eq:locspecess}
\sigma_{\rm{ess}}(\Delta_\varphi)\subset [c_0,+\infty) .
\end{equation}
Therefore, $\sigma(\Delta_\varphi)\cap [0,c_0)$ consists of
eigenvalues of finite multiplicity and with no accumulation points except possibly
$c_0$.
The following proposition gives a preliminary description of the low-lying eigenvalues.
\begin{proposition}\label{prop:roughestimspectre}
There exist $\varepsilon_0,h_0>0$ such that for any $h\in (0,h_0]$,
$\Delta_{\pm \varphi} $ has exactly $N_\pm$ eigenvalues $0 \leq \lambda^\pm_1\leq\lambda_2^\pm\leq\ldots\leq\lambda_{N_\pm}^\pm$ in the interval $[0, \epsilon_0 h ]$, where $N_+=N$ and $N_-=N-1$ are the numbers of local minima of $\varphi$ and $-\varphi$, respectively. Moreover,
for any $ \epsilon > 0 $ there exists $ C $ such that
\begin{equation}
\lambda_n^\pm(h)\leq Ce^{-(S-\epsilon)/h},
\end{equation}
where $S = \sigma_1 - \varphi_0 $.
\end{proposition}
\noindent
{\bf Remark.}
The proof applies to any $ \varphi $ which satisfies
the first {\em two} inequalities in
\eqref{eq:hyp-croissancephi}. If one assumes additionally that
$\varphi(x)\geq C\vert x\vert $ for $ |x| $ large, then $e^{-\varphi/h} \in
D(\Delta_\varphi)$.
Since $d_\varphi(e^{-\varphi/h})=0$ it follows that $ \lambda_1 ^+= 0 $.
\medskip
\begin{proof} This is proved in \cite[Theorem 11.1]{CyFrKiSi87_01} with
$ h^{\frac32} $ in place of $ \epsilon_0 h $. The proof applies in any
dimension and we present it in that greater generality for $ \varphi $ satisfying
\[ \vert\partial \varphi(x)\vert\geq\frac 1 C, \ \ \ \ \vert \partial^2_{x_i x_j} \varphi\vert\leq C\vert\partial\varphi\vert^2 .\]
The fact that there exist at least $N$ eigenvalues in the interval
$[0,C e^{-(S- \epsilon )/h}]$ is a direct consequence of the existence of $N$ linearly independent quasi-modes
-- see Lemma \ref{lem:propf0} and \eqref{eq:estimdeltaphifn0} below.
To show that $ N $ is the exact number of eigenvalues in
$ [ 0 , \epsilon_0 h ) $
it suffices to find an $N$-dimensional vector space $V$ and $\varepsilon_0>0$
such that the operator
$\Delta_\varphi$ is bounded from below by $\varepsilon_0 h$ on $V^\bot$
-- see for instance \cite[Theorem C.15]{Zw12_01}.
To find $ V$ we introduce a family of harmonic oscillators associated to
minima $\mathbf{m}\in{\mathcal U}^{(0)}$ and obtained by replacing $\varphi$ by its harmonic approximation
in the expression for $\Delta_\varphi$:
$$
H_\mathbf{m} :=-h^2\Delta+\vert\varphi''(\mathbf{m})(x-\mathbf{m})\vert^2-h\Delta\varphi(\mathbf{m}), \ \
\mathbf{m} \in \mathcal U^{(0)} .
$$
The spectrum of this operator is known explicitly, see \cite[Sect 2.1]{He88_01} with the simple eigenvalue $ 0 $ at the bottom. We denote by $e_\mathbf{m}$ the
normalized eigenfunction, $ H_\mathbf{m} e_\mathbf{m} = 0 $. The other eigenvalues of $H_\mathbf{m}$ are bounded from below by $c_0h$ for some $c_0>0$.
Let $\chi\in C^\infty_c(\mathbb R^d; [0,1] )$ be equal to $ 1 $ near $ 0 $
and satisfy $ ( 1 - \chi^2 )^{\frac12} \in C^\infty ( \mathbb R^d ) $.
We define $\chi_\mathbf{m}(x)=\chi((x-\mathbf{m})/\sqrt{Mh})$ where $M>0$ will be chosen later. For $h$ small enough, the functions $\chi_\mathbf{m}$ have disjoint supports and hence the function $\chi_\infty$ defined by
$1-\chi_\infty^2=\sum_{\mathbf{m}\in{\mathcal U}^{(0)}}\chi_\mathbf{m}^2$ is smooth. We define the $N$-dimensional vector space
$$
V= {\rm{span}} \, \{ \chi_\mathbf{m} e_\mathbf{m}, \ \mathbf{m}\in{\mathcal U}^{(0)} \} .
$$
The proof is completed if we show that there exist $\varepsilon_0,h_0>0$ such that
\begin{equation}\label{eq:minorDeltaphi}
\<\Delta_\varphi u,u\>\geq \varepsilon_0 h\Vert u\Vert^2,\; \ \
\forall u\in V^\bot \cap D ( \Delta_\varphi ) , \ \ \ \forall h \in]0, h_0] .
\end{equation}
To establish \eqref{eq:minorDeltaphi} we use
the following localization formula the verification of which is left to the reader
(see \cite[Theorem 3.2]{CyFrKiSi87_01}):
$$
\Delta_\varphi=
\sum_{\mathbf{m}\in{\mathcal U}^{(0)}\cup\{\infty\}} \chi_\mathbf{m}\circ \Delta_\varphi \circ\chi_\mathbf{m}-h^2\sum_{\mathbf{m}\in{\mathcal U}^{(0)}\cup\{\infty\}} \vert\nabla\chi_\mathbf{m}\vert^2.
$$
Since, $\nabla\chi_\mathbf{m}=\ooo(({M h})^{-\frac12})$, this implies, for $ u \in D ( \Delta_\varphi ) $, that
\begin{equation}\label{eq:minorDelta0}
\<\Delta_\varphi u,u\>=\<\Delta_\varphi \chi_\infty u,\chi_\infty u\>+\sum_{\mathbf{m}\in{\mathcal U}^{(0)}}\<\Delta_\varphi \chi_\mathbf{m} u,\chi_\mathbf{m} u\>
+\ooo(hM^{-1}\Vert u\Vert^2).
\end{equation}
On the support of $\chi_\infty$ we have $\vert\nabla\varphi\vert^2-h\Delta\varphi\geq(1-\ooo(h))\vert\nabla\varphi\vert^2\geq c_1M h$ for some $c_1>0$, and hence
\begin{equation}\label{eq:minorDelta1}
\<\Delta_\varphi \chi_\infty u,\chi_\infty u\>\geq Mc_1h\Vert \chi_\infty u\Vert^2
\end{equation}
On the other hand, near any $\mathbf{m}\in{\mathcal U}^{(0)}$, $\vert \nabla\varphi(x)\vert^2=\vert\varphi''(\mathbf{m})(x-\mathbf{m})\vert^2+\ooo(|x-\mathbf{m}|^3)$
and $\varphi''(x)=\varphi''(\mathbf{m})+\ooo(|x-\mathbf{m}|)$. Since on the support of $\chi_\mathbf{m}$ we have
$ | x - \mathbf{m} |\leq \sqrt{Mh} $, it follows that
\begin{equation}
\label{eq:chimu}
\<\Delta_\varphi \chi_\mathbf{m} u,\chi_\mathbf{m} u\>=\<H_\mathbf{m} \chi_\mathbf{m} u,\chi_\mathbf{m} u\>+\ooo((Mh)^{\frac 32}\Vert u\Vert^2).
\end{equation}
We now assume that $u \in D ( \Delta_\varphi ) $
is orthogonal to $\chi_\mathbf{m} e_\mathbf{m}$ for all $\mathbf{m}$. Then $\chi_\mathbf{m} u$ is orthogonal to $e_\mathbf{m}$. Since the spectral gap of $ H_\mathbf{m} $ is bounded from below by $ c_0 h $, \eqref{eq:chimu} shows that
\begin{equation}\label{eq:minorDelta2}
\<\Delta_\varphi \chi_\mathbf{m} u,\chi_\mathbf{m} u\>\geq c_0 h\Vert\chi_\mathbf{m} u\Vert^2 + \ooo((Mh)^{\frac 32}\Vert u\Vert^2) , \ \ \ \forall \mathbf{m} \in \mathcal U^{(0)} .
\end{equation}
Combining this with \eqref{eq:minorDelta0}, \eqref{eq:minorDelta1} and \eqref{eq:minorDelta2} gives
\begin{equation*}
\begin{split}
\<\Delta_\varphi u,u\>&\geq c_0 h\sum_{\mathbf{m}\in{\mathcal U}^{(0)}\cup\{\infty\}}\Vert\chi_\mathbf{m} u\Vert^2+\ooo(hM^{-1}\Vert u\Vert^2)+\ooo((Mh)^{\frac 32}\Vert u\Vert^2)\\
&\geq c_0h\Vert u\Vert^2+\ooo(hM^{-1}\Vert u\Vert^2)+\ooo((Mh)^{\frac 32}\Vert u\Vert^2).
\end{split}
\end{equation*}
Taking $M$ large enough completes the proof of \eqref{eq:minorDeltaphi}.\end{proof}
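The role of the harmonic approximation in the argument above can be made concrete in dimension one: for $\varphi''(\mathbf m)=\mu>0$ the Gaussian $e^{-\mu(x-\mathbf m)^2/2h}$ is annihilated by $H_{\mathbf m}$, as the following sketch (with $\mathbf m=0$, purely for illustration) verifies symbolically.
\begin{verbatim}
# Check that the Gaussian ground state is annihilated by the harmonic
# approximation H_m = -h^2 d^2/dx^2 + mu^2 x^2 - h*mu  (d=1, m=0).
import sympy as sp

x, h, mu = sp.symbols('x h mu', positive=True)
e0 = sp.exp(-mu*x**2/(2*h))                           # unnormalized ground state
H_e0 = -h**2*sp.diff(e0, x, 2) + mu**2*x**2*e0 - h*mu*e0
print(sp.simplify(H_e0))                              # expected: 0
\end{verbatim}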
We denote by $E^{(0)}$ the
subspace spanned by the eigenfunctions associated with these low-lying eigenvalues and by
\begin{equation}\label{eq:definPi0}
\Pi^{(0)} :=\operatorname{1\negthinspace l}_{[0,\varepsilon_0h ]}(\Delta_\varphi)
\end{equation}
the spectral projection onto $E^{(0)}$. This projector is
expressed by the standard contour integral
\begin{equation} \label{eq:cauchyPi0}
\Pi^{(0)} = \frac{1}{2\pi i} \int_{\partial B ( 0 , \delta \varepsilon_0h ) }
(z-\Delta_\varphi)^{-1} d z ,
\end{equation}
where $0 < \delta < 1$ is arbitrary and fixed.
In our analysis, we will also need the operator $\Delta_{-\varphi}$, noting
that in dimension one $\Delta_{-\varphi}$ is the Witten Laplacian on $1$-forms.
Since $-\varphi$ has exactly $N-1$ minima (given by the $N-1$ maxima of $\varphi$), it follows from Proposition \ref{prop:roughestimspectre} that there exists $\varepsilon_1>0$ such that
$\Delta_{-\varphi}$ has $N-1$ eigenvalues in $[0,\varepsilon_1 h]$ and that these eigenvalues
are actually exponentially small. Observe that because of the condition $\varphi(x)\geq C\vert x\vert$ at infinity, the function $e^{\varphi/h}$ is not square integrable. Consequently, unlike in the case of $ \Delta_{\varphi } $, we cannot conclude that
the lowest eigenvalue is equal to $0$.
We denote by $E^{(1)}$ the
subspace spanned by the eigenfunctions associated with these low-lying eigenvalues of $\Delta_{-\varphi}$ and by
$\Pi^{(1)}$ the corresponding projector onto $E^{(1)}$,
\begin{equation}\label{eq:definPi1}
\Pi^{(1)}=\operatorname{1\negthinspace l}_{[0,\varepsilon_1h]}(\Delta_{-\varphi}).
\end{equation}
Similarly to \eqref{eq:cauchyPi0}, we have
\begin{equation} \label{eq:cauchyPi1}
\Pi^{(1)} = \frac{1}{2\pi i} \int_{\partial B ( 0 , \delta \varepsilon_1 h ) } (z-\Delta_{-\varphi})^{-1} d z ,
\end{equation}
for any $0 < \delta <1$.
\subsection{Supersymmetry} The key point in the analysis is the following intertwining relations, which follow directly from \eqref{eq:susy1D}:
\begin{equation}\label{eq:intertwin}
\Delta_{-\varphi} \circ d_\varphi=d_\varphi\circ \Delta_\varphi
\end{equation}
and its adjoint relation
\begin{equation}\label{eq:intertwinadj}
d_\varphi^*\circ\Delta_{-\varphi}= \Delta_\varphi\circ d_\varphi^*.
\end{equation}
From these relations we deduce that $d_\varphi(E^{(0)})\subset E^{(1)} $ and $d_\varphi^*(E^{(1)})\subset E^{(0)} $. Indeed, suppose that
$\Delta_\varphi u=\lambda u$,
with $u\neq 0$ and $\lambda\in[0,\varepsilon_0 h]$. Then, we see from \eqref{eq:intertwin} that
$$
\Delta_{-\varphi}(d_\varphi u)=d_\varphi( \Delta_\varphi u)=\lambda d_\varphi u.
$$
Therefore, either $d_\varphi u$ vanishes, and then it obviously belongs to $E^{(1)}$, or $d_\varphi u\neq 0$ and hence $d_\varphi u$ is an eigenvector of $\Delta_{-\varphi}$ associated with $\lambda\in[0,\varepsilon_0 h]$. This proves the first statement. The inclusion $d_\varphi^*(E^{(1)})\subset E^{(0)} $ is obtained by similar arguments.
By definition, the operator $\Delta_\varphi$ maps $E^{(0)}$ into itself and we can consider its restriction to $E^{(0)}$. From the above discussion we know also that $d_\varphi(E^{(0)})\subset E^{(1)} $ and $d_\varphi^*(E^{(1)})\subset E^{(0)} $. Hence we consider
$ \lll=(d_\varphi)_{\vert E^{(0)}\rightarrow E^{(1)}} $ and $ \lll^*=(d_\varphi^*)_{\vert E^{(1)}\rightarrow E^{(0)}}$.
When restricted to $E^{(0)}$, the structure equation \eqref{eq:susy1D} becomes
\begin{equation}\label{eq:susymatrice}
{\mathcal M}=\lll^*\lll \ \ \ \text{ with }\ \ \ \ \ {\mathcal M} := \Delta_\varphi |_{ E^{(0)} } , \ \ \
\lll := (d_\varphi)|_{ E^{(0)} \to E^{(1)}} .
\end{equation}
\subsection{Quasi-modes for $\Delta_\varphi$}
Let $\delta_0=\inf\{\operatorname{diam}(E_n),\,n=1,\ldots,N\}$ and let $\epsilon>0$ be small with respect to $\delta_0$. For all $n=1,\ldots,N$, let $\chi_n$ be smooth cut-off functions such that
\begin{equation}
\label{eq:cutoff} \left\{
\begin{array}{c}
0\leq \chi_n\leq 1,\\
\operatorname{supp}(\chi_n)\subset \{x\in E_n,\,\varphi(x)\leq\sigma_1- \epsilon \}\\
\chi_n=1\text{ on } \{x\in E_n,\,\varphi(x)\leq \sigma_1- 2\epsilon\},
\end{array}
\right.
\end{equation}
where $ \epsilon > 0$ will be chosen small (in particular much smaller than
$ \delta_0 $ in \eqref{eq:delta0}).
Consider now the family of approximated eigenfunctions defined by
\begin{equation}\label{eq:definfn0}
f_n^{(0)}(x)=h^{-\frac 14}c_n(h)\chi_n(x) e^{-\varphi(x)/h} , \ \
\ \Vert f_n^{(0)}\Vert_{L^2}=1 ,
\end{equation}
where $c_n(h)=\varphi''(m_n)^{\frac 14}\pi^{-\frac 14}+\ooo(h)$.
We introduce the projection of these quasi-modes onto the eigenspace $E^{(0)}$:
\begin{equation}\label{eq:defingn0}
g_n^{(0)}:=\Pi^{(0)}f_n^{(0)}.
\end{equation}
\begin{lemma}\label{lem:propf0}
The approximate eigenfunctions defined by \eqref{eq:definfn0}
satisfy
$$ \<f_n^{(0)},f_m^{(0)}\>=\delta_{n,m}, \ \ \ \forall n,m=1,\ldots ,N,
$$
and
$$d_\varphi f_n^{(0)}=\ooo_{L^2}(e^{-(S-\epsilon)/h}), \ \ \ \
g_n^{(0)}-f_n^{(0)}=\ooo_{L^2}(e^{-(S-\epsilon')/h})$$
for any $\epsilon'>\epsilon$.
\end{lemma}
\begin{proof}
The first statement is a direct consequence of the support properties of the cut-off functions $\chi_n$ and the choice of the normalizing constant.
To see the second estimate, recall that $d_\varphi e^{-\varphi/h}=0$. Hence
$$
d_\varphi f_n^{(0)}(x)=h^{\frac 3 4} c_n(h)\chi_n'(x) e^{-\varphi(x)/h}.
$$
Moreover, thanks to \eqref{eq:cutoff},
for $\epsilon>0$ small enough we have
$ \varphi(x)\geq S-2\epsilon $ for $ x\in \operatorname{supp}(\chi_n')$.
Combining these two facts (and replacing $2\epsilon$ by $\epsilon$, which is harmless since $\epsilon>0$ is arbitrarily small) gives the estimate on $d_\varphi f_n^{(0)} $.
We now prove the estimate on $ g_n^{(0)}-f_n^{(0)}$. We first observe that
$$
\Delta_\varphi f_n^{(0)}=d_\varphi^* d_\varphi f_n^{(0)}=h^{\frac 3 4} c_n(h)d_\varphi^*(\chi_n' e^{-\varphi/h})
=h^{\frac 3 4} c_n(h)(-h\chi_n''+2\partial_x\varphi \chi'_n) e^{-\varphi/h}
$$
and the same argument as before shows that
\begin{equation}\label{eq:estimdeltaphifn0}
\Delta_\varphi f_n^{(0)}=\ooo_{L^2}(e^{-(S-\epsilon)/h}).
\end{equation}
From
\eqref{eq:cauchyPi0} and the Cauchy formula, it follows that
\begin{align*}
g_n^{(0)} - f_n^{(0)}
& = \Pi^{(0)} f_n^{(0)} - f_n^{(0)}
= \frac{1}{2 \pi i } \int_\gamma (z - \Delta_\varphi)^{-1} f_n^{(0)} dz - \frac{1}{2 \pi i } \int_\gamma z^{-1} f_n^{(0)}d z \\
& = \frac{1}{2 \pi i } \int_\gamma (z - \Delta_\varphi)^{-1} z^{-1} \Delta_\varphi f_n^{(0)} d z ,
\end{align*}
with $\gamma = \partial B ( 0 , \delta\epsilon_0 h ), 0 < \delta < 1 $.
Since $\Delta_\varphi$ is selfadjoint and $\sigma(\Delta_\varphi)\cap[0,\epsilon_0 h]\subset [0,e^{-1/Ch}]$, we have, for $h$ small enough,
\begin{equation*}
\big\Vert ( z - \Delta_\varphi )^{- 1} \big\Vert = \ooo ( h^{-1} ) ,
\end{equation*}
uniformly for $z \in \gamma$. Using \eqref{eq:estimdeltaphifn0}, we get
$\big\Vert ( z - \Delta_\varphi)^{- 1} z^{- 1} \Delta_\varphi f_n^{(0)} \big\Vert = \ooo \big( h^{- 2} e^{-(S-\epsilon) / h} \big) $,
and, after integration,
$ \| g_n^{(0)} - f_n^{(0)} \| = \ooo ( h^{-1 } e^{- (S-\epsilon) / h} ) = \ooo ( e^{- (S-\epsilon') / h} ), $
for any $\epsilon'>\epsilon$.
\end{proof}
\subsection{Quasi-modes for $\Delta_{-\varphi}$} Since, $\varphi$ and $-\varphi$ share similar properties, the construction of the preceding section produces quasi-modes for $\Delta_{-\varphi}$. Eventually we will only need quasi-modes localized near the maxima $\mathbf{s}_k$. Hence, let
$\theta_k \in C^\infty_c ( \mathbb R ; [ 0 , 1] ) $ satisfy
\begin{equation}
\label{eq:delta0} \operatorname{supp}\theta_k\subset \{\vert x-\mathbf{s}_k\vert\leq\delta_0\}, \ \
\text{ $\theta_k=1$ on $\{\vert x-\mathbf{s}_k\vert\leq \frac{\delta_0}2\}$. }
\end{equation}
We take $ \epsilon $ in the definition \eqref{eq:cutoff} small enough
so that for all $k=1,\ldots, N-1$ we have
\begin{equation}\label{eq:propsupptronc}
\theta_k\chi'_k=\chi'_{k,+}\text{ and } \theta_k\chi'_{k+1}=\chi'_{k+1,-}
\end{equation}
where $\chi_{k,\pm}$ are the smooth functions defined by
$$
\chi_{k,+}(x)=\left\{
\begin{array}{cc}
\chi_k(x)&\text{ if }x\geq m_k,\\
1&\text{ if }x<m_k,
\end{array}
\right.
\ \ \ \
\chi_{k,-}(x)=\left\{
\begin{array}{cc}
\chi_k(x)&\text{ if }x\leq m_k,\\
1&\text{ if }x>m_k.
\end{array}
\right.
$$
Moreover, we also have $\theta_k\theta_l=0$ for all $k\neq l$. The family of quasi-modes associated to these cut-off functions is given by
\begin{equation}\label{eq:definfn1}
f_k^{(1)}(x):=h^{-\frac 14}d_k(h)\theta_k(x) e^{(\varphi(x)-S)/h}, \ \ \
\Vert f_k^{(1)}\Vert_{L^2}=1 ,
\end{equation}
where $d_k(h)=\vert\varphi''(s_k)\vert^{\frac 14}\pi^{-\frac 14}+\ooo(h)$ is the
normalizing constant. Again, we introduce the projection of these quasi-modes onto the eigenspace $E^{(1)}$:
\begin{equation}\label{eq:definen1}
g_k^{(1)}(x):=\Pi^{(1)}f_k^{(1)}.
\end{equation}
\begin{lemma}\label{lem:propf1}
There exists $\alpha>0$ independent of $\epsilon$ such that the following hold true:
$$
\<f_k^{(1)},f_l^{(1)}\>=\delta_{k,l}, \ \ \ \forall k,l=1,\ldots ,N-1,
$$
$$d_\varphi^* f_k^{(1)}=\ooo_{L^2}(e^{-\alpha/h}), \ \ \ g_k^{(1)}-f_k^{(1)}=\ooo_{L^2}(e^{-\alpha/h})$$
\end{lemma}
\begin{proof}
The proof follows the same lines as the proof of Lemma \ref{lem:propf0}.
\end{proof}
\subsection{ Computation of the operator $\lll$}
In this section we represent $\lll$ in a suitable basis.
For that we first observe that the bases $(g_n^{(0)})$ and $(g_k^{(1)})$ are quasi-orthonormal. Indeed, thanks to Lemmas \ref{lem:propf0} and \ref{lem:propf1}, we have
$$
\<g_n^{(0)},g_m^{(0)}\>=\delta_{n,m}+\ooo(e^{-\alpha/h}), \ \ \ \forall n,m=1,\ldots ,N
$$
and
$$
\<g_k^{(1)},g_l^{(1)}\>=\delta_{k,l}+\ooo(e^{-\alpha/h}), \ \ \ \forall k,l=1,\ldots ,N-1.
$$
for some $\alpha>0$.
We then obtain orthonormal bases of $ E^{(0)} $ and $ E^{(1)} $:
\begin{equation}
\label{eq:GrSch}
\begin{split}
(g_n^{(0)} )_{ 1\leq n \leq N } & \xrightarrow{\text{Gram--Schmidt process}} (e_n^{(0)})_{ 1 \leq n \leq N } , \\
(g_k^{(1)} )_{ 1\leq k \leq N -1 } & \xrightarrow{\text{Gram--Schmidt process}} (e_k^{(1)})_{ 1 \leq k \leq N -1 } .
\end{split}
\end{equation}
It follows from the approximate orthonormality above
that the change of basis matrix $P_j$ from $(g_n^{(j)})$ to
$( e_n^{(j)})$ satisfies
\begin{equation}\label{eq:passage}
P_j=I +\ooo(e^{-\alpha/h})
\end{equation}
for $j=0,1$.
To describe the matrix of $\lll$ in the bases $(e_n^{(0)})$ and $(e_k^{(1)})$ we introduce
an $(N-1)\times N$ matrix $\hat L=(\hat\ell_{ij})$ defined by
\begin{equation}
\label{eq:lhat}
\hat\ell_{ij}=\<f_i^{(1)},d_\varphi f_j^{(0)}\>.
\end{equation}
We claim that $\hat L$ is exponentially close to the matrix $L$ of $\lll$ in these bases (see Lemma \ref{lem:approxL} below). To see
this, we first give a precise expansion of $\hat L$:
\begin{lemma}\label{lem:computhatL}
The matrix $ \hat L $ defined by \eqref{eq:lhat} is given by
$\hat L=( h/\pi)^{\frac 12}e^{-S/h} \bar L $ where
$\bar L$ admits a classical expansion $\bar L\sim\Sigma_{k=0}^\infty h^k L_k$ with
\begin{equation}
\label{eq:matrixL}
L_0=
\left(
\begin{array}{ccccccc}
-\nu_1^{\frac14} \mu_1^{\frac14} & \ \nu_1^{\frac14} \mu_2^{\frac14} &0&0&\ldots&0\\
0&- \nu_2^{\frac14} \mu_2^{\frac14} & \nu_2^{\frac14} \mu_3^{\frac14} &0&\ldots&0\\
\vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\
\vdots&\ddots&\ddots&\ddots&\ddots&0\\
0&0&\ldots&0& - \nu_{N-1}^{\frac14} \mu_{N-1}^{\frac14} &
\nu_{N-1}^{\frac14} \mu_{N}^{\frac14}
\end{array}
\right).
\end{equation}
\end{lemma}
\begin{proof}
From \eqref{eq:definfn0} and \eqref{eq:definfn1}, we have
\begin{equation*}
\begin{split}
\hat\ell_{ij}&=\<f_i^{(1)},d_\varphi f_j^{(0)}\>
=h^{-\frac 12}d_i(h)c_j(h)\int_\mathbb R \theta_i(x) e^{(\varphi(x)-S)/h}d_\varphi(\chi_j(x)e^{-\varphi(x)/h})dx\\
&=h^{\frac 12}d_i(h)c_j(h)e^{-S/h}\int_\mathbb R\theta_i(x)\chi_j'(x)dx.
\end{split}
\end{equation*}
Moreover, since $\operatorname{supp} \theta_i\cap \operatorname{supp} \chi_j=\emptyset $ except for $j=i$ or $j=i+1$, it follows from \eqref{eq:propsupptronc} that
\begin{equation}
\label{eq:psichi}
\int_\mathbb R\theta_i(x)\chi_j'(x)dx=\delta_{i,j}\int_\mathbb R\chi_{i,+}'(x)dx+\delta_{i+1,j}\int_\mathbb R\chi_{i+1,-}'(x)dx=-\delta_{i,j}+\delta_{i+1,j} .
\end{equation}
On the other hand, we recall that $d_i(h)$ and $c_j(h)$ both have a classical expansion. Together with the above equality, this shows that
$\hat L$ has the required form and it remains to prove the formula giving $L_0$.
To that end we observe that
$$
d_i(h)c_j(h)=\pi^{-\frac 12}((\vert\varphi''(s_i)\vert\varphi''(m_j))^{\frac 14}+\ooo(h))=\mu_j^{\frac 14}\nu_i^{\frac14}\pi^{-\frac 12} +\ooo(h)
$$
in the notation of \eqref{eq:nothessphi}. Combining this with \eqref{eq:psichi}
we obtain
$$
\hat\ell_{ij}= h^{\frac 12}\pi^{-\frac 12} e^{-S/h} \mu_j^{\frac 14}\nu_i^{\frac14} (-\delta_{i,j}+\delta_{i+1,j}+\ooo(h))
$$
which gives \eqref{eq:matrixL}.
\end{proof}
\begin{lemma}\label{lem:approxL} Let $ L $ be the matrix
of $ \mathcal L $ in the basis obtained in \eqref{eq:GrSch}.
There exists $\alpha'>0$ such that
$ L=\hat L+\ooo(e^{-(S+\alpha')/h})$, where $ \hat L $ is defined by \eqref{eq:lhat} and is described in Lemma \ref{lem:computhatL}.
\end{lemma}
\begin{proof}
It follows from \eqref{eq:passage} that
\begin{equation}\label{eq:chbase1}
L=(I+\ooo(e^{-\alpha/h})) \tilde L(I+\ooo(e^{-\alpha/h}))
\end{equation}
where $\tilde L=(\tilde \ell_{i,j})$ with $\tilde \ell_{i,j}=\<g_i^{(1)},d_\varphi g_j^{(0)}\>$. Moreover, \eqref{eq:intertwin} implies that
$\Pi^{(1)}d_\varphi=d_\varphi \Pi^{(0)}$. Using this identity and the fact that $\Pi^{(0)},\Pi^{(1)}$ are orthogonal projections, we have
\begin{equation*}
\begin{split}
\<g^{(1)}_i,d_\varphi g^{(0)}_j\>&=\<g^{(1)}_i,d_\varphi \Pi^{(0)}f^{(0)}_j\>=\<g^{(1)}_i,\Pi^{(1)}d_\varphi f^{(0)}_j\>=\<g^{(1)}_i,d_\varphi f^{(0)}_j\>\\
&=\<f^{(1)}_i,d_\varphi f^{(0)}_j\>+\<g^{(1)}_i-f^{(1)}_i,d_\varphi f^{(0)}_j\>
\end{split}
\end{equation*}
But from Lemmas \ref{lem:propf0}, \ref{lem:propf1} and the Cauchy-Schwarz inequality we get
$$
\vert \<g^{(1)}_i-f^{(1)}_i,d_\varphi f^{(0)}_j\>\vert\leq C e^{-(\alpha+S-\epsilon')/h}.
$$
Since $\alpha$ is independent of $\epsilon'$ which can be chosen as small as we want, it follows that there exists $\alpha'>0$ such that
$\tilde\ell_{i,j}=\hat \ell_{i,j}+\ooo(e^{-(S+\alpha')/h})$. Combining this estimate, \eqref{eq:chbase1} and the fact that $ \hat \ell_{i,j}=\ooo(e^{-S/h})$, we get
the announced result.
\end{proof}
It is now easy to describe
${\mathcal M}$ as a matrix:
\begin{lemma}\label{lem:asymptM}
Let $M$ be the matrix representation of ${\mathcal M}$ in the basis $(e^{(0)}_n)$. Then
$$M= h e^{-2S/h}A$$
where $A$ is symmetric positive with a classical expansion $A\sim\sum_{k=0}^\infty h^k A_k$
with $A_0$ given by \eqref{eq:defA0}.
\end{lemma}
{\it Proof. }
By definition, $M=L^*L$ and it follows from Lemmas \ref{lem:computhatL} and \ref{lem:approxL} that
$$
L^*L=(\hat L+\ooo(e^{-(S+\alpha')/h}))^*(\hat L+\ooo(e^{-(S+\alpha')/h}))
= \frac{h}{\pi}\, e^{-2S/h}(\bar L^*\bar L+\ooo(e^{-\alpha'/h})) .
$$
Then, $A:=h^{-1}e^{2S/h}L^*L$ is clearly positive semi-definite and admits a classical expansion since $\bar L$ does. Moreover, the leading term of this expansion is
$\pi^{-1}\bar L_0^*\bar L_0$ and
a simple computation shows that $\pi^{-1}\bar L_0^*\bar L_0=A_0$, where
$ A_0 $ is given by \eqref{eq:defA0}.
\hfill $\square$\\
\noindent
{\bf Remark.} Innocent as this lemma might seem, the supersymmetric structure,
that is, writing $ \Delta_\varphi |_{ E^{(0)}} $ as $\lll^*\lll$ using $ d_\varphi $, is very
useful here.
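The identity $\pi^{-1}\bar L_0^*\bar L_0=A_0$ underlying Lemma \ref{lem:asymptM} is also easy to confirm numerically; the sketch below does so for randomly chosen $\mu_n,\nu_k>0$, purely as a consistency check of \eqref{eq:defA0} and \eqref{eq:matrixL}.
\begin{verbatim}
# Numerical check that pi^{-1} L_0^T L_0 equals A_0 of (eq:defA0), N = 5.
import numpy as np

rng = np.random.default_rng(0)
N = 5
mu = rng.uniform(0.5, 2.0, N)        # phi''(m_n)
nu = rng.uniform(0.5, 2.0, N - 1)    # -phi''(s_k)

L0 = np.zeros((N - 1, N))
for i in range(N - 1):
    L0[i, i]     = -nu[i]**0.25 * mu[i]**0.25
    L0[i, i + 1] =  nu[i]**0.25 * mu[i + 1]**0.25

nu_ext = np.concatenate(([0.0], nu, [0.0]))     # convention nu_0 = nu_N = 0
A0 = np.zeros((N, N))
for i in range(N):
    A0[i, i] = mu[i]**0.5 * (nu_ext[i]**0.5 + nu_ext[i + 1]**0.5) / np.pi
    if i < N - 1:
        A0[i, i+1] = A0[i+1, i] = -nu[i]**0.5 * mu[i]**0.25 * mu[i+1]**0.25 / np.pi

print(np.allclose(L0.T @ L0 / np.pi, A0))       # expected: True
\end{verbatim}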
\begin{lemma}\label{cor:spectM}
Denote by $\mu_1(h)\leq\ldots\leq \mu_N(h)$ the eigenvalues of $A(h)$. Then,
$$
\mu_1(h)=0\text{ and }\mu_k(h)=\mu^0_k+\ooo(h), \ \ \ \forall k\geq 2,
$$
where $0=\mu^0_1<\mu^0_2\leq\mu^0_3\leq\ldots\leq\mu^0_N$ denote the eigenvalues of $A_0$.
Moreover, a normalized
eigenvector associated with $\mu_1^0$ is $\xi^0=N^{-\frac 12}(1,\ldots,1)$ and there exists a normalized vector $\xi(h)\in\ker(A(h))$ such that
\begin{equation}\label{eq:comparnoyau}
\xi(h)=\xi^0+\ooo(h).
\end{equation}
\end{lemma}
{\it Proof. }
Many of the statements of this lemma are immediate consequences of Lemma \ref{lem:asymptM}. We emphasize the fact that
$0$ belongs to $\sigma(A)$ since $0\in\sigma(\Delta_\varphi)$.
The fact that $\xi^0$ is in the kernel of $A_0$ is a simple computation. Finally, for any normalized $\xi\in \ker(A(h))$, we have
$$
\xi-\<\xi,\xi^0\>\xi^0=\frac 1{2i\pi}\Big(\int_{\gamma}z^{-1}\xi dz-\int_{\gamma}(z-A_0)^{-1}\xi dz\Big) =\frac 1{2i\pi}\int_{\gamma}(A_0-z)^{-1}z^{-1}A_0\xi dz
$$
where $\gamma$ is a small path around $0$ in $\mathbb C$. Since $\xi\in\ker(A(h))$ and $A=A_0+\ooo(h)$, we have $A_0\xi=\ooo(h)$ and we obtain \eqref{eq:comparnoyau}.
\hfill $\square$\\
\subsection{Proof of Theorem \ref{th:DynamWitten}}
\label{s:pft}
Let $u$ be the solution of \eqref{eq:heatwitten} with $u_0=\Psi(\beta)$,
$ | \beta | \leq 1 $ (see
\eqref{eq:defpsin},\eqref{eq:defPsi} and \eqref{eq:definfn0} for
definitions of $ \psi_n$, $ \Psi$ and $ f_n^{(0)} $, respectively). Then,
\begin{equation}
\begin{split}
u&=e^{-t\Delta_\varphi}\Pi^{(0)}u_0+e^{-t\Delta_\varphi}\widehat\Pi^{(0)}u_0\\
&=e^{-t{\mathcal M}}\Pi^{(0)}u_0+e^{-t\Delta_\varphi}\widehat\Pi^{(0)}u_0, \ \ \
\widehat\Pi^{(0)}: = I-\Pi^{(0)}.
\end{split}
\end{equation}
Since $\operatorname{1\negthinspace l}_{E_n}-\chi_n$ is supported near $\{\mathbf{s}_{n-1},\mathbf{s}_n\}$, we have
$\psi_n-f_n^{(0)}=\ooo_{L^2}(e^{-\alpha/h})$, for all $n=1,\ldots, N$, and it follows that
$$
u_0=\bar u_0+\ooo_{L^2}(e^{-\alpha/h}), \ \ \
\bar u_0 :=\sum_{n=1}^N\beta_n f_n^{(0)}.
$$
Then, using Lemma \ref{lem:propf0} and \eqref{eq:passage}, we get
$
u_0=\tilde u_0+\ooo_{L^2}(e^{-\alpha/h})
$
with
$
\tilde u_0:=\sum_{n=1}^N\beta_n e_n^{(0)},
$
where $e_n^{(0)}$ is the orthonormal basis of $E^{(0)}$ given by \eqref{eq:GrSch}. Since
$\Pi^{(0)}e_n^{(0)}=e^{(0)}_n$ and $\widehat\Pi^{(0)}e_n^{(0)}=0$, we have
$$
u(t)=e^{-t{\mathcal M}}\tilde u_0+\ooo_{L^2}(e^{-\alpha/h}).
$$
If $M$ is the matrix of the operator ${\mathcal M}$ in the basis $(e^{(0)}_n)$ then
$$
u(t)=\sum_{n=1}^N(e^{-tM}\beta)_n e_n^{(0)}+\ooo_{L^2}(e^{-\alpha/h}).
$$
Going back from $e_n^{(0)}$ to $\psi_n$ as above, we see that
\begin{equation}
\label{eq:utpsi}
u(t)=\sum_{n=1}^N(e^{-tM}\beta)_n \psi_n+\ooo_{L^2}(e^{-\alpha/h})=\Psi(e^{-tM}\beta)+\ooo_{L^2}(e^{-\alpha/h})
\end{equation}
and the proof of \eqref{eq:DynamWitten1} (main statement in Theorem \ref{th:DynamWitten}) is complete. We now prove \eqref{eq:DynamWitten2}. Since the linear map $\Psi:\mathbb C^N\rightarrow L^2(dx)$
is bounded uniformly with respect to $h$, and thanks to \eqref{eq:DynamWitten1}, the proof reduces to showing (after time rescaling) that there exists $C>0$ such that
\begin{equation}\label{eq:DynamWitten3}
\vert e^{-\tau A}-e^{-\tau A_0}\vert\leq Ch, \ \ \ \forall \tau\geq 0.
\end{equation}
Since, by Lemma \ref{cor:spectM}, $ A $ and $ A_0 $ both have $ 0 $ as a simple eigenvalue with the approximate eigenvector given by $ ( 1, \cdots , 1 ) $,
we see that for any norm on $ \mathbb C^N $,
\[ \begin{split} | e^{ - \tau A } - e^{ - \tau A_0 } |
& \leq | e^{ - \tau A_0 } |_{ \{ ( 1, \cdots , 1 ) \}^\perp}
| I - e^{ - \tau \mathcal O ( h ) } |_{ \ell^2 \to \ell^2 } +
C h \\
& \leq C e^{ - c \tau } \tau h + C h =
\mathcal O ( h ) . \end{split} \]
This is exactly \eqref{eq:DynamWitten3}.
\hfill $\square$\\
We now prove one of the consequences of Theorem \ref{th:DynamWitten}.
\begin{proof}[Proof of \eqref{eq:rhot}]
We have seen in the preceding proof that $e_n^{(0)}-\psi_n=\ooo_{L^2}(e^{-C/\epsilon^2})$ and since
\[ \| \psi_n - ( \mu/ 2 \pi \epsilon^2 )^{\frac14} \operatorname{1\negthinspace l}_{E_n }e^{-\varphi/2\epsilon^2} \|_{L^2} =
\mathcal O ( \epsilon^2 ) ,\]
it follows that
$\Pi^{(0)}u_0=\Psi(\beta)+\ooo(\epsilon^2\Vert u_0\Vert_{L^2})$ with $\beta\in\mathbb C^N$ given by
$$
\beta_n = (\tfrac{ \mu}{ 2 \pi \epsilon^2 } )^{\frac14} \int_{ E_n} u_0 ( x )
e^{ -\varphi(x) /2\epsilon^2 } dx= (\tfrac{ \mu}{ 2 \pi \epsilon^2 } )^{\frac14} \int_{ E_n} \rho_0 ( x )
dx .
$$
Applying \eqref{eq:DynamWitten2} (second part of Theorem \ref{th:DynamWitten}) with
$h=2\epsilon^2$ gives
\begin{equation}\label{equa1}
e^{- t \Delta_\varphi } \Pi^{(0)} u_0 = \sum_{ n=1}^N ( e^{ - t \nu_h A_0 }
\beta )_n (\tfrac{ \mu}{ 2 \pi \epsilon^2 } )^{\frac14} \operatorname{1\negthinspace l}_{E_n } e^{ -\varphi/2\epsilon^2} +
\mathcal O_{L^2} ( \epsilon^2 ) \| u_0\|_{L^2}.
\end{equation}
On the other hand, Proposition \ref{prop:roughestimspectre} shows that
\begin{equation}\label{equa2}
e^{-t \Delta_\varphi } (I-\Pi^{(0)} ) u_0 = \mathcal O_{L^2} (
e^{ - t \epsilon^2 /C } ) \| u_0 \|_{ L^2 } .
\end{equation}
Since
$ \rho ( h^2 t ) = e^{-\varphi/h} u ( t ) $, \eqref{equa1} and \eqref{equa2} yield
\begin{equation}
\label{eq:rhotC} \begin{gathered} \rho ( 2 \epsilon^2 e^{ S/\epsilon^2 } \tau ) =
\sum_{ n=1}^N ( e^{ - \tau A_0 }
\beta )_n (\tfrac{ \mu}{ 2 \pi \epsilon^2 } )^{\frac14} \operatorname{1\negthinspace l}_{E_n }e^{ -\varphi/\epsilon^2} +
r_\epsilon(\tau)
\end{gathered}
\end{equation}
with
$$
r_\epsilon(\tau)=e^{-\varphi/2\epsilon^2}\Big(\ooo_{L^2}(e^{-c \tau e^{ S/ \epsilon^2}})+\ooo_{L^2}(\epsilon^2)\Big)\| \rho_0 \|_{L^2_\varphi }.
$$
By the Cauchy-Schwarz inequality it follows that
$\Vert r_\epsilon(\tau)\Vert_{L^1}\leq C( \epsilon^{\frac 52} + e^{ - c \tau e^{ S/ \epsilon^2}}
) \| \rho_0 \|_{L^2_\varphi } $.
\end{proof}
\section{A higher dimensional example}
\label{s:get_high}
The same principles apply when the wells may have different heights and in
higher dimensions. In both cases there are interesting
combinatorial and topological (when $ d > 1 $) complications and we refer to
\cite[\S 1.1, \S 1.2]{Mi16} for a presentation and references.
To illustrate this we give
a higher dimensional result in a simplified setting.
Suppose that $\varphi:\mathbb R^d\rightarrow\mathbb R$ is a smooth Morse function
satisfying \eqref{eq:hyp-croissancephi} and denote by ${\mathcal U}^{(j)}$ the finite sets of critical points of index $j$, $n_j := |{\mathcal U}^{(j)}| $. We assume that
\eqref{eq:hyptopol} holds and write $ S := \sigma_1 - \varphi_0 $.
In the notation of \eqref{eq:En} we have $ n_0 = N $ and we also assume that each $ E_n $ contains
exactly one minimum. Hence we can label the components by the minima:
\[ \forall n=1,\ldots, N,\;\;\exists \, ! \, \mathbf{m}\in E_n ,\;\min_{ x \in E_n } \varphi ( x ) = \varphi ( \mathbf m )\]
and we denote $ E( \mathbf m ) := E_n $.
Since $\varphi$ is a Morse function,
\begin{equation}
\label{eq:grpr}
\begin{gathered} \forall \mathbf{m},\mathbf{m}'\in {\mathcal U}^{(0)}, \ \ \mathbf{m} \neq \mathbf{m}' \ \Longrightarrow
\bar E(\mathbf{m})\cap\bar E(\mathbf{m}') \subset {\mathcal U}^{(1)}, \\
\forall \, \mathbf{s}\in{\mathcal U}^{(1)}, \ \exists\, ! \, \mathbf{m} , \mathbf{m}' \in
\mathcal U^{(0)} , \ \ \ \ \mathbf{s}\in\bar E(\mathbf{m})\cap\bar E(\mathbf{m}') .
\end{gathered}
\end{equation}
To simplify the presentation we make an additional assumption:
\begin{equation}
\label{eq:h3}
\forall \mathbf{m},\mathbf{m}'\in {\mathcal U}^{(0)}, \ \ \mathbf{m} \neq \mathbf{m}' \ \Longrightarrow
|\bar E(\mathbf{m})\cap\bar E(\mathbf{m}') | \leq 1 .
\end{equation}
Under these assumptions, the pair $({\mathcal U}^{(0)},{\mathcal U}^{(1)})$ defines a graph $\ggg$. The elements of ${\mathcal U}^{(0)}$ are the vertices of $ \ggg $ and the elements of
${\mathcal U}^{(1)}$ are the edges of $ \ggg $: $\mathbf{s}\in{\mathcal U}^{(1)}$ is an edge between
$ \mathbf{m} $ and $ \mathbf{m}' $ in $ {\mathcal U}^{(0)} $
if $ \mathbf{s} \in\bar E ( \mathbf{m} ) \cap \bar E ( \mathbf{m}') $
-- see Fig.\ref{figsublevel} for an example.
The same graph has been constructed in \cite{Lamit} for a certain discrete
model of the Kramers--Smoluchowski equation.
We now introduce
the discrete Laplace operator on $ \ggg $, $M _\ggg $ -- see
\cite{CvDoSa95} for the background and results about $ M_\ggg $.
If
the degree ${d}(\mathbf{m})$ is defined as the number of edges
at the vertex $\mathbf{m}$, then $M_\ggg$ is given by the matrix $ (a_{\mathbf{m},\mathbf{m}'})_{\mathbf{m},\mathbf{m}'\in{\mathcal U}^{(0)}}$:
\begin{equation}\label{eq:deflaplacegraphe}
a_{\mathbf{m},\mathbf{m}'}=
\left\{
\begin{array}{ll}
{d}(\mathbf{m}), & \mathbf{m}=\mathbf{m}' \\
-1 & \mathbf{m}\neq \mathbf{m}', \ \ \bar E(\mathbf{m})\cap\bar E(\mathbf{m}') \neq \emptyset , \\
\ \ \, 0 & \text{ otherwise}
\end{array}
\right.
\end{equation}
\begin{figure}
\center
\scalebox{0.35}{ \input{sublevel.pdf_tex} } \hspace{0.1in}
\scalebox{0.6}{ \input{graph1.pdf_tex}}
\caption{{\em Left:} The sublevel set $\{\varphi<\sigma_1\}$ (dashed region) associated to a potential $\varphi$ satisfying \eqref{eq:hyptopol}. The x's represent local minima, the o's, local maxima. {\em Right:} The graph associated to the
potential on the left.}
\label{figsublevel}
\end{figure}
Among basic properties of the matrix $M_\ggg$, we recall:
\begin{itemize}
\item[-] it has a square structure $M_\ggg=\lll^*\lll$, where $\lll$ is the transpose of the incidence matrix of any oriented version of the graph $\ggg$.
In particular, $M_\ggg$ is symmetric positive.
\item[-] thanks to \eqref{eq:hyptopol} and \cite[Proposition B.1]{Mi16}, the graph $\ggg$ is connected.
\item[-] $0$ is a simple eigenvalue of $M_\ggg$.
\end{itemize}
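As an elementary illustration of the factorization $M_\ggg=\lll^*\lll$, consider the linear graph with three vertices $\mathbf m_1,\mathbf m_2,\mathbf m_3$ and two edges, oriented from left to right. Then
\[
\lll=\begin{pmatrix} -1 & 1 & 0\\ 0 & -1 & 1\end{pmatrix},\qquad
M_\ggg=\lll^*\lll=\begin{pmatrix} 1 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 1\end{pmatrix},
\]
in agreement with \eqref{eq:deflaplacegraphe}: the diagonal entries are the degrees $1,2,1$ and the kernel is spanned by $(1,1,1)$.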
We make one more assumption which is a higher dimensional analogue of
the hypothesis in Theorem \ref{th1}: there exist $ \mu , \nu > 0 $ such that
\begin{equation}
\label{eq:h4}
\begin{split}
\det \varphi''(\mathbf{m}) &=\mu, \ \ \ \ \forall \mathbf{m} \in \mathcal U^{(0)}
\\
\frac {\lambda_1(\mathbf{s})^2}{\det\varphi''(\mathbf{s})} & =-\nu, \ \ \
\forall \mathbf{s} \in \mathcal U^{(1)},
\end{split}
\end{equation}
where $\lambda_1 (\mathbf{s})$ is the unique negative eigenvalue of $\varphi''(\mathbf{s})$.
Assumptions \eqref{eq:h3} and \eqref{eq:h4} can be easily removed.
Without \eqref{eq:h4} the graph $\ggg$ is replaced by a weighted graph with a weight function depending explicitly on the values of $\varphi''$ at critical points.
Removing \eqref{eq:h3} leads to multigraphs in which there may
be several edges between two
vertices. This can also be handled easily.
Assumption \eqref{eq:hyptopol} however is more fundamental and removing it
results in major complications.
We refer to \cite{Mi16} for results in that situation. Here we restrict ourselves to the following remark.
\noindent
{\bf Remark.} Under the assumption \eqref{eq:hyptopol} the proof presented in the one-dimensional case applies with relatively simple modifications. The serious difference lies in the description of $ E^{(1)} $, the eigenspace of $ \Delta_\varphi $ on one-forms, in terms of exponentially accurate quasi-modes (in one dimension it was easily done using Lemma \ref{lem:propf1}). That description is however provided by Helffer--Sj\"ostrand in the self-contained Section 2.2 of \cite{HeSj85_01} -- see Theorem 2.5 there. The computation of \eqref{eq:lhat} becomes more involved and is based on the method of stationary phase -- see Helffer--Klein--Nier \cite[Proof of Proposition 6.4]{HeKlNi04_01}.
The analogue of Theorem \ref{th1} is
\begin{theorem}\label{t:higher}
Suppose that $ \varphi $ satisfies \eqref{eq:hyp-croissancephi},\eqref{eq:hyptopol},\eqref{eq:h3} and \eqref{eq:h4}. If
\begin{equation}
\label{eq:rho0h}
\rho_0=\left(\frac \mu{2\pi\epsilon^2}\right)^{\frac 12} \left( \sum_{n=1}^N \beta_n
\operatorname{1\negthinspace l}_{E_n} + \, r_\epsilon \right) e^{-\varphi/\epsilon^2} , \ \ \
\lim_{ \epsilon \to 0 } \Vert r_\epsilon\Vert_{L^\infty} = 0, \ \ \beta\in
\mathbb R^N ,
\end{equation}
then the solution to \eqref{eq:KS} satisfies, uniformly for $ \tau \geq 0 $,\begin{equation}\label{eq:weakconvh}
\rho( 2\epsilon^2e^{S/\epsilon^2}\tau,x)\ \rightarrow \ \sum_{n=1}^N\alpha_n(\tau)\delta_{m_n} (x) ,
\ \ \ \epsilon \to 0,
\end{equation}
in the sense of distributions in $ x $, where
$\alpha(\tau)=(\alpha_1,\ldots,\alpha_N)(\tau)$ solves
\begin{equation}
\label{eq:dotalbis} \partial_\tau \alpha= - \kappa M_\ggg \alpha , \ \ \ \alpha ( 0 ) = \beta ,
\end{equation}
where $ M_\ggg $ is given by \eqref{eq:deflaplacegraphe} and $ \kappa = \pi^{-1}\mu^{\frac12} \nu^{\frac12} $ with
$ \mu $ and $ \nu $ in \eqref{eq:h4}.
\end{theorem}
We also have the analogue of \eqref{eq:rhot} for any initial data.
As in the one dimensional case this theorem is a consequence of a more
precise theorem formulated using the localized states
\begin{equation}\label{eq:defpsind}
\psi_n(x)=c_n(h)h^{-\frac d 4}\operatorname{1\negthinspace l}_{E_n}(x)e^{-(\varphi-\varphi_0)(x)/h},
\end{equation}
where $c_n(h)$ is a normalization constant such that $\Vert \psi_n\Vert_{L^2}=1$.
We then define a map $\Psi:\mathbb R^{N}\rightarrow L^2(\mathbb R^d)$ by
\begin{equation}\label{eq:defPsiD>1}
\Psi(\beta)=\sum_{n=1}^{N}\beta_n\psi_n, \ \ \ \forall \beta = ( \beta_1 , \ldots , \beta_{N} )
\in\mathbb R^{N}.
\end{equation}
We have the following analogue of Theorem \ref{th:DynamWitten}.
\begin{theorem}\label{th3}
Suppose $\varphi $ satisfies \eqref{eq:hyp-croissancephi},\eqref{eq:hyptopol},\eqref{eq:h3} and \eqref{eq:h4}. There exist $C>0$ and $h_0>0$ such that for all $\beta\in\mathbb R^{N}$ and all $0 < h < h_0 $, we have
$$
\Vert e^{-t\Delta_\varphi}\Psi(\beta)-\Psi(e^{-t\kappa\nu_h A}\beta)\Vert_{L^2}\leq Ce^{-1/Ch}, \ \ \ t\geq 0,
$$
where $\nu_h=he^{-2S/h}$, $\kappa = \pi^{-1}\mu^{\frac12} \nu^{\frac12}$ and $A=A(h)$ is a real symmetric positive matrix having a classical expansion
$
A\sim \sum_{k=0}^\infty h^k A_k
$
and
$A_0=M_\ggg$ with $M_\ggg$ the Laplace matrix defined by \eqref{eq:deflaplacegraphe}.
\end{theorem}
\begin{figure}[h]
\center
\scalebox{0.6}{ \input{figure7.pdf_tex}} \hspace{0.1in}
\includegraphics[scale=0.5]{graph2}
\caption{A two dimensional potential which is a cyclic analogue of
the potential shown in Fig.\ref{fig1}: the corresponding
matrix describing the Kramers--Smoluchowski evolution is given by \eqref{eq:A1}. It should be compared to the matrix \eqref{eq:simpleA0} for the potential in Fig.\ref{fig1}. The corresponding cyclic graph is shown on the right.}
\label{f2}
\end{figure}
We conclude with an example \cite[\S 6.3]{Mi16} for which the graph $\ggg$ is elementary.
We assume that $ d = 2 $, $ \varphi $ has a maximum at
$ x = 0 $, there are $ N $ minima, $ \mathbf{m}_n $, and $ N $ saddle points, $\mathbf{s}_n $, and that \eqref{eq:hyptopol} holds -- see Fig.\ref{f2}. We assume also that
\[ \det \varphi'' (\mathbf{m}_n ) = \mu > 0 , \ \ \ \frac{\lambda_1(\mathbf{s}_n)}{\lambda_2(\mathbf{s}_n)} = - \nu < 0 , \]
where for $\mathbf{s}\in{\mathcal U}^{(1)}$, $\lambda_1(\mathbf{s})>0>\lambda_2(\mathbf{s})$ denote the two eigenvalues of $\varphi''(\mathbf{s})$.
Then the assumptions of Theorem \ref{th3} are satisfied. The graph $\ggg$ associated to $\varphi$ is the cyclic graph with $N$ vertices and the corresponding Laplacian is given by
\begin{equation}
\label{eq:A1} {\mathcal A}_\ggg =
\begin{pmatrix} 2&\!\! \! - 1 &0&0&\ldots&\ldots&\ldots&\!\! \! - 1\\
\!\! \! - 1 &2&\!\! \! - 1 &0&\ldots&\ldots&\ldots&0\\
0&\!\! \! - 1 &2&\!\! \! - 1 &0&\ldots&\ldots&0\\
\vdots&0&\!\! \! - 1 &2&\ddots&\ddots&\ldots&0\\
\vdots&\vdots&\ddots&\ddots&\ddots&\ddots&\ddots&0\\
\vdots&\vdots&\ddots&\ddots&\ddots&\ddots&\!\! \! - 1 &0\\
0&\vdots&\ddots&\ddots&\ddots&\!\! \! - 1 &2&\!\! \! - 1 \\
\!\! \! - 1&0&\ldots&\ldots&\ldots&0&\!\! \! - 1 &2
\end{pmatrix}.
\end{equation}
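Since the matrix in \eqref{eq:A1} is circulant, its spectrum is explicit (a standard fact which we recall for convenience):
\[
\operatorname{Spec}\big({\mathcal A}_\ggg\big)=\Big\{2-2\cos\tfrac{2\pi k}{N}=4\sin^2\tfrac{\pi k}{N}\ :\ k=0,\ldots,N-1\Big\}.
\]
Hence, in the limiting evolution $\partial_\tau\alpha=-\kappa{\mathcal A}_\ggg\alpha$, the $k$-th Fourier mode decays like $e^{-4\kappa\sin^2(\pi k/N)\tau}$, and the slowest nontrivial relaxation rate is $4\kappa\sin^2(\pi/N)$.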
\def\arXiv#1{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
Motivated by practical applications in data storage and communication,
\emph{locality} has become an increasingly important notion in the study of erasure codes.
A code with some locality property allows certain
operations to be performed by accessing only a portion of a codeword
or a data vector.
As a result, a significant amount of resources (bandwidth, computational power, memory) can be saved. We list below some representative families of
codes with certain locality properties.
\begin{itemize}
\item \emph{Locally decodable codes} were proposed by Katz and Trevisan~\cite{KatzTrevisan2000}, which allow a single symbol of the original data to be decoded with high probability by only querying a small number of coded symbols of a possibly corrupted codeword.
\item
\emph{Locally testable codes} were first systematically studied by
Goldreich and Sudan~\cite{GoldreichSudan2002}.
For such a code, there exists a test that checks whether a given string
is a codeword, or rather far from the code, by reading only a constant
number of symbols of the string.
\item \emph{Locally repairable codes} (LRC) (see e.g.~\cite{OggierDatta2011, Gopalan2012}) were tailor-made for distributed storage systems
(DSS), which guarantee that any erased coded symbol (corresponding to a failed storage node) can be reconstructed from a small subset of other coded symbols (corresponding to other surviving nodes).
LRCs with a given locality network topology were studied in \cite{Mazumdar2014,Shanmugam2014}.
\item
\emph{Update efficient codes}, also known as \emph{locally updatable codes} (see e.g.~\cite{Anthapadmanabhan2010, Chandran2014, Mazumdar2015}), require updating as few nodes
as possible whenever one data symbol is changed.
\item
\emph{Coding with constraints}~\cite{HalbawiHoYaoDuursma2013, HalbawiThillHassibi15, YanSprintsonZelenko2014,
DauSongYuenISIT14, DauSongYuen_JSAC2014, SongDauYuen15} requires that each coded symbol must be a function of a given subset of data symbols. In a classical code, each coded symbol
can be a function of all data symbols.
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{ToyExample_cropped.pdf}
\caption{An example of an LEDC with two local subcodes.
Here ${\mathcal K}_1=\{x_1, x_2, x_3\}$, ${\mathcal K}_2 = \{x_2, \ldots, x_5\}$,
${\mathcal N}_1=\{c_1,\ldots,c_4\}$, and ${\mathcal N}_2 = \{c_5, \ldots, c_{10}\}$.
The underlying field is $\mathbb{F}_7$, the set of integers modulo $7$.
Coded symbols in each group
${\mathcal N}_i$ are encoded from the corresponding data symbols in ${\mathcal K}_i$
and conversely can be used to decode these data symbols.
Each local subcode is an MDS code and moreover, the LEDC, which encodes $k = |{\mathcal K}_1 \cup {\mathcal K}_2|=5$
data symbols into $n = |{\mathcal N}_1|+|{\mathcal N}_2| = 10$ coded symbols, can reach the optimal minimum distance $d=4$.
In this example, the subcodes are encoded by simply using two Vandermonde
matrices. In general, this naive construction
yields LEDCs with optimal distances only in certain cases (see
Theorem~\ref{thm:constructionI} and Example~\ref{ex:counter_ex}).}
\label{fig:toy-example}
\vspace{-15pt}
\end{figure}
Continuing along this line of research, we propose the class of
\emph{locally encodable and decodable codes} (LEDC) that provides
a higher level of operational locality in distributed storage systems (DSS) compared to currently known codes.
In an LEDC, the set of $n$ coded symbols (corresponding to $n$ storage
nodes) is partitioned into $m$ disjoint subsets ${\mathcal N}_i$'s, each of which is responsible for encoding and decoding of a given subset ${\mathcal K}_i$ of some data symbols.
Each pair $({\mathcal N}_i,{\mathcal K}_i)$ forms a \emph{maximum distance separable} (MDS) code, referred to as a \emph{local} subcode.
The parameter of interest is the minimum (Hamming) distance of the
\emph{global} LEDC. An LEDC that has the largest minimum distance
is called \emph{optimal}.
We illustrate in Fig.~\ref{fig:toy-example} a toy example
of an optimal LEDC with minimum distance $d = 4$. This LEDC consists of
$m = 2$ local subcodes. The first local subcode, which is a $[4,3]$ MDS
code, encodes three data symbols ${\mathcal K}_1=\{x_1, x_2, x_3\}$
into four coded symbols ${\mathcal N}_1=\{c_1,\ldots,c_4\}$.
The second local subcode, which is a $[6,4]$ MDS code, encodes
four data symbols ${\mathcal K}_2 = \{x_2, \ldots, x_5\}$ into six coded symbols
${\mathcal N}_2 = \{c_5, \ldots, c_{10}\}$.
The underlying field is $\mathbb{F}_7$, the set of integers modulo $7$.
An LEDC distinguishes itself from the family of locally repairable codes in the following aspects.
\begin{itemize}
\item First, \emph{both} encoding and decoding can be done locally
for each local subcode of an LEDC - to create coded symbols in ${\mathcal N}_i$, only
data symbols from ${\mathcal K}_i$ are involved, and conversely, to decode
data symbols in ${\mathcal K}_i$, only coded symbols in ${\mathcal N}_i$ are involved.
In contrast, the encoding operation in each local group in an LRC
may involve all data symbols and only local repair is required, not
local decodability.
In other words, an LRC does not provide local
encoding and decoding and only guarantees local repair, which can also be done by an LEDC.
\item Secondly, in the context of LEDC, the structure of the local groups (i.e. ${\mathcal K}_i$'s and ${\mathcal N}_i$'s) is given,
whereas in the context of LRC, the size and the repair capability of each group are given as input.
\end{itemize}
LEDC falls under the regime of coding with constraints as each coded symbol must be a function of a given set of data symbols.
However, while LEDC provides local decodability, coding with
constraints does not.
Notice also that the notion of \emph{local decodability} in the setting of LEDC is different
from that in the setting of locally decodable codes (LDC)~\cite{KatzTrevisan2000}. Indeed, while an LEDC guarantees that
each given \emph{subset} of data symbols can be decoded (with probability
one) from a specific subset of coded symbols (possibly under some errors/erasures), an LDC requires that \emph{each} data symbol can be decoded (with
high probability) from a small subset of coded symbols (possibly under
some errors).
A crucial feature of an optimal LEDC is that the repair capability of the global system can be greater than that of its individual local systems.
To illustrate this, we consider the example in Fig.~\ref{fig:toy-example} and regard the local subcodes as codes operating over independent storage systems.
The first storage system utilizes an MDS code of
minimum distance $d_1 = 2$ and hence tolerates \emph{one} node failure.
On the other hand, the second storage system utilizes an MDS code of
minimum distance $d_2 = 3$ and hence tolerates \emph{two} node failures.
If the two codes are co-designed to form an optimal LEDC with optimal distance $d=4$,
then the LEDC provides extra protection for node failures:
any \emph{three} node failures across the two systems can be tolerated.
The improvement in fault tolerance results from the fact that the two systems
share some common data symbols ($x_2$ and $x_3$ in this toy example).
Each system also has some private data symbols ($x_1$ in the first
system and $x_4$ and $x_5$ in the second system).
Under normal conditions, where the number of node failures is within the local fault tolerance, each storage system can work independently
local fault tolerance, each storage system can work independently
to repair the failed nodes. No sharing of private data is required.
However, in a catastrophic scenario where the number
of node failures exceeds the local fault tolerance, the two systems
can cooperate to repair the failed nodes by sharing the private data.
Furthermore, there exist LEDCs whose fault tolerances
exceed the \emph{sum} of the fault tolerances of their local subcodes.
\begin{example}\label{ex:equal}
For example, we can construct an LEDC with the following locality structure:
\begin{align*}
{\mathcal N}_1 &=\{c_1,c_2,\ldots,c_{5}\}, &
{\mathcal K}_1 &=\{x_1,x_2,x_3,x_{4}\}, \\
{\mathcal N}_2 &=\{c_{6},c_{7},\ldots,c_{12}\}, &
{\mathcal K}_2 &=\{x_2,x_3,\ldots,x_{7}\}.
\end{align*}
Here, the local subcodes are $[5,4]$ and $[7,6]$ MDS codes that each tolerate up to one erasure.
We demonstrate later in Example~\ref{ex:2} that the LEDC can achieve a minimum distance of five and
hence is able to tolerate up to four erasures.
\end{example}
Our key results are summarized below.
\begin{itemize}
\item We prove that for any given locality structure (i.e. ${\mathcal K}_i$'s and ${\mathcal N}_i$'s), there always exists an optimal LEDC over any sufficiently large finite field. Moreover, the optimal minimum distance can be determined in polynomial time.
\item When $m = 2$ (i.e. there are only two local subcodes), we provide constructions for two families of optimal LEDC.
\begin{enumerate}[(I)]
\item A straightforward construction using nested MDS codes for the case where $|{\mathcal K}_1\cap {\mathcal K}_2|$ is small.
\item An algebraic construction of LEDC as a (punctured) subcode of a cyclic code of length $q-1$, where $q$ is the size of the finite field,
in the case where $|{\mathcal N}_1|-|{\mathcal K}_1| = |{\mathcal N}_2| - |{\mathcal K}_2|$.
\end{enumerate}
\end{itemize}
Our paper is organized as follows.
Necessary definitions and notation are provided in Section~\ref{sec:pre}.
We prove the existence of optimal LEDC over sufficiently large finite
fields in Section~\ref{sec:large_field}.
Section~\ref{sec:cyclic} is devoted to the construction of optimal LEDC
over small fields when there are two local subcodes.
\section{Preliminaries}
\label{sec:pre}
Let $\mathbb{F}_q$ denote the finite field with $q$ elements.
Let $[n]$ denote the set $\{1,2,\ldots,n\}$.
For a $k \times n$ matrix $\bM$, for $i \in [k]$ and $j \in [n]$, let
$\bM_i$ and $\bM[j]$ denote the row $i$ and the column $j$ of $\bM$, respectively.
We define below standard notions from coding theory (for instance, see \cite{MW_S}).
The \emph{support} of a vector ${\boldsymbol u} = (u_1,\ldots,u_n) \in \mathbb{F}_q^n$ is the set
${\sf supp}({\boldsymbol u}) = \{i \in [n] \colon u_i \neq 0\}$.
The (Hamming) \emph{weight} of a vector ${\boldsymbol u} \in \mathbb{F}_q^n$ is $|{\sf supp}({\boldsymbol u})|$.
The (Hamming) \emph{distance} between two vectors ${\boldsymbol u}$ and ${\boldsymbol v}$ of $\mathbb{F}_q^n$
is defined to be
$
{\mathsf{d}}({\boldsymbol u},{\boldsymbol v}) = |\{i \in [n] \colon u_i \ne v_i\}|.
$
A $k$-dimensional subspace ${\mathscr{C}}$ of $\mathbb{F}_q^n$ is called a linear $[n,k,d]_q$ \emph{(erasure) code} over $\mathbb{F}_q$
if the minimum distance ${\mathsf{d}}({\mathscr{C}})$, i.e. the smallest distance between two distinct vectors in ${\mathscr{C}}$, is equal to $d$.
Sometimes we may use the notation $[n,k,d]$ or just $[n,k]$ for the sake of simplicity. The vectors in ${\mathscr{C}}$ are called \emph{codewords}.
It is known that the minimum weight of a nonzero codeword in a linear code ${\mathscr{C}}$ is equal to its minimum distance ${\mathsf{d}}({\mathscr{C}})$.
The well-known Singleton bound (\cite[Ch. 1]{MW_S}) states that for any $[n,k,d]_q$
code, it holds that $d \leq n - k + 1$.
If the equality is attained, the code is called \emph{maximum distance separable} (MDS).
A \emph{generator matrix} ${\boldsymbol{G}}$ of an $[n,k]_q$ code ${\mathscr{C}}$ is a $k \times n$ matrix whose rows are linearly independent codewords of ${\mathscr{C}}$. Then ${\mathscr{C}} = \{{\boldsymbol{x}} {\boldsymbol{G}} \colon {\boldsymbol{x}} \in \mathbb{F}_q^k\}$.
It is also well known that if ${\mathsf{d}}({\mathscr{C}}) = d$ then ${\mathscr{C}}$ can correct any $d-1$ erasures.
In other words, ${\boldsymbol{x}}$ can be recovered from any $n-d+1$ coordinates of the codeword ${\boldsymbol c} = {\boldsymbol{x}}{\boldsymbol{G}}$.
An $[n,k,d]$ code can be used in a DSS as follows.
A vector ${\boldsymbol{x}}$ of $k$ data symbols can be encoded into $n$ coded symbols ${\boldsymbol c} = {\boldsymbol{x}} {\boldsymbol{G}}$,
each of which is stored at one node in the system.
Then ${\boldsymbol{x}}$ can be recovered from any set of $n-d+1$ nodes.
Hence the DSS can tolerate $d-1$ node failures.
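The following short Python sketch (an illustration only, using a hypothetical $[5,3]$ code over $\mathbb{F}_7$ generated by a Vandermonde matrix) makes this recovery procedure concrete: since the code is MDS with $d=3$, the data vector is recovered from any $n-d+1=3$ surviving coded symbols by solving a linear system.
\begin{verbatim}
# Illustration only: erasure recovery for a hypothetical [5,3] MDS code over
# F_7 whose generator matrix is a Vandermonde matrix at the points 1,...,5.
# Here d = n - k + 1 = 3, so the data vector x is recovered from any
# n - d + 1 = 3 surviving coded symbols by solving a linear system.
q, n, k = 7, 5, 3
G = [[pow(a, i, q) for a in [1, 2, 3, 4, 5]] for i in range(k)]   # k x n

def solve(A, b):
    """Solve A z = b over F_q for a nonsingular square A."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(len(A)):
        p = next(r for r in range(c, len(A)) if M[r][c] % q)
        M[c], M[p] = M[p], M[c]
        iv = pow(M[c][c], q - 2, q)                               # inverse mod q
        M[c] = [x * iv % q for x in M[c]]
        for r in range(len(A)):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % q for x, y in zip(M[r], M[c])]
    return [row[-1] for row in M]

x = [2, 5, 1]                                                     # data symbols
cw = [sum(x[i] * G[i][j] for i in range(k)) % q for j in range(n)]
S = [0, 2, 4]                                                     # 3 surviving nodes
x_rec = solve([[G[i][j] for i in range(k)] for j in S], [cw[j] for j in S])
print(x_rec == x)                                                 # True
\end{verbatim}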
\begin{definition}
\label{def:LEDC}
Let $n \geq k \geq 1$ and $m \geq 1$ be some integers.
Let $K_1, \ldots, K_m$ be $m$ nonempty (possibly overlapping)
subsets of $[k]$ such that $[k] = \cup_{i=1}^m K_i$.
Let $N_1, \ldots, N_m$ be $m$ nonempty non-overlapping subsets of $[n]$ that partition $[n]$, i.e. $N_i \cap N_{i'} = \varnothing$ if $i \neq i'$ and $[n] = \cup_{i=1}^m N_i$.
Suppose that $n_i = |N_i| \geq |K_i| = k_i$ for every $i \in [m]$.
An $\{N_i,K_i\}_{i=1}^m$-LEDC over an alphabet $\Sigma$ is a
mapping
${\mathfrak E}: \Sigma^k \longrightarrow \Sigma^n$
that maps a vector of $k$ data symbols ${\boldsymbol{x}} = (x_1,\ldots,x_k) \in \Sigma^k$ into a vector of $n = \sum_{i=1}^m n_i$ coded symbols
${\boldsymbol c} = (c_1, \ldots, c_n) \in \Sigma^n$
and satisfies the following properties.
\begin{itemize}
\item[(P1)]
The coded symbols in ${\mathcal N}_i \triangleq \{c_j: j \in N_i\}$
only depends on the data symbols in ${\mathcal K}_i \triangleq \{x_j: j \in K_i\}$
$(\forall i \in [m])$.
\item[(P2)]
The set of data symbols ${\mathcal K}_i$ can be
determined from any subset of $k_i$ coded symbols of the set ${\mathcal N}_i$
$(\forall i \in [m])$.
\end{itemize}
\end{definition}
When $\Sigma$ is a finite field $\mathbb{F}_q$ for some prime power $q$ and
the mapping ${\mathfrak E}$ is linear, the corresponding LEDC is called \emph{linear}.
For linear LEDC, the mapping ${\mathfrak E}$ can be represented by
a $k \times n$ matrix ${\boldsymbol{G}}$ over $\mathbb{F}_q$ such that
${\mathfrak E}({\boldsymbol{x}}) = {\boldsymbol{x}} {\boldsymbol{G}}$. Such a matrix ${\boldsymbol{G}}$ is referred to
as a \emph{generator matrix} of the LEDC.
In this work we are only interested in linear LEDC.
The second property (P2) in Definition~\ref{def:LEDC} states that, for each $i \in [m]$, the $n_i$ coded symbols in ${\mathcal N}_i$,
viewed as an encoding of the $k_i$ data symbols in ${\mathcal K}_i$, form a linear $[n_i,k_i]$
MDS code. We refer to these $m$ MDS codes as the \emph{local subcodes} of
the LEDC.
An example of a linear LEDC with two local subcodes over $\mathbb{F}_7$ is given in Fig.~\ref{fig:toy-example}.
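To make Definition~\ref{def:LEDC} concrete, the following Python sketch checks properties (P1) and (P2) directly for a given generator matrix; the $3\times 6$ matrix used here is a small hypothetical LEDC over $\mathbb{F}_7$ (with $K_1=\{1,2\}$, $K_2=\{2,3\}$, $N_1=\{1,2,3\}$, $N_2=\{4,5,6\}$), not the code of Fig.~\ref{fig:toy-example}.
\begin{verbatim}
# Direct check of (P1) and (P2) for a given generator matrix.  The matrix G
# below is a small hypothetical LEDC over F_7 with K_1={1,2}, K_2={2,3},
# N_1={1,2,3}, N_2={4,5,6}; it is not the code of the figure.
from itertools import combinations

q = 7
K = [{1, 2}, {2, 3}]
N = [{1, 2, 3}, {4, 5, 6}]
G = [[1, 1, 1, 0, 0, 0],
     [1, 2, 3, 1, 1, 1],
     [0, 0, 0, 1, 2, 3]]

def det_mod(M):
    """Determinant over F_q by Gaussian elimination."""
    M = [row[:] for row in M]
    d = 1
    for c in range(len(M)):
        p = next((r for r in range(c, len(M)) if M[r][c] % q), None)
        if p is None:
            return 0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d = d * M[c][c] % q
        iv = pow(M[c][c], q - 2, q)
        for r in range(c + 1, len(M)):
            f = M[r][c] * iv % q
            M[r] = [(x - f * y) % q for x, y in zip(M[r], M[c])]
    return d % q

def is_ledc(G, K, N):
    for Ki, Ni in zip(K, N):
        # (P1): a column in N_i may only involve data symbols in K_i
        for j in Ni:
            if any(G[i - 1][j - 1] % q for i in range(1, len(G) + 1) if i not in Ki):
                return False
        # (P2): every k_i x k_i submatrix of the (K_i, N_i) block is invertible
        for cols in combinations(sorted(Ni), len(Ki)):
            Q = [[G[i - 1][j - 1] for j in cols] for i in sorted(Ki)]
            if det_mod(Q) == 0:
                return False
    return True

print(is_ledc(G, K, N))                         # True
\end{verbatim}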
\section{Existence of Optimal Locally Encodable and Decodable Codes
Over Large Fields}
\label{sec:large_field}
We first discuss the closely related concept of coding with constraints.
The upper bound on the minimum distance for a code with coding constraints~\cite{HalbawiThillHassibi15, SongDauYuen15} still applies in the setting of LEDC.
The existence proof for optimal codes over large finite fields~\cite[Lemma 12]{SongDauYuen15}, however, needs to be appropriately modified to take into account the new locality feature of LEDC.
\subsection{Coding With Constraints}
\label{subsec:coding_constraints}
In the setting of linear coding with constraints~\cite{HalbawiHoYaoDuursma2013, HalbawiThillHassibi15, YanSprintsonZelenko2014,
DauSongYuenISIT14, DauSongYuen_JSAC2014,
SongDauYuen15}, the data vector ${\boldsymbol{x}} \in \mathbb{F}_q^k$ is encoded into
the coded vector ${\boldsymbol c} = {\boldsymbol{x}} {\boldsymbol{G}} \in \mathbb{F}_q^n$ for some $k\times n$
matrix ${\boldsymbol{G}}$ in $\mathbb{F}_q$, subjected to the following constraints:
each coded symbol $c_j$ is a function of a given subset of
the data symbols indexed by $C_j \subseteq [k]$.
In a classical code, $C_j \equiv [k]$ for all $j \in [n]$.
For $i \in [k]$ let $R_i \triangleq \{j: i \in C_j\}$.
Then it is obvious that the support of the $i$th \emph{row} of any
valid generator matrix ${\boldsymbol{G}}$ of a code with coding constraints
must be included in $R_i$. Similarly, the support of the $j$th
\emph{column} of ${\boldsymbol{G}}$ must be included in $C_j$.
The following theorem presents an upper bound on the minimum
distance of a code with coding constraints and states that
an optimal code attaining this upper bound does exist over
a sufficiently large finite field.
\begin{theorem}(\cite[Corollary 1]{HalbawiThillHassibi15}, \cite[Lemma 12]{SongDauYuen15})
\label{thm:coding_constraints}
Suppose that ${\mathscr{C}}$ is a linear code that encodes the vector of data symbols ${\boldsymbol{x}} \in \mathbb{F}_q^k$ into the vector of coded symbols ${\boldsymbol c} \in \mathbb{F}_q^n$
under the following constraints: each $c_j$ is a function of a given subset of
the data symbols indexed by $C_j \subseteq [k]$.
Let $R_i \triangleq \{j: i \in C_j\}$. Then
\begin{equation}
\label{eq:d_constr}
{\mathsf{d}}({\mathscr{C}}) \leq d_{\max} \triangleq 1 + \min_{\varnothing \neq I \subseteq [k]}\{|\cup_{i \in I}R_i| - |I|\}.
\end{equation}
Moreover, when $q$ is sufficiently large, there exists a code with minimum distance attaining this bound.
\end{theorem}
\begin{proposition}
\label{pro:d-poly}
The upper bound $d_{\max}$ in Theorem~\ref{thm:coding_constraints} can be found in polynomial time.
\end{proposition}
\begin{proof}
Note that $d_{\max}$ is the largest $d$ satisfying
\begin{equation}
\label{eq:3.1}
|\cup_{i \in I} R_i| \geq d - 1 + |I|, \quad \forall \varnothing
\neq I \subseteq [k].
\end{equation}
From the Singleton bound, $d \leq n - k + 1$.
Hence, we can find $d_{\max}$ by verifying (\ref{eq:3.1}) for each $d$
ranging from $n-k+1$ down to $1$. As long as (\ref{eq:3.1})
can be verified in polynomial time for every $d \in [n-k+1]$,
we can find $d_{\max}$ in polynomial time.
Note that any $d \in [n-k+1]$ can be written as $d = (n-k+1) - \delta$,
for some $0 \leq \delta \leq n - k$. Hence, (\ref{eq:3.1}) can
be rewritten as
\begin{equation}
\label{eq:3.3}
|\cup_{i \in I} R_i| \geq n - k - \delta + |I|, \quad \forall \varnothing
\neq I \subseteq [k].
\end{equation}
In the proof of \cite[Lemma 10]{DauSongYuen_JSAC2014}, we provide
a polynomial time algorithm to verify the so-called MDS Condition
\begin{equation}
\label{eq:3.2}
|\cup_{i \in I} R_i| \geq n - k + |I|, \quad \forall \varnothing
\neq I \subseteq [k],
\end{equation}
where $R_1,\ldots,R_k$ are arbitrary nonempty subsets of $[n]$.
We do so by creating a network with one source and $k$ sinks
and prove that (\ref{eq:3.2}) holds if and only if the capacity of
a minimum cut between the source and any sink is at least $n$.
Using exactly the same proof, we can show that (\ref{eq:3.3})
holds if and only if the capacity of a
minimum cut between the source and any sink of that network is at least $n - \delta$.
As the capacity of such a minimum cut can be computed in polynomial time,
(\ref{eq:3.3}) can be verified in polynomial time and so can (\ref{eq:3.1}).
\end{proof}
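As a sanity check, $d_{\max}$ can also be evaluated directly from \eqref{eq:d_constr} by enumerating all nonempty $I\subseteq[k]$; this brute-force computation is exponential in $k$, unlike the min-cut algorithm above, but it is convenient for small instances. The Python sketch below (an illustration only) applies it to the locality structure of Fig.~\ref{fig:toy-example} and returns $d_{\max}=4$, the optimal distance quoted there.
\begin{verbatim}
# Brute-force evaluation of d_max = 1 + min_I (|union of R_i, i in I| - |I|)
# for the locality structure of the toy example (K_1={1,2,3}, K_2={2,3,4,5},
# N_1={1,...,4}, N_2={5,...,10}).  Exponential in k -- illustration only.
from itertools import combinations

k = 5
K = [{1, 2, 3}, {2, 3, 4, 5}]
N = [{1, 2, 3, 4}, {5, 6, 7, 8, 9, 10}]
R = {i: set().union(*(Nj for Kj, Nj in zip(K, N) if i in Kj))
     for i in range(1, k + 1)}

d_max = 1 + min(len(set().union(*(R[i] for i in I))) - len(I)
                for s in range(1, k + 1)
                for I in combinations(range(1, k + 1), s))
print(d_max)                                   # 4
\end{verbatim}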
\subsection{Optimal Locally Encodable and Decodable Codes Over
Large Fields}
\label{subsec:large_field}
In this subsection we establish that an optimal LEDC
always exists over a sufficiently large field.
\begin{theorem}
\label{thm:large_field}
Suppose that ${\mathscr{C}}$ is a linear {{{$\{N_i,K_i\}_{i=1}^m$-LEDC}}}.
Let $n_i = |N_i|$, $n = \sum_{i=1}^m n_i$, and $k = |\cup_{i = 1}^m K_i|$.
For $i \in [m]$ and $j \in N_i$ let $C_j = K_i$.
For each $i \in [k]$ let $R_i = \{j: i \in C_j\}$.
Then
\begin{equation}
\label{eq:d}
{\mathsf{d}}({\mathscr{C}}) \leq d_{\max} \triangleq 1 + \min_{\varnothing \neq I \subseteq [k]}\{|\cup_{i \in I}R_i| - |I|\}.
\end{equation}
Moreover, when $q$ is sufficiently large, there exists a linear {{{$\{N_i,K_i\}_{i=1}^m$-LEDC}}}
over $\mathbb{F}_q$ with minimum distance attaining this bound.
\end{theorem}
The upper bound (\ref{eq:d}) simply follows from Theorem~\ref{thm:coding_constraints}.
Note that by Proposition~\ref{pro:d-poly}, this upper bound
can be determined in polynomial time.
We present below a proof of
the existence of optimal LEDCs over large fields.
We aim to show that when $q$ is sufficiently large,
there always exists a $k \times n$ matrix ${\boldsymbol{G}}$ over $\mathbb{F}_q$
that generates an {{{$\{N_i,K_i\}_{i=1}^m$-LEDC}}} with minimum distance attaining
(\ref{eq:d}).
Firstly, observe that if ${\boldsymbol{G}}$ is a generator matrix of
an {{{$\{N_i,K_i\}_{i=1}^m$-LEDC}}} then by (P1), ${\sf supp}({\boldsymbol{G}}_i) \subseteq R_i$ and
${\sf supp}({\boldsymbol{G}}[j]) \subseteq C_j$ for all $i \in [k]$ and $j \in [n]$.
Let ${\boldsymbol{G}}^\xi = (g^\xi_{i,j})_{k\times n}$ where
\vspace{-5pt}
\[
g^\xi_{i,j} =
\begin{cases}
\xi_{i,j}, &\text{if } j \in R_i,\\
0, &\text{otherwise,}
\end{cases}
\]
where $\xi_{i,j}$'s are indeterminates.
For any subset $J \subseteq [n]$ of size $n - d_{\max} + 1$
let ${\mathscr{G}}^J = ({\mathscr{V}},{\mathscr{E}})$ be the bipartite graph defined
as follows. The vertex set ${\mathscr{V}}$ can be partitioned into two
parts, namely, the left part ${\sf{L}} = \{\ell_1, \ldots, \ell_k\}$, and the right part
${\sf{R}} = \{{\sf{r}}_j: j \in J\}$. The edge set is \vspace{-5pt}
\[
{\mathscr{E}} = \big\{ (\ell_i, {\sf{r}}_j) \colon i \in [k], \ j \in J \cap R_i\big\}.
\]
Then for every $\varnothing \neq I \subseteq [k]$, the neighbor set of $\{\ell_i: i \in I\}$ satisfies
\[
\begin{split}
|N(\{\ell_i: i \in I\})| &= |\cup_{i \in I} (J\cap R_i)|\\
&= |J \cap \big(\cup_{i \in I} R_i\big)|\\
&= |\big(\cup_{i \in I} R_i\big) \setminus
\big([n] \setminus J \big)|\\
&\geq |\big(\cup_{i \in I} R_i\big)| - (n-|J|)\\
&\geq (d_{\max} + |I| - 1) - (n - (n-d_{\max}+1))\\
&=|I|.
\end{split}
\]
Hence, by Hall's marriage theorem,
there is a matching of size $k$ in ${\mathscr{G}}^J$. In other words,
there exist $k$ distinct indices $j_1, \ldots, j_k$ in $J$ such that
$j_i \in R_i$ for all $i \in [k]$.
For each subset $J \subseteq [n]$ of size $n - d_{\max} + 1$,
we consider the submatrix $\bP^J$ of ${\boldsymbol{G}}^\xi$ that consists
of the columns indexed by $j_1, \ldots, j_k$ as discussed above.
Then the determinant $\det(\bP^J)$, which is a multivariable polynomial
in $\mathbb{F}_q[\ldots, \xi_{i,j},\ldots]$, is not identically zero.
The reason is that in the expression of $\det(\bP^J)$ as a sum
of monomials, there is one monomial that cannot be canceled out,
namely $\prod_{i=1}^k \xi_{i,j_i}$.
Let
\[
{\mathfrak F}^{\text{dist}} = \prod_{\genfrac{}{}{0pt}{}{J \subseteq [n]}{|J| = n-d_{\max}+1}}
\det(\bP^J) \in \mathbb{F}_q[\ldots, \xi_{i,j},\ldots].
\]
Then ${\mathfrak F}^{\text{dist}}$ is not identically zero.
Roughly speaking, the polynomial ${\mathfrak F}^{\text{dist}}$ captures the locality structure $\{N_i,K_i\}_{i=1}^m$ of the LEDC and the desired minimum distance $d_{\max}$.
Regarding the locality for decoding, i.e. every $k_i$
coded symbols in ${\mathcal N}_i$ can be used to recover all data symbols in
${\mathcal K}_i$, let
\[
{\mathfrak F}^{\text{MDS}} = \prod_{i=1}^m
\prod_{\bQ_i} \det(\bQ_i) \in \mathbb{F}_q[\ldots, \xi_{i,j}, \ldots],
\]
where the second product is taken over all $k_i \times k_i$ submatrices
$\bQ_i$ that consist of $k_i$ rows of ${\boldsymbol{G}}^\xi$ indexed by $K_i$
and some $k_i$ columns of ${\boldsymbol{G}}^\xi$ among those indexed by $N_i$.
By definition of ${\boldsymbol{G}}^\xi$, the determinant of each such matrix $\bQ_i$ is a polynomial that is
not identically zero. Therefore, ${\mathfrak F}^{\text{MDS}}$ is not identically zero.
Thus,
\[
{\mathfrak F} = {\mathfrak F}^{\text{dist}} \times {\mathfrak F}^{\text{MDS}} \not\equiv {\boldsymbol 0}.
\]
Therefore, by \cite[Lemma 4]{Ho2006},
for sufficiently large $q$, there exist $g_{i,j} \in \mathbb{F}_q$
such that ${\mathfrak F}(\ldots,g_{i,j}, \ldots) \neq 0$. Let ${\boldsymbol{G}} = (g_{i,j})$
(for $i,j$ where $j \notin R_i$, simply let $g_{i,j} = 0$).
Since ${\mathfrak F}^{\text{dist}}(g_{i,j}) \neq 0$, the linear code generated
by ${\boldsymbol{G}}$ has minimum distance at least $d_{\max}$.
Moreover, since ${\mathfrak F}^{\text{MDS}}(g_{i,j}) \neq 0$,
the coded symbols in each ${\mathcal N}_i$ form an $[n_i,k_i]$ MDS code.
Hence, ${\boldsymbol{G}}$ generates an optimal LEDC.
This completes the proof of Theorem~\ref{thm:large_field}.
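The argument above suggests a simple randomized procedure, which we illustrate with the following Python sketch (ours; the prime $q=10007$ is an arbitrary choice): fill the free positions of ${\boldsymbol{G}}^\xi$ with uniformly random elements of $\mathbb{F}_q$ and verify the local MDS property and the minimum distance. By the Schwartz--Zippel lemma a random trial fails with probability at most $\deg({\mathfrak F})/q$.
\begin{verbatim}
# Randomized illustration of the existence argument for the structure of the
# toy example: random entries over F_q (q = 10007, an arbitrary large prime)
# in the allowed positions, then check the local MDS property and the
# minimum distance (expected to reach d_max = 4 with high probability).
import random
from itertools import combinations

q, k, n = 10007, 5, 10
K = [{1, 2, 3}, {2, 3, 4, 5}]
N = [{1, 2, 3, 4}, {5, 6, 7, 8, 9, 10}]
R = {i: set().union(*(Nj for Kj, Nj in zip(K, N) if i in Kj))
     for i in range(1, k + 1)}
G = [[random.randrange(1, q) if j in R[i] else 0 for j in range(1, n + 1)]
     for i in range(1, k + 1)]

def rank_mod(M):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] % q), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        iv = pow(M[r][c], q - 2, q)
        for i in range(r + 1, len(M)):
            f = M[i][c] * iv % q
            M[i] = [(x - f * y) % q for x, y in zip(M[i], M[r])]
        r += 1
    return r

def min_distance(G):
    # d = n - max{|Z| : the columns of G indexed by Z have a nonzero left kernel}
    for size in range(n, -1, -1):
        for Z in combinations(range(n), size):
            if rank_mod([[row[j] for j in Z] for row in G]) < k:
                return n - size

def local_mds(G):
    # every k_i x k_i submatrix of the block (K_i, N_i) must be invertible
    for Ki, Ni in zip(K, N):
        for cols in combinations(sorted(Ni), len(Ki)):
            sub = [[G[i - 1][j - 1] for j in cols] for i in sorted(Ki)]
            if rank_mod(sub) < len(Ki):
                return False
    return True

print(local_mds(G), min_distance(G))           # typically: True 4
\end{verbatim}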
\section{Optimal Locally Encodable and Decodable Codes With Two Local Subcodes}
\label{sec:cyclic}
In this section we restrict ourselves to the case $m=2$, i.e.
when the LEDC has exactly two local subcodes. We provide two
constructions of optimal LEDCs over fields of sizes linear in $n$.
Without loss of generality, we consider the following locality structure:
\begin{align*}
K_1&=\{1,2,\ldots,k_1\},\ K_2 =\{k_1-t+1,k_1-t+2,\ldots,k\},\\
N_1&=\{1,2,\ldots,n_1\},\ N_2 =\{n_1+1,n_1+2,\ldots,n\}.
\end{align*}
Here $t= |K_1 \cap K_2|$ is the number of common data symbols
shared by the two local subcodes.
For brevity, we denote an LEDC with this locality structure as an $[n_1,k_1;n_2,k_2;t]$-LEDC.
When $t = k$, i.e. $K_1 = K_2 = [k]$, there is no locality in
encoding, as both subcodes use all of the data symbols in their
encoding process. An optimal LEDC is simply an $[n,k]$ MDS code
with minimum distance $d = n - k + 1$.
In the remainder of this section, we always assume that $t < k$.
The following is a straightforward corollary of Theorem~\ref{thm:large_field}.
\begin{corollary}
Suppose that $t = |K_1 \cap K_2| < k$.
Additionally, assume that either $t < \min\{k_1,k_2\}$ or $n_1-k_1 = n_2 - k_2$.
If there exists a linear $[n_1,k_1;n_2,k_2;t]$-LEDC with minimum
distance $d$ then
\begin{equation}
\label{eq:d2}
d \leq 1 + t + \min\{n_1 - k_1, n_2 - k_2\}.
\end{equation}
\end{corollary}
The upper bound \eqref{eq:d2} reflects the fact that the more common data
the local subcodes share, the larger the global minimum distance of the LEDC can be. In the extreme case $t = 0$, i.e. when the two subcodes share no common data symbols, the global minimum distance is equal to the smaller of the two local minimum distances.
Hence, the global LEDC offers no further protection against erasures to each local subcode. When $t$ is sufficiently large, however, the
global LEDC can offer a considerable amount of additional protection
against erasures.
We henceforth assume, without loss of generality, that $n_1 -k_1
\leq n_2 - k_2$. Then \eqref{eq:d2} can be rewritten as
\begin{equation}
\label{eq:d2_reduced}
d \leq 1 + t + n_1 - k_1.
\end{equation}
We aim to construct optimal LEDCs whose minimum distances meet the upper bound \eqref{eq:d2_reduced}.
In what follows, we consider a linear $[n_1,k_1;n_2,k_2;t]$-LEDC ${\mathscr{C}}$ and
describe ${\mathscr{C}}$ via its generator matrix ${\boldsymbol{G}}$.
From the local encoding property, we may write ${\boldsymbol{G}}$ as
\begin{equation}
\label{mat:G}
\begin{blockarray}{ccl}
\ n_1 & \ n_2 & \\
\begin{block}{@{\hspace*{2pt}}(c|c@{\hspace*{2pt}})l}
\ \boldsymbol{U} \quad & {\boldsymbol 0} \ & \quad k_1-t \\
\cline{1-2}
\ {\boldsymbol A} & {\boldsymbol B} \ & \quad t \\
\cline{1-2}
\ {\boldsymbol 0} & \boldsymbol{V} \ & \quad k_2-t \\
\end{block}
\end{blockarray}
\vspace{-5pt}
\end{equation}
\noindent where $\boldsymbol{U}$, ${\boldsymbol A}$, ${\boldsymbol B}$, and $\boldsymbol{V}$ are $(k_1-t)\times n_1$,
$t\times n_1$, $t\times n_2$, and $(k_2-t)\times n_2$ matrices, respectively.
From the properties of an LEDC, the linear code ${\mathscr{A}}$ with generator matrix
${\boldsymbol{G}}_{\mathscr{A}}\triangleq\left(\begin{array}{c} \boldsymbol{U} \\\hline {\boldsymbol A} \end{array}\right)$
is an $[n_1,k_1]$ MDS code and
the linear code ${\mathscr{B}}$ with generator matrix
${\boldsymbol{G}}_{\mathscr{B}}\triangleq\left(\begin{array}{c} {\boldsymbol B} \\\hline \boldsymbol{V} \end{array}\right)$
is an $[n_2,k_2]$ MDS code.
\subsection{Optimal LEDCs from Nested MDS codes}
Our first construction uses pairs of nested MDS codes.
More formally, for $1\leq k'<k\leq n$, the pair $({\mathscr{C}}',{\mathscr{C}})$ is a pair of nested $(k',k;n)$ MDS codes
if ${\mathscr{C}}'\subset{\mathscr{C}}$ and ${\mathscr{C}}'$ and ${\mathscr{C}}$ are $[n,k']$ and $[n,k]$ MDS codes, respectively.
\begin{theorem}[Construction I]
\label{thm:constructionI}
Suppose $n_2-k_2+1\geq t$ and
there exist pairs of nested $(k_1-t,k_1;n_1)$ and $(k_2-t,k_2;n_2)$ MDS codes.
Then there exists an optimal $[n_1,k_1;n_2,k_2;t]$-LEDC ${\mathscr{C}}$ whose minimum distance is $n_1-k_1+t+1$.
\end{theorem}
\begin{proof}
Let $({\mathscr{A}}',{\mathscr{A}})$ be the pair of nested $(k_1-t,k_1;n_1)$ MDS codes with generator matrices $\boldsymbol{U}$ and $\left(\begin{array}{c} \boldsymbol{U} \\\hline {\boldsymbol A} \end{array}\right)$.
Similarly, let $({\mathscr{B}}',{\mathscr{B}})$ be the pair of nested $(k_2-t,k_2;n_2)$ MDS codes with generator matrices $\boldsymbol{V}$ and $\left(\begin{array}{c} {\boldsymbol B} \\\hline \boldsymbol{V} \end{array}\right)$.
Then we consider a typical nonzero codeword ${\boldsymbol c}=({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3}){\boldsymbol{G}}$, where ${\boldsymbol{x}}_1=(x_i:i\in K_1\setminus K_2)$, ${\boldsymbol{x}}_2=(x_i:i\in K_1\cap K_2)$ and
${\boldsymbol{x}}_3=(x_i:i\in K_2\setminus K_1)$. Hence, ${\boldsymbol c}=({\boldsymbol{x}}_1\boldsymbol{U}+{\boldsymbol{x}}_2{\boldsymbol A},{\boldsymbol{x}}_2{\boldsymbol B} +{\boldsymbol{x}}_3\boldsymbol{V})$. We have the following cases.
\begin{enumerate}[(i)]
\item When ${\boldsymbol{x}}_2={\boldsymbol 0}$ and ${\boldsymbol{x}}_1\ne {\boldsymbol 0}$, the first $n_1$ coordinates ${\boldsymbol{x}}_1\boldsymbol{U}$ of ${\boldsymbol c}$ form a nonzero codeword of ${\mathscr{A}}'$ and hence have weight at least $n_1-k_1+t+1$.
\item When ${\boldsymbol{x}}_2={\boldsymbol 0}$ and ${\boldsymbol{x}}_3\ne {\boldsymbol 0}$, the last $n_2$ coordinates ${\boldsymbol{x}}_3\boldsymbol{V}$ of ${\boldsymbol c}$ form a nonzero codeword of ${\mathscr{B}}'$ and hence have weight at least $n_2-k_2+t+1 \geq n_1 - k_1 + t + 1$.
\item When ${\boldsymbol{x}}_2\ne {\boldsymbol 0}$, both halves of ${\boldsymbol c}$ are nonzero codewords of ${\mathscr{A}}$ and ${\mathscr{B}}$, respectively. Hence, ${\boldsymbol c}$ has weight at least $(n_1-k_1+1)+(n_2-k_2+1) \geq n_1 - k_1 + t + 1$, because we assume $n_2-k_2+1\geq t$.
\end{enumerate}
Therefore, the minimum distance of ${\mathscr{C}}$ is $n_1-k_1+t+1$, achieving
the upper bound in \eqref{eq:d2_reduced}.
\end{proof}
Pairs of nested MDS codes were studied by Ezerman~{{\emph{et al.}}}~\cite{Ezerman.etal:2013} in the context of quantum codes.
Specifically, pairs of nested MDS codes of all possible parameters
(assuming the MDS conjecture) were constructed there and,
in general, a pair of nested $(k',k;n)$ MDS codes exists whenever $n\leq q$.
Therefore, $\max\{n_1,n_2\}\leq q$ suffices for Construction I.
The LEDC in Fig.~\ref{fig:toy-example} is, in fact, constructed
using two pairs of nested codes over $\mathbb{F}_7$, generated by
Vandermonde matrices. Note that a Cauchy matrix also generates a pair of nested MDS codes, but requires a larger field size than a Vandermonde matrix.
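The following Python sketch gives one possible instantiation of Construction I for the structure of Fig.~\ref{fig:toy-example} (nested Reed--Solomon codes obtained by splitting Vandermonde matrices over $\mathbb{F}_7$; the exact matrices behind the figure may differ) and verifies the resulting minimum distance by brute force.
\begin{verbatim}
# Construction I for the toy structure (n1,k1,n2,k2,t) = (4,3,6,4,2) over F_7,
# using nested Reed-Solomon codes: split Vandermonde matrices into the blocks
# U, A and B, V of the generator matrix, then brute-force the distance.
from itertools import product

q = 7
n1, k1, n2, k2, t = 4, 3, 6, 4, 2
V1 = [[pow(a, i, q) for a in range(1, n1 + 1)] for i in range(k1)]   # 3 x 4
V2 = [[pow(a, i, q) for a in range(1, n2 + 1)] for i in range(k2)]   # 4 x 6
U, A = V1[:k1 - t], V1[k1 - t:]        # nested (k1-t, k1; n1) MDS pair
B, V = V2[:t], V2[t:]                  # nested (k2-t, k2; n2) MDS pair
G = ([u + [0] * n2 for u in U] +       # block structure of the LEDC
     [a + b for a, b in zip(A, B)] +
     [[0] * n1 + v for v in V])

k, n = len(G), n1 + n2
d = min(sum(1 for j in range(n)
            if sum(x[i] * G[i][j] for i in range(k)) % q)
        for x in product(range(q), repeat=k) if any(x))
print(d)    # 4 = 1 + t + min(n1-k1, n2-k2), the optimal value
\end{verbatim}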
When the assumption in Theorem~\ref{thm:constructionI} does not hold,
i.e. $t > 1+\max\{n_1-k_1, n_2-k_2\}$, Construction I may fail to produce
an optimal LEDC. We demonstrate this fact below.
\begin{example}
\label{ex:counter_ex}
Let $K_1 = \{1,2,3,4\}$, $K_2 = \{2,3,4,5\}$, and $n_1 = n_2 = 5$.
In this case, $t = |K_1 \cap K_2| = 3 > 1+\max\{n_1-k_1,n_2-k_2\}$.
Hence, the assumption of Theorem~\ref{thm:constructionI} is violated.
In fact, using two Vandermonde matrices over $\mathbb{F}_7$ for the local subcodes
results in the following generator matrix:
\[
{\boldsymbol{G}} =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\
1 & 2 & 3 & 4 & 5 & 1 & 1 & 1 & 1 & 1\\
1 & 4 & 2 & 2 & 3 & 1 & 2 & 3 & 4 & 5\\
1 & 1 & 6 & 1 & 6 & 1 & 4 & 2 & 2 & 3\\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 6 & 1 & 6
\end{pmatrix}.
\]
This matrix generates an LEDC of distance $d = 4$, which does not reach
the upper bound \eqref{eq:d2_reduced}. Hence, it is not optimal.
\end{example}
\subsection{Algebraic Construction of Optimal LEDCs}
We aim to construct optimal $[n_1,k_1; n_2, k_2;t]$-LEDCs whenever $n_1+n_2 \leq q-1$ and $r\triangleq n_1-k_1=n_2-k_2$.
Such LEDCs have minimum distance achieving the bound \eqref{eq:d2_reduced}, i.e. $d = r + t + 1$.
The generator matrices ${\boldsymbol{G}}$ of such optimal LEDCs
have the block form given in \eqref{mat:G}.
In what follows, we find appropriate field elements $\{u_j,v_j: 0\leq j\leq r+t\}$, $\{a^{(\ell)}_j: 1\leq \ell\leq t, 0\leq j\leq r+t-\ell \}$, $\{b^{(\ell)}_j: 1\leq \ell\leq t, 0\leq j\leq r+ \ell-1\}$
and construct four component matrices of ${\boldsymbol{G}}$, namely $\boldsymbol{U}$, $\boldsymbol{V}$, ${\boldsymbol A}$, and ${\boldsymbol B}$
in the following forms.
{\small
\[
\begin{split}
\boldsymbol{U} &=
\left(\begin{array}{ccc ccc ccc ccc}
u_0 & u_1 & \cdots & \cdots & u_{r+t} & 0 & \cdots & 0 \\
0 & u_0 & u_1 & \cdots & u_{r+t-1} & u_{r+t} & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & u_0 & \cdots & \cdots & u_{r+t-1} & u_{r+t}
\end{array}\right),\\
{\boldsymbol A} &=
\left(\begin{array}{cc cccc cccccc}
0 & 0 & \cdots & 0 & a_0^{(1)} & a_1^{(1)} & \cdots & \cdots & \cdots & a_{r+t-1}^{(1)} \\
0 & 0 & \cdots & 0 & 0 & a_0^{(2)} & \cdots & \cdots & \cdots & a_{r+t-2}^{(2)} \\
\vdots & \vdots & \ddots & \vdots & \vdots& \vdots & \ddots& \ddots& \ddots& \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & a_0^{(t)} & \cdots & a_{r}^{(t)} \\
\end{array}\right),\\
{\boldsymbol B} &=
\left(\begin{array}{cc cccc cccccc}
0 & 0 & \cdots & 0 & 0 & \cdots & 0 & b_0^{(1)} & \cdots & b_{r}^{(1)}\\
0 & 0 & \cdots & 0 & 0 & \cdots & b_0^{(2)} & b_1^{(2)} & \cdots & b_{r+1}^{(2)}\\
\vdots & \vdots & \reflectbox{$\ddots$} & \vdots & \vdots& \reflectbox{$\ddots$} & \reflectbox{$\ddots$}& \reflectbox{$\ddots$}& \reflectbox{$\ddots$}& \vdots \\
0 & 0 & \cdots & 0 & b_0^{(t)} & \cdots & \cdots & \cdots & \cdots & b_{r+t-1}^{(t)}\\
\end{array}\right),\\
\boldsymbol{V} &=
\left(\begin{array}{ccc ccc ccc ccc}
0 & 0 & \cdots & v_0 & \cdots & \cdots & v_{r+t-1} & v_{r+t}\\
\vdots & \vdots & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \vdots \\
0 & v_0 & v_1 & \cdots & v_{r+t-1} & v_{r+t} & \cdots & 0 \\
v_0 & v_1 & \cdots & \cdots & v_{r+t} & 0 & \cdots & 0 \\
\end{array}\right).
\end{split}
\]
}
In this subsection, we demonstrate the existence of field elements
$u_j$'s, $v_j$'s, $a_j^{(\ell)}$, and $b_j^{(\ell)}$ such that the above construction yields a generator matrix ${\boldsymbol{G}}$ of an optimal LEDC.
In particular, we prove the following theorem.
\begin{theorem}[Construction II]
\label{thm:small_field}
Suppose $n_1-k_1=n_2-k_2=r$ and $n_1+n_2\leq q-1$.
Then there exists an optimal $[n_1,k_1;n_2,k_2;t]$-LEDC ${\mathscr{C}}$ whose minimum distance is $r+t+1$.
\end{theorem}
To derive the distance properties, our strategy is to show that
the linear codes ${\mathscr{A}}$, ${\mathscr{B}}$, and ${\mathscr{C}}$ are \emph{subcodes} of certain cyclic codes~\cite[Chap. 7]{MW_S}.
To this end, suppose that $q-1 \geq n_1+n_2$ and identify a vector
$(c_0,c_1,\ldots,c_{q-2})\in {\mathbb F}_q^{q-1}$ with the polynomial $\sum_{j=0}^{q-2}c_j x^j$
in the quotient ring ${\mathbb F}_q[x]/\langle x^{q-1}-1\rangle$.
Note, however, that while the cyclic codes that we consider are of length $q-1$ over $\mathbb{F}_q$, our codes ${\mathscr{A}}$, ${\mathscr{B}}$, and ${\mathscr{C}}$ may have shorter lengths, namely $n_1$, $n_2$, and $n$, respectively.
However, we can regard ${\mathscr{A}}$, ${\mathscr{B}}$, and ${\mathscr{C}}$ as codes of length $q-1$ by simply
adding a sufficient number of zero coordinates to the right of each codeword.
Doing so obviously does not affect the polynomial representation of each codeword.
Thus, from now on we can treat ${\mathscr{A}}$, ${\mathscr{B}}$, and ${\mathscr{C}}$ as subspaces of $\mathbb{F}_q^{q-1}$.
Under the mapping of vectors to polynomials, let $(u_0,u_1,\ldots, u_{r+t})$ and $(v_0,v_1,\ldots, v_{r+t})$ correspond to $u(x)$ and $v(x)$, which are polynomials of degree at most $r+t$, respectively.
For $1\leq \ell\leq t$, let $\left(a_0^{(\ell)}, a_1^{(\ell)}, \ldots, a_{r+t-\ell}^{(\ell)}\right)$ and
$\left(b_0^{(\ell)}, b_1^{(\ell)}, \ldots, b_{r+\ell-1}^{(\ell)}\right)$ correspond to
$a^{(\ell)}(x)$ and $b^{(\ell)}(x)$, which are polynomials of degrees at most $r+t-\ell$ and $r+\ell-1$, respectively.
Furthermore, the codewords in the linear codes ${\mathscr{C}}$, ${\mathscr{A}}$, and ${\mathscr{B}}$,
in their polynomial representations, can be obtained via linear combinations of
certain polynomials described by $u(x)$, $v(x)$, $a^{(\ell)}(x)$, and $b^{(\ell)}(x)$.
Specifically,
\begin{enumerate}
\item ${\mathscr{C}}$ is the vector subspace of $\mathbb{F}_q^{q-1}$ spanned by the polynomials
$x^iu(x)$ for $0\leq i\leq k_1-t-1$,
$x^jv(x)$ for $n_1\leq j\leq n_1+k_2-t-1$, and
$c^{(\ell)}(x) \stackrel{\mbox{\tiny $\triangle$}}{=} x^{k_1-t+\ell-1}a^{(\ell)}(x)+x^{n_1+k_2-\ell}b^{(\ell)}(x)$ for $1\leq \ell\leq t$,
\item ${\mathscr{A}}$ is the vector subspace of $\mathbb{F}_q^{q-1}$ spanned by the polynomials
$x^iu(x)$ for $0\leq i\leq k_1-t-1$ and
$x^{k_1-t+\ell-1}a^{(\ell)}(x)$ for $1\leq \ell\leq t$,
\item ${\mathscr{B}}$ is the vector subspace of $\mathbb{F}_q^{q-1}$ spanned by the polynomials
$x^jv(x)$ for $n_1\leq j\leq n_1+k_2-t-1$ and
$x^{n_1+k_2-\ell}b^{(\ell)}(x)$ for $1\leq \ell\leq t$.
\end{enumerate}
Next, we provide sufficient conditions for
the polynomials $u(x)$, $v(x)$, $a^{(\ell)}(x)$, and $b^{(\ell)}(x)$
so that the codes ${\mathscr{C}}$, ${\mathscr{A}}$, and ${\mathscr{B}}$ have the desired dimension and distance properties.
\begin{proposition}\label{prop:poly}
Let $\omega$ be a primitive element in ${\mathbb F}_q$.
Suppose that the polynomials $u(x)$, $v(x)$, $a^{(\ell)}(x)$, and $b^{(\ell)}(x)$, $1\leq \ell\leq t$,
satisfy the following conditions.
\begin{enumerate}[(D1)]
\item $u_0$, $v_0$, $a^{(\ell)}_0$ and $b^{(\ell)}_0$ are nonzero for $1\leq \ell\leq t$.
\item $u(\omega^j)=v(\omega^j)=0$ for $0\leq j\leq r+t-1$.
\item For $1\leq \ell\leq t$, $a^{(\ell)}(\omega^j)=b^{(\ell)}(\omega^j)=0$ for $0\leq j\leq r-1$.
\item For $1\leq \ell\leq t$, $c^{(\ell)}(\omega^j)=0$ for $0\leq j\leq r+t-1$,
where
$c^{(\ell)}(x)=x^{k_1-t+\ell-1}a^{(\ell)}(x)+x^{n_1+k_2-\ell}b^{(\ell)}(x)$.
\end{enumerate}
Then the LEDC ${\mathscr{C}}$ defined as above is an optimal $[n_1,k_1;n_2,k_2;t]$-LEDC of minimum distance $r+t+1$.
\end{proposition}
To prove this proposition, we employ the well-known BCH bound on the minimum distance of a cyclic code.
We first recall some necessary notation from~\cite[Ch. 7]{MW_S}.
Let ${\mathcal C}$ be a cyclic code of length $q-1$ over $\mathbb{F}_q$.
An element $z \in \mathbb{F}_q$ is called a \emph{zero} of ${\mathcal C}$
if $c(z) = 0$ for every codeword $c(x) \in {\mathcal C}$.
Let $Z$ be the set of all zeros of ${\mathcal C}$.
The polynomial $g(x) \triangleq \prod_{\alpha \in Z} (x-\alpha)$ is called the \emph{generator polynomial} of ${\mathcal C}$.
Then $c(x) \in {\mathcal C}$ if and only if $g(x) | c(x)$.
\begin{theorem}[BCH bound] Let $\omega$ be a primitive element of ${\mathbb F}_q$ and let $r\geq 1$ be an integer.
The cyclic code with the generator polynomial $(x-1)(x-\omega)\cdots(x-\omega^{r-1})$
has minimum distance at least $r+1$.
\end{theorem}
\begin{proof}[Proof of Proposition \ref{prop:poly}]
First observe that from condition (D1), the matrices ${\boldsymbol{G}}$ given in \eqref{mat:G},
${\boldsymbol{G}}_{\mathscr{A}} = \left(\begin{array}{c} \boldsymbol{U} \\\hline {\boldsymbol A} \end{array}\right)$,
and ${\boldsymbol{G}}_{\mathscr{B}} = \left(\begin{array}{c} {\boldsymbol B} \\\hline \boldsymbol{V} \end{array}\right)$
all have full rank. Therefore, the corresponding linear codes ${\mathscr{C}}$, ${\mathscr{A}}$, and ${\mathscr{B}}$ have the desired dimensions.
For the distance properties, we make the following two claims.
Recall that we may regard ${\mathscr{C}}$, ${\mathscr{A}}$, and ${\mathscr{B}}$ as codes of length $q-1$ by
adding a sufficient number of zero coordinates to the right of each codeword.
\begin{enumerate}[(C1)]
\item ${\mathscr{C}}$ is a subcode of the cyclic code with generator polynomial
$g_1(x) = (x-1)(x-\omega^{})\cdots(x-\omega^{r+t-1})$.
\item ${\mathscr{A}}$ and ${\mathscr{B}}$ are subcodes of the cyclic code with generator polynomial
$g_2(x) = (x-1)(x-\omega^{})\cdots(x-\omega^{r-1})$.
\end{enumerate}
Then by the BCH bound, the codes ${\mathscr{C}}$, ${\mathscr{A}}$, and ${\mathscr{B}}$ have minimum distance at least $r+t+1$, $r+1$ and $r+1$, respectively; combined with the upper bound \eqref{eq:d2_reduced} and the Singleton bound, this proves the proposition.
Hence, it suffices to prove the two claims.
From conditions (D2) and (D4), we deduce that
\[
u(\omega^j)=v(\omega^j)=c^{(1)}(\omega^j)=\cdots=c^{(t)}(\omega^j)=0,
\ 0\leq j\leq r+t-1.
\]
Therefore, $g_1(x)=(x-1)(x-\omega^{})\cdots(x-\omega^{r+t-1})$ divides all polynomials in the basis of ${\mathscr{C}}$. Hence, (C1) follows.
It can also be verified that $g_2(x)= (x-1)(x-\omega^{})\cdots(x-\omega^{r-1})$ divides all polynomials in the bases
of ${\mathscr{A}}$ and ${\mathscr{B}}$, respectively. Hence, (C2) follows.
\end{proof}
\vskip 5pt
The following proposition shows that the polynomials satisfying conditions (D1)-(D4)
in Proposition~\ref{prop:poly} do exist. Hence, Theorem~\ref{thm:small_field} follows.
\begin{proposition}\label{prop:equal}
There exist polynomials $u(x)$, $v(x)$, $a^{(\ell)}(x)$, and $b^{(\ell)}(x)$ for $1\leq \ell \leq t$
that satisfy conditions (D1)-(D4).
\end{proposition}
To prove this proposition, we consider the following lemma.
\begin{lemma}
\label{lem:equal}
Fix $1\leq \ell\leq t$, $r\geq 1$ and $t-\ell<T<q-\ell$.
Then there exist polynomials
$a(x)$ with degree at most $t-\ell$ and
$b(x)$ with degree at most $\ell-1$,
both having nonzero constant terms, such that
\begin{equation}\label{eq:equal}
a(\omega^j)+\omega^{jT}b(\omega^j)=0 \mbox{ for }r\leq j\leq r+t-1.
\end{equation}
\end{lemma}
\begin{proof}
Let $a(x)=\sum_{j=0}^{t-\ell}a_jx^j$ and $b(x)=\sum_{j=0}^{\ell-1}b_jx^j$.
Then \eqref{eq:equal} is equivalent to the linear system of equations
$\bM(a_0 , a_1 , \ldots , a_{t-\ell}, b_0 , b_1 , \ldots , b_{\ell-1})^T={\boldsymbol 0}$,
where $\bM$ is the $t\times (t+1)$ matrix:
{\small
\begin{equation*}\label{linsys:equal}
\left(\begin{array}{cccc cccc}
1 & \cdots & (\omega^r)^{t-\ell} & (\omega^r)^T & \cdots & (\omega^r)^{T+\ell-1}\\
1 & \cdots & (\omega^{r+1})^{t-\ell} & (\omega^{r+1})^T & \cdots & (\omega^{r+1})^{T+\ell-1}\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
1 & \cdots & (\omega^{r+t-1})^{t-\ell} & (\omega^{r+t-1})^T & \cdots & (\omega^{r+t-1})^{T+\ell-1}
\end{array}\right).
\end{equation*}
}
Then $\bM$ can be rewritten as
{\small
\begin{equation*}
\left(\begin{array}{cccc cccc}
1 & \cdots & (\omega^{t-\ell})^{r} & (\omega^T)^{r} & \cdots & (\omega^{T+\ell-1})^{r}\\
1 & \cdots & (\omega^{t-\ell})^{r+1} & (\omega^{T})^{r+1} & \cdots & (\omega^{T+\ell-1})^{r+1}\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
1 & \cdots & (\omega^{t-\ell})^{r+t-1} & (\omega^{T})^{r+t-1} & \cdots & (\omega^{T+\ell-1})^{r+t-1}
\end{array}\right).
\end{equation*}
}
Since $t-\ell<T$ and $T+\ell-1 < q - 1$, the values $1$, \ldots, $\omega^{t-\ell}$, $\omega^{T}$, \ldots, $\omega^{T+\ell-1}$ are distinct nonzero elements in ${\mathbb F}_q$. Therefore $\bM$ is the generator matrix of a $[t+1,t,2]$ MDS code.
Hence, $(a_0,a_1,\ldots,a_{t-\ell}, b_0,b_1,\ldots,b_{\ell-1})$ belongs to its dual code.
Since the dual code is a $[t+1,1,t+1]$ MDS code, which has minimum distance $t+1$,
we can choose $(a_0,a_1,\ldots,a_{t-\ell}, b_0,b_1,\ldots,b_{\ell-1})$ such that $a_0,a_1,\ldots,a_{t-\ell}, b_0,b_1,\ldots,b_{\ell-1}$ are nonzero.
In particular, $a_0$ and $b_0$ are nonzero. Thus, $a(x)$ and $b(x)$ are the desired polynomials.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:equal}]
Clearly, setting $u(x)=v(x)=(x-1)(x-\omega)\cdots(x-\omega^{r+t-1})$ satisfies conditions (D1) and (D2).
For $1\leq \ell\leq t$, let $a_{*}^{(\ell)}(x)$ and $b_{*}^{(\ell)}(x)$ be the polynomials obtained from Lemma \ref{lem:equal}
by setting $T=(n_1+k_2-\ell)-(k_1-t+\ell-1)$.
(It is straightforward to verify that such $T$ satisfies $t-\ell < T < q - \ell$ for all $1 \leq \ell \leq t$.)
Then we set
\begin{align*}
a^{(\ell)}(x) &=(x-1)(x-\omega)\cdots(x-\omega^{r-1})a_{*}^{(\ell)}(x),\\
b^{(\ell)}(x) &=(x-1)(x-\omega)\cdots(x-\omega^{r-1})b_{*}^{(\ell)}(x).
\end{align*}
We can then verify that conditions (D1), (D3) and (D4) are satisfied and
this completes the proof.
\end{proof}
\begin{remark}
For $1\leq \ell \leq t$, observe that the polynomials $a_*^{(\ell)}(x)$ and $b_*^{(\ell)}(x)$ are
found by solving $t$ linear equations in $t+1$ variables in Lemma \ref{lem:equal}.
Therefore, the codes ${\mathscr{C}}$, ${\mathscr{A}}$ and ${\mathscr{B}}$ can be constructed in time polynomial in $q=O(n_1+n_2)$ and $t$.
\end{remark}
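To make the preceding remark concrete, the following sketch (for prime $q$ only, with arithmetic modulo $q$; the function and variable names are ours) builds the $t\times(t+1)$ matrix from the proof of Lemma~\ref{lem:equal} and returns one nonzero solution of the homogeneous system by Gaussian elimination. Since the solution space is one-dimensional and forms a $[t+1,1,t+1]$ MDS code, any nonzero solution automatically has all entries nonzero.
\begin{verbatim}
# Solve M (a_0,...,a_{t-l}, b_0,...,b_{l-1})^T = 0 over F_q (q prime).
def lemma_solution(q, w, r, t, ell, T):
    cols = list(range(t - ell + 1)) + [T + i for i in range(ell)]
    M = [[pow(w, (r + j) * e, q) for e in cols] for j in range(t)]
    pivots, row = [], 0
    for col in range(t + 1):            # reduced row echelon form mod q
        piv = next((i for i in range(row, t) if M[i][col]), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], q - 2, q)
        M[row] = [x * inv % q for x in M[row]]
        for i in range(t):
            if i != row and M[i][col]:
                f = M[i][col]
                M[i] = [(x - f * y) % q for x, y in zip(M[i], M[row])]
        pivots.append(col)
        row += 1
    free = next(c for c in range(t + 1) if c not in pivots)
    z = [0] * (t + 1)
    z[free] = 1
    for rw, col in enumerate(pivots):
        z[col] = (-M[rw][free]) % q
    return z[: t - ell + 1], z[t - ell + 1:]   # coefficients of a(x), b(x)

# Parameters of the example below (q = 13, w = 2, r = 1, t = 3),
# here with ell = 3 and T = 11 - 2*ell = 5:
print(lemma_solution(13, 2, 1, 3, 3, 5))       # ([1], [3, 4, 1])
\end{verbatim}
Multiplying both output polynomials by $(x-1)$ reproduces $a^{(3)}(x)=12+x$ and $b^{(3)}(x)=10+12x+3x^2+x^3$ listed in Example~\ref{ex:2} below.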
\begin{example}[Example~\ref{ex:equal} continued]
\label{ex:2}
We provide the optimal LEDC in Example~\ref{ex:equal} using Construction II.
In this case, $K_1 = \{1,2,3,4\}$, $K_2 = \{2,3,4,5,6,7\}$. Hence, $k_1 = 4$,
$k_2 = 6$, $k = 7$, and $t = 3$.
Moreover, $n_1 = 5$, $n_2 = 7$, and hence, $r = n_1 - k_1 = n_2 - k_2 = 1$.
Let $q = 13$ and $\omega=2$ be the primitive element of ${\mathbb F}_{13}$.
Since $r+t-1 = 3$, we set
\[
\begin{split}
u(x)=v(x)&=(x-1)(x-2)(x-2^2)(x-2^3)\\
&= 12+10x+5x^2+11x^3+x^4.
\end{split}
\]
Repeated applications of Lemma~\ref{lem:equal} yield the polynomials
{\small
\begin{align*}
a^{(1)}(x)&= 12+2x+5x^2+7x^3, &
b^{(1)}(x)&= 8+5x, \\
a^{(2)}(x)&= 12+7x+7x^2, &
b^{(2)}(x)&= 7+2x+4x^2, \\
a^{(3)}(x)&=12+x, &
b^{(3)}(x)&=10+12x+3x^2+x^3.
\end{align*}
}
Hence, the optimal $[5,4;7,6;3]$-LEDC is given by the generator matrix
\begin{equation*}
\left(\begin{array}{ccccc| ccccccc}
12 & 10 & 5 & 11 & 1 &
0 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline
0 & 12 & 2 & 5 & 7 &
0 & 0 & 0 & 0 & 0 & 8 & 5\\
0 & 0 & 12 & 7 & 7 &
0 & 0 & 0 & 0 & 7 & 2 & 4\\
0 & 0 & 0 & 12 & 1 &
0 & 0 & 0 & 10 & 12 & 3 & 1\\ \hline
0 & 0 & 0 & 0 & 0 &
0 & 0 & 12 & 10 & 5 & 11 & 1 \\
0 & 0 & 0 & 0 & 0 &
0 & 12 & 10 & 5 & 11 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 &
12 & 10 & 5 & 11 & 1 & 0 & 0
\end{array}\right).
\end{equation*}
\end{example}
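As an independent numerical sanity check of the example (an illustration rather than part of the construction), the following script confirms that $u(x)$ vanishes at $\omega^0,\ldots,\omega^3$, that every $a^{(\ell)}(x)$ and $b^{(\ell)}(x)$ is divisible by $g_2(x)=x-1$, and that the relation of Lemma~\ref{lem:equal}, multiplied through by $x-1$, holds at $\omega,\omega^2,\omega^3$ with $T_\ell=(n_1+k_2-\ell)-(k_1-t+\ell-1)=11-2\ell$.
\begin{verbatim}
# Sanity check of the example polynomials over F_13 with w = 2, r = 1, t = 3.
q, w = 13, 2

def ev(coeffs, x):
    """Evaluate a polynomial (lowest degree first) at x modulo q."""
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

u = [12, 10, 5, 11, 1]
a = {1: [12, 2, 5, 7], 2: [12, 7, 7], 3: [12, 1]}
b = {1: [8, 5], 2: [7, 2, 4], 3: [10, 12, 3, 1]}

assert all(ev(u, pow(w, j, q)) == 0 for j in range(4))   # zeros of u(x)
for ell in (1, 2, 3):
    T = 11 - 2 * ell
    assert ev(a[ell], 1) == 0 and ev(b[ell], 1) == 0     # divisible by x - 1
    for j in (1, 2, 3):                                   # Lemma relation
        x = pow(w, j, q)
        assert (ev(a[ell], x) + pow(x, T, q) * ev(b[ell], x)) % q == 0
print("all checks passed")
\end{verbatim}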
\bibliographystyle{IEEEtran}
\section{Introduction}
Entanglement is one of the key concepts in the quantum information sciences, particularly due to its significance
for information processing tasks \cite{ekert1991quantum,bennett1992experimental,bennett1992communication,bennett1993teleporting,mattle1996dense,bouwmeester1997experimental}. For example, with the aid of shared entanglement, one can teleport a quantum state without any knowledge of the state itself \cite{bennett1993teleporting,bouwmeester1997experimental}, encode an amount of information beyond classical capacity \cite{mattle1996dense}, or even set up a communication channel in which any eavesdropping can be detected \cite{bennett2020quantum}.
With such applications in mind, a comprehensive foundation for entanglement has been developed in quantum information theory:
Axioms for entanglement measures were established and formal definitions for measures were suggested such as the ``entanglement of formation'' \cite{bennett1996mixed}, ``entanglement cost'' \cite{hayden2001asymptotic} and ``entanglement of distillation'' \cite{rains1999rigorous}. In particular, enormous effort has been dedicated towards understanding their operational meaning for information processing \cite{horodecki2009quantum,plenio2014introduction}.
In striking contrast to the rather abstract foundation of entanglement, its practical side is remarkably underdeveloped. Even for systems of just two electrons on two lattice sites, no closed formula is known for (faithfully) measuring entanglement.
At first sight, this claim seems to be surprising given the
huge body of literature emphasizing the significance of entanglement, e.g., for quantum phase transitions \cite{osborne2002entanglement,osterloh2002scaling,vidal2003entanglement}, topological order \cite{kitaev2006topological,levin2006detecting}, chemical bonding \cite{boguslawski2013orbital,szalay2017correlation}
and the implementation of numerical methods \cite{white1992density,legeza2003optimizing,schollwock2011density,stein2016automated}. Yet, it is crucial to notice that all these works had to make significant concessions, restricting either their scope or conclusiveness. For instance, since the simple von Neumann entropy refers to pure states, it can quantify the entanglement of a subsystem $A$
\emph{only} with the \emph{entire} complementary system, i.e., the universe. For the physically relevant case of two \emph{arbitrary} subsystems $A,B$ --- their joint state $\rho_{AB}$ thus being mixed --- there is no practically feasible alternative to the so-called ``entanglement negativity''. The latter's deficiency, to vanish for some entangled states (non-faithfulness), however, raises doubts about its usefulness for investigating the true role of entanglement in physics, chemistry, and materials science.
A systematic way to avoid all these fundamental deficiencies would be to resort to the relative entropy of entanglement which measures the minimal ``distance'' $S(\rho_{AB}||\sigma^\ast)$ of a quantum state $\rho_{AB}$ to the convex set of unentangled/separable states \cite{vedral1998entanglement}. Intriguingly, the problem of determining for a given boundary point $\sigma^\ast$ of that convex set all $\rho_{AB}$ for which $\sigma^\ast$ is the closest separable state has been solved \cite{miranowicz2008closed,friedland2011explicit}.
Yet, since no efficient description of the boundary of the set of separable states is known, the solution to this inverse problem
will not simplify our task of calculating the relative entropy of entanglement.
Accordingly, the need and the challenging character of deriving a corresponding closed formula can hardly be overestimated.
Even for the case of just two qubits, with Hilbert space $\mathbb{C}^2 \otimes \mathbb{C}^2$,
this problem is listed as one of the long-standing problems in quantum information theory \cite{krueger2005some}. The application in many-electron systems would even require a solution for the setting $\mathbb{C}^4 \otimes \mathbb{C}^4$ since
the Fock space of a single orbital or lattice site is four dimensional, spanned by
$\{|0\rangle, |\!\uparrow\rangle, |\!\downarrow\rangle, |\!\uparrow\downarrow\rangle\}$.
It will be the main achievement of our work to derive a \emph{closed formula} for exactly this setting, quantifying the entanglement between any two electronic orbitals. Our derivation necessitates a number of non-trivial ingredients such as the superselection rule, the formalism for taking into account local and non-local symmetries and a compact description of the set of unentangled states.
Most importantly, this anticipated key result --- the closed formula \eqref{eqn:rel_ent_formula} --- is information-theoretically complete in the sense that the entire information about ground states is encoded in their reduced density matrices of the setting $\mathbb{C}^4 \otimes \mathbb{C}^4$. For instance, in lattice models with hopping and interaction restricted to nearest neighbors, the ground state properties are uniquely described by the two-site reduced density matrices $\rho_{i,i+1}$.
This generalizes to arbitrary continuous systems with pair interaction \cite{Col63,Mazz12,Mazz16}, by replacing $\rho_{i,i+1}$ by $\rho_{\vec{x},\vec{y}}$. In turn, this constitutes the prominent \emph{Coulson Challenge} \cite{Col00}:
Finding an efficient description of the corresponding set of $N$-representable reduced density matrices would allow one to solve \emph{efficiently} the ground state problem for any quantum system.
\section{Notation and Concepts}
The concept of entanglement refers to a notion of subsystems. But how could the latter be established in the context of \emph{fermionic} quantum systems? Clearly, due to their indistinguishable nature, individual fermions do not constitute proper subsystems. A meaningful and particularly appealing partition, however, emerges within the framework of ``2nd quantization'', namely by dividing the set of lattice sites or molecular orbitals into disjoint subsets (see Figure \ref{fig:resource}a).
In such fermionic settings, the concept of entanglement is well understood from a conceptual and mathematical point of view \cite{zanardi2002quantum,schuch2004quantum,banuls2007entanglement,friis2013fermionic,eisler2015gaussian,Eisert18,Spee18}, and its application with an emphasis on information processing tasks has become an active field of research \cite{boguslawski2015orbital,gigena2015entanglement,franco2016quantum,franco2018indistinguishability,Pachos18free,ding2020correlation,ding2020concept,morris2020entanglement,benatti2020entanglement,debarba2020teleporting,olofsson2020quantum,aoto2020calculating,galler2021orbital,pusuluk2021classical,faba2021two,faba2021correlation,faba2022Lipkin,sperling2022entanglement,ding2022quantum}. Before considering general fermionic systems, we introduce and explain various relevant concepts first in the simplest possible setting of just \emph{two} spinless fermionic sites, orbitals or modes $A, B$. By referring to those two modes $A,B$ we automatically established a notion of subsystems \cite{zanardi2001virtual}. From a more mathematical point of view, this means to define an unambiguous starting point for our quantum informational analysis by splitting the corresponding two-dimensional one-fermion Hilbert space $\mathcal{H}^{(1)}\cong \mathbb{C}^2$ into two orthogonal subspaces, $\mathcal{H}^{(1)} = \mathcal{H}^{(1)}_A \oplus \mathcal{H}^{(1)}_B$. Each mode can be either empty or occupied and the configuration states $|n_A,n_B\rangle$, where $n_{A,B} = 0,1$, span the corresponding total Fock space accordingly. Finally, by referring to the formal splitting $|n_A, n_B\rangle \mapsto |n_A\rangle \otimes |n_B\rangle$ we can decompose the underlying Fock space, $\mathcal{F}[\mathcal{H}^{(1)}] \cong \mathcal{F}[\mathcal{H}_A^{(1)}]\otimes \mathcal{F}[\mathcal{H}_B^{(1)}]$, and reveal the sought-after tensor product structure of our composite quantum system $AB$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{QIT_resource.pdf}
\caption{a) A 2D Fermi-Hubbard lattice (left) and a set of Hartree-Fock (HF) molecular orbitals ordered with increasing orbital energy (right). b) Alice and Bob performing superluminal signalling of one-bit information with two spinless fermion modes in a world free of superselection rules (SSR). (See text for more information.) c) Alice and Bob transfer the state $\rho_{ij}$ on two physical orbitals $i$ and $j$, to a state $\tilde{\rho}_{AB}$ on the local quantum registers $A$ and $B$ via LOCC.}
\label{fig:resource}
\end{figure}
Before we discuss the entanglement between the two modes, we must incorporate a final fermionic ingredient, a crucial superselection rule \cite{wick1970superselection}. Namely, nature does not allow for every possible operation on fermionic subsystems: local observables must preserve local particle numbers. In other words, observables on mode $A$ must take the form $\hat{O}_A = \lambda_0 |0_A\rangle\langle 0_A| + \lambda_1 |1_A\rangle\langle1_A|$, and likewise for mode $B$. The direct consequence is that only the diagonal blocks of the quantum state $\rho$ with fixed local particle numbers $N_{\!A}, N_{\!B}$ can be observed in reality. This then means that $\rho$ can never be distinguished through local observables from its so-called superselected variant $\tilde{\rho}$ \cite{bartlett2003entanglement,banuls2007entanglement}. Since $A$ and $B$ are both spinless modes, the superselection works quite simply. We demonstrate here with a general state $\rho$ (in the ordered basis $|0,0\rangle,|0,1\rangle,|1,0\rangle,|1,1\rangle$) the effect of the particle number superselection rule
\begin{equation}
\rho =\left[\begin{smallmatrix}
\rho_{11} & \rho_{12} & \rho_{13} & \rho_{14}
\\
\rho_{21} & \rho_{22} & \rho_{23} & \rho_{24}
\\
\rho_{31} & \rho_{32} & \rho_{33} & \rho_{34}
\\
\rho_{41} & \rho_{42} & \rho_{43} & \rho_{44}
\end{smallmatrix}\right]\,\xrightarrow{\mathrm{SSR}}\: \tilde{\rho} =\left[\begin{smallmatrix}
\rho_{11} & 0 & 0 & 0
\\
0 & \rho_{22} & 0 & 0
\\
0 & 0 & \rho_{33} & 0
\\
0 & 0 & 0 & \rho_{44}
\end{smallmatrix}\right]\!.
\end{equation}
The generalization of various concepts to arbitrary fermionic systems is straightforward.
To introduce a notion of subsystems, one just divides some chosen orthonormal basis of the underlying (higher-dimensional) one-particle Hilbert space $\mathcal{H}^{(1)}$ into two disjoint subsets $\{\ket{\varphi_i}\}_{i=1}^k$, $\{\ket{\varphi_i}\}_{i=k+1}^d$ whose states then span $\mathcal{H}^{(1)}_A$ and $\mathcal{H}^{(1)}_B$, respectively. Using the corresponding occupation number representation, this leads according to $\ket{n_1,\ldots,n_d}\mapsto \ket{n_1,\ldots,n_k}\otimes \ket{n_{k+1},\ldots,n_d} $ to a tensor product structure
\begin{equation}
\mathcal{F}[\mathcal{H}^{(1)}] \cong \mathcal{F}[\mathcal{H}^{(1)}_{A}] \otimes \mathcal{F}[\mathcal{H}^{(1)}_{B}].
\end{equation}
On this level, the superselection can be formalized as
\begin{equation}
\rho \mapsto \tilde{\rho} = \sum_{N_{\!A}, N_{\!B} \geq 0} P_{N_{\!A}}\! \otimes \! P_{N_{\!B}} \, \rho \, P_{N_{\!A}}\! \otimes\! P_{N_{\!B}}, \label{eqn:tilde}
\end{equation}
where $P_N$ is the projection onto the $N$-particle sector of the respective Fock space $\mathcal{F}[\mathcal{H}^{(1)}_{A/B}]$.
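For readers who wish to experiment numerically, the following minimal sketch (our own helper, not part of the formal development) implements the projection $\rho \mapsto \tilde{\rho}$ in an occupation-number basis: every matrix element connecting different local particle-number sectors $(N_{\!A},N_{\!B})$ is set to zero. For the two spinless modes discussed above, every basis state lies in its own sector, so $\tilde{\rho}$ is diagonal, in agreement with the matrix displayed earlier.
\begin{verbatim}
import numpy as np

def apply_number_ssr(rho, sectors):
    """Zero all elements of rho that connect different (N_A, N_B) sectors."""
    rho_t = rho.copy()
    for i, si in enumerate(sectors):
        for j, sj in enumerate(sectors):
            if si != sj:
                rho_t[i, j] = 0.0
    return rho_t

# Two spinless modes, basis |0,0>, |0,1>, |1,0>, |1,1>.
sectors = [(0, 0), (0, 1), (1, 0), (1, 1)]
psi = np.zeros(4)
psi[0] = psi[1] = 1 / np.sqrt(2)          # (|0,0> + |0,1>)/sqrt(2)
rho = np.outer(psi, psi)
print(np.round(apply_number_ssr(rho, sectors), 3))
\end{verbatim}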
\section{Superselection Rules and Accessible Entanglement}
The important restriction $\rho \mapsto \tilde{\rho}$ is dictated by the fundamental laws of physics. Violating the superselection rule (SSR) would lead to dire scenarios where superluminal signalling becomes possible \cite{johansson2016comment,ding2020concept} (Figure \ref{fig:resource}b), in striking contradiction to the law of special relativity. To explain this key aspect, let us perform a Gedankenexperiment by assuming that the SSR could be violated. Suppose Alice and Bob hold spinless fermionic modes $A$ and $B$, respectively. Each mode can either be empty or occupied, resulting in two orthogonal local basis states $|0\rangle_{A/B}$ and $|1\rangle_{A/B} = f^{\dagger}_{A/B} |0\rangle_{A/B}$, where $f^{\dagger}_{A/B}$ denotes the respective fermionic creation operator. The state shared between Alice and Bob could be prepared as $|\Psi\rangle_{AB} = \frac{1}{\sqrt{2}}(|0\rangle_A \otimes |0\rangle_B + |0\rangle_A \otimes |1\rangle_B)$ by applying $\frac{1}{\sqrt{2}}(\openone + f^\dagger_{B})$ to the vacuum. When the two parties are far apart, Alice could then send an instant bit of information ($0$ or $1$) to Bob with the following actions: if Alice wishes to communicate a bit ``$0$'', she does nothing (applies $\openone$ to her mode); if she wishes to send ``$1$'', she then performs the local unitary operation $U_A = i(f^\dagger_A - f^{\phantom{\dagger}}_A)$. Remarkably, when Bob measures the local observable $\hat{O}_B = \frac{1}{2}(\openone - f^\dagger_B - f^{\phantom{\dagger}}_B)$, the outcome would be definite. To be more specific, one easily verifies that in either case Bob's local state would be an eigenstate of $\hat{O}_B$, and its eigenvalue would coincide with the value of Alice's message.
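The Gedankenexperiment can be verified with a few lines of linear algebra. The sketch below uses a Jordan--Wigner encoding of the two modes (our choice of representation; the operators themselves are exactly those of the text) and confirms that Bob's measurement of $\hat{O}_B$ has zero variance and returns Alice's bit.
\begin{verbatim}
import numpy as np

# Jordan-Wigner encoding, basis |n_A, n_B> with |0> = (1,0)^T, |1> = (0,1)^T.
s  = np.array([[0, 1], [0, 0]], dtype=complex)    # single-mode annihilator
Z  = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
fA = np.kron(s, I2)                               # f_A
fB = np.kron(Z, s)                                # f_B (string on mode A)

vac = np.zeros(4, dtype=complex); vac[0] = 1.0
psi = (vac + fB.conj().T @ vac) / np.sqrt(2)      # (|0,0> + |0,1>)/sqrt(2)

UA = 1j * (fA.conj().T - fA)                      # Alice's SSR-violating unitary
OB = 0.5 * (np.eye(4) - fB.conj().T - fB)         # Bob's local observable

for bit, state in [(0, psi), (1, UA @ psi)]:
    mean = (state.conj() @ OB @ state).real
    var = (state.conj() @ OB @ OB @ state).real - mean**2
    print("Alice sends", bit, "-> Bob reads", round(mean, 10),
          "with variance", round(var, 10))
\end{verbatim}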
Clearly, because of the SSR, such paradoxical scenarios would never occur. Instead, this Gedankenexperiment highlights in the most compelling way that the entanglement shared between Alice and Bob was never physical to begin with. Only the entanglement in accordance with SSR is of physical relevance and can be extracted and utilized for quantum information processing tasks. From a practical point of view, the useful entanglement in a fermionic quantum state $\rho$ is that which can be transferred via local operations and classical communications (LOCC) to a new state on a suitable quantum register, on which there is no restriction of SSR. After the transferring process, the state left on the quantum register is precisely $\tilde{\rho}$ \cite{vaccaro2003identical}, whose entanglement can be manipulated with the unrestricted local operations (see Figure \ref{fig:resource}c). Therefore taking into account SSR when quantifying fermionic entanglement is not only the theoretically accurate way, but also of serious practical relevance in the on-going second quantum revolution.
Last but not least, the relative entropy of entanglement $E(\rho)$ is defined \cite{vedral1997quantifying} as the minimal relative entropy
\begin{equation}
S(\rho||\sigma) \equiv \mathrm{Tr}[\rho(\log(\rho)-\log(\sigma))]
\end{equation}
of $\rho$ relative to the set of separable (i.e., unentangled) states.
As it has been motivated above and explained in detail in \cite{bartlett2003entanglement,banuls2007entanglement}, the effect of the SSR can be transferred into the quantum state $\tilde{\rho}$, leading to
\begin{equation}\label{eqn:rel_ent}
E(\rho) = \min_{\sigma \in \mathcal{D}_{sep}} S(\tilde{\rho}|| \sigma),
\end{equation}
with $\mathcal{D}_{sep}$ being the set of all classical mixtures of product states $\rho_A \otimes \rho_B$.
\section{Derivation of closed formula}
We apply now these general concepts to our context. For this, we first observe that the one-electron Hilbert space factorizes into an orbital part and a spin part, $\mathcal{H}^{(1)}= \mathcal{H}_l^{(1)} \otimes \mathcal{H}_s^{(1)}$, where the latter follows as $\mathcal{H}_s^{(1)} \cong \mathbb{C}^2$. Accordingly, any decomposition of the orbital part $\mathcal{H}_l^{(1)}$ implies a corresponding decomposition of the full one-particle Hilbert space $\mathcal{H}^{(1)}$.
The notion of entanglement between two orbitals $\ket{\varphi_1}, \ket{\varphi_2} \in \mathcal{H}_l^{(1)}$ thus refers to the splitting
\begin{equation}
\mathcal{F}[\mathcal{H}^{(1)}] \cong \mathcal{F}[\mathcal{H}_A^{(1)}]\otimes \mathcal{F}[\mathcal{H}_B^{(1)}]\otimes \mathcal{F}[\mathcal{H}_C^{(1)}],
\end{equation}
where $\mathcal{H}_{A/B}^{(1)}$ is spanned by the two spin-orbitals $\ket{\varphi_{1/2}}\otimes \ket{\!\uparrow}$, $\ket{\varphi_{1/2}}\otimes \ket{\!\downarrow}$ and $\mathcal{H}_{C}^{(1)}$ by all the remaining ones.
The orbital-entanglement between $\ket{\varphi_1}$ and $\ket{\varphi_2}$ then follows by first
tracing out the complementary system $C$ and then calculating the relative entropy of entanglement \eqref{eqn:rel_ent}
of the reduced state $\rho\equiv \rho_{AB}$. For example, if we wish to study the entanglement between two sites $i$ and $j$ in a lattice system, or the entanglement between two Hartree-Fock orbitals $i$ and $j$, we must first trace out all other site/orbital degrees of freedom except for site/orbital $i$ and $j$ (see Figure \ref{fig:resource}a). The sought-after entanglement between the two sites/orbitals can then be calculated from the resulting reduced state $\rho_{ij}$.
Since the Fock space of one orbital is spanned by the states \mbox{$\{|0\rangle, |\!\uparrow\rangle, |\!\downarrow\rangle, |\!\uparrow\downarrow\rangle\}$}, where $\ket{0}$ denotes the vacuum state, the mathematical setting underlying our following derivation is $\mathbb{C}^4 \otimes \mathbb{C}^4$. Moreover, since $\rho,\sigma$ in \eqref{eqn:rel_ent} are density operators on a 16-dimensional Hilbert space, the corresponding minimization problem \eqref{eqn:rel_ent} would in principle involve $16\times16-1=255$ real parameters. Even worse, no efficient description of the convex set $\mathcal{D}_{sep}$ is known and our task of deriving a closed formula seems to be hopeless. In contrast to generic states, two-orbital reduced density operators $\rho$ of realistic many-electron quantum states exhibit, however, a number of simplifying symmetries. Referring to the most relevant scenarios in quantum chemistry and solid state physics, we assume in the following
\begin{equation}
[\rho,\hat{N}]=[\rho,\hat{S}^z]=[\rho, \hat{\vec{S}}^2]=0,
\end{equation}
where $\hat{N}, \hat{S}^z, \hat{\vec{S}}^2$ denote the particle number and the spin operators of the two-orbital subsystem. The first two symmetries of $\rho$ are inherited directly from the many-electron quantum state, which in most practical cases even has a fixed particle number and magnetization. As proven in Appendix \ref{sec:syminherit}, the third symmetry is valid in particular whenever the many-electron state is a singlet. Since the vast majority of molecular states in nature are singlets (otherwise the system would react with another one to form a closed-shell singlet structure), this is a reasonable assumption as well. Yet, for systems studied in solid state physics, this is not always the case and we therefore present in Appendix \ref{sec:general} a respective closed formula for the context of lattice models without assuming $[\rho, \hat{\vec{S}}^2]=0$.
In general, local symmetries of $\rho$ can be exploited to simplify the search space $\mathcal{D}_{sep}$ in \eqref{eqn:rel_ent} \cite{vollbrecht2001entanglement}.
To explain this, assume $[U(g),\rho]=0$ for all $g$ in a discrete group $G$, where $U(g)$ denotes its unitary representation on the Hilbert space. Any arbitrary state $\sigma$ can be turned into a $G$-symmetric one by applying the corresponding ``twirl'' $T_G(\cdot)$,
\begin{equation}\label{eqn:twirl}
T_G(\sigma) =\frac{1}{|G|} \sum_{g \in G} U(g) \sigma U(g)^\dagger.
\end{equation}
Then one observes \cite{vollbrecht2001entanglement}:
\begin{equation}\label{eqn:entropy}
\begin{split}
S(\rho||\sigma) &= \frac{1}{|G|} \sum_{g \in G} S(U(g) \rho U(g)^\dagger || U(g) \sigma U(g)^\dagger)
\\
&= \frac{1}{|G|} \sum_{g \in G} S(\rho || U(g) \sigma U(g)^\dagger)
\\
&\geq S(\rho||T_G(\sigma)).
\end{split}
\end{equation}
In the first line, we used the unitary invariance of the relative entropy $S$, in the second line the symmetry of $\rho$ and in the last one the convexity of $S$.
Since $U(g)$ was assumed to be local, $U(g)\equiv U_{\!A}(g)\otimes U_{\!B}(g)$, $T_G(\sigma)$ remains separable for all $\sigma \in \mathcal{D}_{sep}$. Hence, application of \eqref{eqn:entropy} to a minimizer state $\sigma^\ast \in \mathcal{D}_{sep}$ of \eqref{eqn:rel_ent} yields the desired result: Whenever $\rho$ has a local symmetry $G$, we can restrict the search space $\mathcal{D}_{sep}$ to all $G$-symmetric states $\sigma \in T_G(\mathcal{D}_{sep})$.
In our case, one local symmetry is the one corresponding to $\hat{S}^z$.
Indeed, since $\hat{S}^z=\hat{S}^z_{\!A}\otimes \mathbb{1}_B+\mathbb{1}_A \otimes \hat{S}^z_{\!B}$ we have
\begin{equation}
U_{\hat{S}^z}(\alpha)\equiv \exp(i \alpha \hat{S}^z)= \exp(i \alpha \hat{S}_{A}^z) \otimes \exp(i \alpha \hat{S}_{B}^z),
\end{equation}
for all $\alpha$. Hence, by generalizing \eqref{eqn:twirl}, \eqref{eqn:entropy} to integrals \cite{vollbrecht2001entanglement}, we can restrict $\mathcal{D}_{sep}$ to states
\begin{eqnarray}
T_{\hat{S}^z}(\sigma)&=&\frac{1}{2\pi} \int_{0}^{2 \pi}\! \mathrm{d} \alpha \exp(i \alpha \hat{S}^z) \sigma \exp(-i \alpha \hat{S}^z) \nonumber \\
&=&\sum_{S^z} P_{S^z} \, \sigma \, P_{S^z},
\end{eqnarray}
which are block-diagonal with respect to the magnetization $S^z$, where $P_{S^z}$ denotes the projection onto the $S^z$-sector.
The same reasoning applies to the particle number operator $\hat{N}$. Also the particle number superselection rule \eqref{eqn:tilde} can be interpreted as a two-fold symmetry with corresponding $\hat{U}(\alpha)\equiv \exp{(i \alpha \hat{N}_{\!A})} \otimes \mathbb{1}_B$ and $\hat{U}(\beta)\equiv \mathbb{1}_A \otimes \exp{(i \beta \hat{N}_{\!B})}$, respectively.
\begin{table}[t!]
\begin{tabular}{|c|c|c|c|l|}
\hline
$\,\,N\,\,$ & $S^z$ & \,\,\,$|\vec{S}| \,\,\,$ & $(N_A,N_B)$ & \qquad \quad State \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt} \\ \hline
0 & 0 & 0 & $(0,0)$ & $\,\,|\Psi_1\rangle =|0\rangle \otimes |0\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt} \\ \hline
\multirow{4}{*}{1} & \multirow{2}{*}{$1/2$}
& \multirow{2}{*}{$1/2$} & $(0,1)$ & $\,\,|\Psi_2\rangle =|0\rangle \otimes |\!\uparrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{4-5}
& & & $(1,0)$ & $\,\,|\Psi_3\rangle =|\!\uparrow\rangle \otimes |0\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{2-5}
& \multirow{2}{*}{$-1/2$} & \multirow{2}{*}{$1/2$} & $(0,1)$ & $\,\,|\Psi_4\rangle =|0\rangle \otimes |\!\downarrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{4-5}
& & & $(1,0)$ & $\,\,|\Psi_5\rangle = |\!\downarrow\rangle \otimes |0\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \hline
\multirow{6}{*}{2} & \multirow{4}{*}{$0$} & \multirow{3}{*}{0} & $(2,0)$ & $\,\,|\Psi_6\rangle =|\!\uparrow\downarrow\rangle \otimes |0\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{4-5}
& & & $(0,2)$ & $\,\,|\Psi_7\rangle =|0\rangle \otimes |\!\uparrow\downarrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{4-5}
& & & $(1,1)$ & $\,\,|\Psi_8\rangle =\frac{|\!\uparrow\rangle\otimes|\!\downarrow\rangle - |\!\downarrow\rangle \otimes |\!\uparrow\rangle}{\sqrt{2}}$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{3-5}
& & 1 & $(1,1)$ & $\,\,|\Psi_9\rangle =\frac{|\!\uparrow\rangle\otimes|\!\downarrow\rangle + |\!\downarrow\rangle \otimes |\!\uparrow\rangle}{\sqrt{2}}$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{2-5}
& $1$ & 1 & $(1,1)$ & $\,\,|\Psi_{10}\rangle =|\!\uparrow\rangle \otimes |\!\uparrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{2-5}
& $-1$ & 1 & $(1,1)$ & $\,\,|\Psi_{11}\rangle =|\!\downarrow\rangle \otimes |\!\downarrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \hline
\multirow{4}{*}{3} & \multirow{2}{*}{$1/2$} & \multirow{2}{*}{$1/2$} & $(2,1)$ & $\,\,|\Psi_{12}\rangle =|\!\uparrow\downarrow\rangle \otimes |\!\uparrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{4-5}
& & & $(1,2)$ & $\,\,|\Psi_{13}\rangle =|\!\uparrow\rangle \otimes |\!\uparrow\downarrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{2-5}
& \multirow{2}{*}{$-1/2$} & \multirow{2}{*}{$1/2$} & $(2,1)$ & $\,\,|\Psi_{14}\rangle =|\!\uparrow\downarrow\rangle \otimes |\!\downarrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \cline{4-5}
& & & $(1,2)$ & $\,\,|\Psi_{15}\rangle =|\!\downarrow\rangle \otimes |\!\uparrow\downarrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\ \hline
4 & $0$ & $0$ & $(2,2)$ & $\,\, |\Psi_{16}\rangle =|\!\uparrow\downarrow\rangle \otimes |\!\uparrow\downarrow\rangle$ \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}
\\
\hline
\end{tabular}
\caption{Decomposition of the Fock space into symmetry sectors labeled by the quantum numbers $(N, S^z,|\vec{S}|,{N}_{\!A}, {N}_{\!B})$.} \label{tab:sym}
\end{table}
It is astonishing, however, that even the twirl $T_{|\hat{\vec{S}}|}$ with respect to the total spin maps separable states to separable states, despite its non-local character. This key result of our work is proven in Appendix \ref{sec:ent_sym}.
In summary, we can restrict $\mathcal{D}_{sep}$ in \eqref{eqn:rel_ent} to the states sharing the same symmetries as $\tilde{\rho}$. The corresponding fully symmetric eigenstates $\ket{\Psi_i}$ are presented in Table \ref{tab:sym} together with their quantum numbers $(N, S^z,|\vec{S}|,{N}_{\!A}, {N}_{\!B})$.
Since we can restrict $\mathcal{D}_{sep}$ to the density operators $\sigma = \sum_{i=1}^{16} q_i \ket{\Psi_i}\!\bra{\Psi_i}$ with the same eigenbasis as $\tilde{\rho}=\sum_{i=1}^{16} p_i \ket{\Psi_i}\!\bra{\Psi_i}$, the quantum relative entropy in \eqref{eqn:rel_ent} simplifies to the Kullback-Leibler divergence \cite{kullback1951information}
\begin{equation}
D_\text{KL}(\vec{p}\,||\vec{q}\,)=\sum_i p_i\log(p_i/q_i).
\end{equation}
The problem of finding the closest separable state is then transformed to that of finding the coefficients $q_i$ that minimize $D_\text{KL}(\vec{p}\,||\vec{q}\,)$. Yet, it also remains to find a compact description of the convex set of $\vec{q}$ corresponding to fully symmetric separable states $\sigma$. For this, one can exploit the fact that most of the states $\ket{\Psi_i}$ in Table \ref{tab:sym} are separable. To be more specific, one can even show (see Appendix \ref{sec:opt}) that all the entanglement in $\rho$ is confined to the sector $M = \mathrm{Span} \{ |\Psi_8\rangle, |\Psi_9\rangle, |\Psi_{10}\rangle, |\Psi_{11}\rangle \}$, for which eventually the Peres-Horodecki separability criterion \cite{peres1996separability,horodecki1997separability} becomes necessary and sufficient. Realizing all these technical ideas and solving the remaining minimization problem (see Appendix \ref{sec:analytic_form} for more details) leads finally to our closed formula for the relative entropy of entanglement $E(\rho)$. It depends on two crucial parameters, $t\equiv \max\{p_8,p_9\}$ and $r\equiv \min\{p_8,p_9\}+p_{10}+p_{11}$, where $p_i=\bra{\Psi_i}\tilde{\rho} \ket{\Psi_i}=\bra{\Psi_i}\rho \ket{\Psi_i}$. If
$r \geq t$, the state $\rho$ is unentangled, $E(\rho) =0$, and otherwise it is entangled with
\begin{equation}
E(\rho) = r \log\left(\frac{2r}{r+t}\right) + t \log\left(\frac{2t}{r+t}\right). \label{eqn:rel_ent_formula}
\end{equation}
A generalization of this key result of our work is presented in Appendix \ref{sec:general} for the less relevant cases in which $\rho$ does not emerge from a \emph{singlet} many-electron state.
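For convenience, the closed formula \eqref{eqn:rel_ent_formula} translates directly into a few lines of code. The sketch below is our own helper (we use the natural logarithm, so the result is in nats); it takes the four populations $p_8,\ldots,p_{11}$ of Table~\ref{tab:sym} as input.
\begin{verbatim}
import numpy as np

def orbital_entanglement(p8, p9, p10, p11):
    """Relative entropy of entanglement from the closed formula.

    p8..p11 are the populations <Psi_i|rho|Psi_i> of the four states
    Psi_8, ..., Psi_11 spanning the (N_A, N_B) = (1, 1) sector.
    """
    t = max(p8, p9)
    r = min(p8, p9) + p10 + p11
    if r >= t:
        return 0.0                    # rho is separable
    if r == 0.0:
        return t * np.log(2.0)        # limit r -> 0 of the formula
    return r * np.log(2 * r / (r + t)) + t * np.log(2 * t / (r + t))

# Example: dominant singlet weight with a small triplet admixture.
print(orbital_entanglement(p8=0.5, p9=0.1, p10=0.05, p11=0.05))
\end{verbatim}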
\section{Applications}\label{sec:applic}
The prospects of having a closed formula for the electron entanglement at hand can hardly be overestimated.
We briefly present in the following three applications which shall serve the scientific community as an inspiration. At the same time, this also provides first evidence for the broad applicability and potential relevance of our key result in physics, chemistry and materials science.
\subsection{Free Fermions and Kohn's Locality Principle}
\begin{figure}[t]
\centering
\includegraphics[scale=0.29]{FreeFermions.pdf}
\caption{Site-site entanglement in the ground state of free fermions on a 1D infinite lattice at various inter-site distances $l$ (a) and disentangling distance $l_\text{min}$ (b) as a function of the filling fraction $\eta$.}
\label{fig:FreeFermions}
\end{figure}
As a first example, we discuss free electrons on a 1D lattice with Hamiltonian
\begin{equation}\label{eqn:Hfree}
\hat{H}_\text{free} = - \sum_{i,\sigma} (f^\dagger_{i\sigma} f^{\phantom{\dagger}}_{i+1,\sigma} + f^\dagger_{i+1,\sigma} f^{\phantom{\dagger}}_{i\sigma}),
\end{equation}
where $f_{i \sigma}^\dagger \,(f_{i \sigma})$ creates (annihilates) an electron with spin $\sigma$ at site $i$.
Since such quadratic models are analytically solvable, they have enjoyed a great popularity in the past decades in quantum many-body physics \cite{Peschel01,Peschel03,cheong2004free,peschel2009reduced,Calabrese_2016,vidmar2017free,eisler2020noncrit}. The $N$-electron ground state of \eqref{eqn:Hfree} follows as a Slater determinant with the $N$ energetically lowest spin-momentum states occupied and any physical quantity depends according to Wick's theorem on the corresponding one-particle reduced density operator only \cite{Gaudin60}.
By referring to this, we can provide (after a straightforward but lengthy calculation) a concise answer in \emph{analytic} terms to one central question: Does the ground state of \eqref{eqn:Hfree} exhibit long-distance entanglement?
For this, we consider the thermodynamic limit at filling factor $\eta$ and introduce the distance $l \equiv |i-j|$.
Result \eqref{eqn:rel_ent_formula} leads to a closed formula for the distance $l_\text{min}$ beyond which the entanglement between sites $i, j$ with $|i-j| \geq l_\text{min}$ vanishes,
\begin{equation}\label{eqn:dminleading}
l_\text{min} = \frac{\sqrt{2}}{\pi} \frac{1}{\eta(1-\eta)} + \mathcal{O}(1).
\end{equation}
Figure \ref{fig:FreeFermions} reveals that our leading-order result \eqref{eqn:dminleading} approximates the numerically exact result for $l_\text{min}$ very well over the entire $\eta$-regime. Most importantly, \eqref{eqn:dminleading} confirms in analytic terms that the site-site entanglement can become long-ranged, provided the electron ($\eta$) or hole ($1-\eta$) filling factor is sufficiently small.
At the same time, these results, particularly Eq.~\eqref{eqn:dminleading}, reveal univocally that the entanglement vanishes \emph{exactly} for most filling factors already when the sites are separated by just a few lattice constants. Accordingly, this resembles in the context of free electron chains Walter Kohn's seminal locality principle \cite{kohn1978locality,prodan2005nearsightedness}: ``\emph{[\ldots] local electronic properties, such as the density $n(r)$, depend significantly on the effective external potential only at nearby points.}'' \cite{prodan2005nearsightedness}. Indeed, this principle plays a pivotal role in modern chemistry since it ``\emph{[\ldots] can be viewed as underlying such important ideas as Pauling's ``chemical bond'', ``transferability'', and Yang's computational principle of ``divide and conquer''.}'' \cite{prodan2005nearsightedness}. It will therefore be one of the important future challenges to explore and quantify the long-distance entanglement in molecular systems based on our closed formula \eqref{eqn:rel_ent_formula}. If the entanglement turns out to vanish again exactly whenever two atomic orbitals are sufficiently far separated it will suggest a refinement of the locality principle from a quantum information perspective. In that case, the quantum correlations are \emph{strictly} local in the sense that they do vanish \emph{exactly} beyond some critical separation distance. This then implies that the correlation function of any two local and spatially separated observables is merely of classical nature.
\subsection{Quantum Phase Transition}
\begin{figure}[t]
\centering
\includegraphics[scale=0.29]{EHM.pdf}
\caption{(a) Nearest neighbor entanglement as a function of the on-site potential $U/t$ and nearest neighbor interaction strength $V/t$ with $50$ sites. (b) Entanglement in two adjacent pairs of nearest neighbors sharing a common site, $\rho_\text{sb}$ (strong bond) and $\rho_\text{wb}$ (weak bond) as function of $V/t$ at $U/t=6$, with $128$ and $256$ sites.}
\label{fig:EHM}
\end{figure}
The extended Hubbard model in 1D, described by the Hamiltonian
\begin{equation}\label{eqn:HamEHM}
\begin{split}
\hat{H} &= t \hat{H}_\text{free} + U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow} + V \sum_i \hat{n}_i \hat{n}_{i+1},
\end{split}
\end{equation}
where $\hat{n}_i\equiv \hat{n}_{i\uparrow}+\hat{n}_{i\downarrow}$, has a rich phase diagram at half filling, which has been extensively studied \cite{hirsch1984charge,cannon1991ground,voit1992phase,nakamura2000tricritical,tsuchiizu2002phase,sengupta2002bond,jeckelmann2002ground,gu2004entanglement,zhang2004dimerization,mund2009quantum}. Around $V \approx U/2$, between the well-understood charge density wave (CDW) and spin density wave (SDW) phases \cite{voit1992phase}, there exists an elusive, narrow bond order wave (BOW) phase \cite{nakamura2000tricritical}. Since the BOW phase is characterized by nearest neighbor pairs that dimerize by forming alternatingly strong and weak bonds, this phase cannot be detected by the commonly used single-site entanglement entropy \cite{gu2004entanglement}. In striking contrast, our elaborated measure \eqref{eqn:rel_ent_formula} is ideally suited: by quantifying the entanglement between neighboring sites, we can not only detect the BOW phase and distinguish it from the CDW phase but also attribute its emergence to the three individual terms in the Hamiltonian \eqref{eqn:HamEHM}. The latter is due to the fact that each of them affects the two-orbital reduced density operator and \eqref{eqn:rel_ent_formula} in a different manner.
In Figure \ref{fig:EHM} (a) we present the nearest neighbor entanglement based on DMRG results for the case of $50$ sites. To also probe the bonding behaviour near $V = U/2$, we fix $U/t=6$ and present in the right panel the entanglement between the sites $i-1,i$ and between $i,i+1$, for $128$ and $256$ sites. The corresponding two-site reduced density operators $\rho_\text{sb}$ and $\rho_\text{wb}$ describe the strong and weak bond, respectively. A peak (dip) is observed around $V/t \approx 3.17$ in the entanglement in $\rho_\text{sb}$ ($\rho_\text{wb}$), demonstrating the alternating bond strength of spontaneous dimerization. Therefore this critical value of $V/t$ is interpreted as a BOW-CDW transition point.
The transition SDW-BOW cannot be detected in the same manner since it is of infinite order \cite{nakamura2000tricritical}.
\subsection{Electronic Structure Analysis}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.33]{qchem.pdf}
\caption{Total correlation separated into entanglement (``Quantum'') and classical correlation (``Classical'') between 28 natural orbitals in the ground state of $\mathrm{Cr_2}$.}
\label{fig:qchem}
\end{figure}
As a third example, we discuss the electronic structure of a molecular system, with an emphasis on static correlation.
To calculate the entanglement and total correlation in the strongly correlated chromium dimer, we first determine its ground state $\ket{\Psi_{gs}}$ by a DMRG calculation involving 28 orbitals. For any two natural orbitals $\ket{\varphi_i},\ket{\varphi_j}$ (eigenstates of the orbital one-particle reduced density operator) we then determine the respective two-orbital reduced density operators
\begin{equation}
\rho_{i,j}= \mbox{Tr}_{\setminus \{i,j\}}[\ket{\Psi_{gs}}\!\bra{\Psi_{gs}}],
\end{equation}
as well as $\rho_{i/j}= \mbox{Tr}_{j/i}[\rho_{i,j}]$, by tracing out all the remaining orbitals. The \emph{total correlation} between any two natural orbitals follows as the quantum mutual information $S(\rho_{i,j}|| \rho_i \otimes \rho_j)$ \cite{vedral1997quantifying}. Remarkably, as revealed in Figure \ref{fig:qchem}, most of the correlation between the natural orbitals is classical, measured according to Refs.~\cite{Lindblad73,henderson2001classical} by $S(\sigma_{i,j}^\ast||\rho_i\otimes\rho_j)$, where $\sigma_{i,j}^\ast$ denotes the separable state closest to $\rho_{i,j}$. This surprising finding highlights the distinctive role of the natural orbitals as a reference basis for chemical computation. In this basis, the strong correlation contained in the molecule is rendered classical and the entanglement is marginalized.
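As a small illustration of the quantities entering this analysis, the sketch below (our own helper functions, applied to an assumed toy state) evaluates the total correlation, i.e.\ the quantum mutual information $S(\rho_{i,j}||\rho_i\otimes\rho_j)=S(\rho_i)+S(\rho_j)-S(\rho_{i,j})$, from a given two-orbital density matrix on $\mathbb{C}^4\otimes\mathbb{C}^4$. The classical part would additionally require the closest separable state $\sigma^\ast_{i,j}$ and is not computed here.
\begin{verbatim}
import numpy as np

def entropy(rho):
    """Von Neumann entropy -Tr[rho log rho] (natural logarithm)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def partial_trace(rho, keep, dims=(4, 4)):
    """Reduce a state on C^dA (x) C^dB to the factor 'keep' (0 or 1)."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def total_correlation(rho_ij):
    """Quantum mutual information S(rho_i) + S(rho_j) - S(rho_ij)."""
    return (entropy(partial_trace(rho_ij, 0))
            + entropy(partial_trace(rho_ij, 1)) - entropy(rho_ij))

# Toy state: equal mixture of |0,0><0,0| and a two-electron singlet.
up, dn = 1, 2                        # basis order {0, up, down, updown}
singlet = np.zeros(16)
singlet[up * 4 + dn] = 1 / np.sqrt(2)
singlet[dn * 4 + up] = -1 / np.sqrt(2)
rho = 0.5 * np.outer(singlet, singlet)
rho[0, 0] += 0.5
print(total_correlation(rho))
\end{verbatim}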
These findings also rationalize the ``zero seniority''-wavefunction ansatzes which play an important role since several decades in nuclear physics, with recent applications also in quantum chemistry for electronic structure calculations \cite{ring2004nuclear,stein2014seniority,boguslawski2017benchmark}. To explain this point, we recall that an $N$-electron wavefunction $|\Psi\rangle$ (with $N$ even) has zero seniority if its configuration interaction expansion with respect to the natural orbital basis simplifies
to
\begin{equation}\label{eqn:SZ}
|\Psi\rangle = \sum_{\vec{\nu}} c_{\vec{\nu}} |\vec{\nu}\,\rangle\,,
\end{equation}
where
\begin{equation}
|\vec{\nu}\,\rangle \equiv |\nu_1,\nu_2,\ldots,\nu_d\rangle \equiv \prod_{i=1}^d(f_{i\uparrow}^\dagger f_{i\downarrow}^\dagger)^{\nu_i}\ket{0}\,,
\end{equation}
$\nu_i=0,1$ and $\sum_i \nu_i =N/2$.
This means that the expansion \eqref{eqn:SZ} contains only Slater determinants with \emph{paired} electrons. To understand what this means from a quantum information theoretical point of view, we determine for any two natural orbitals $i,j$ the respective two-orbital reduced density matrix. Its form follows directly as
\begin{equation}\label{eqn:SZrdm}
\rho_{i,j} = \sum_{\nu_i,\nu_j=0,1} \kappa_{i,j}^{(\nu_i,\nu_j)} \ket{\nu_i,\nu_j}\!\bra{\nu_i,\nu_j}\,
\end{equation}
with some non-negative coefficients $\kappa_{i,j}^{(\nu_i,\nu_j)}$ that depend on $\{c_{\vec{\nu}}\}$. The reduced states \eqref{eqn:SZrdm} are nothing but classical mixtures of uncorrelated states $\ket{\nu_i,\nu_j}\!\bra{\nu_i,\nu_j} \equiv \ket{\nu_i}\!\bra{\nu_i} \otimes \ket{\nu_j}\!\bra{\nu_j}$ and are thus unentangled. It is worth noticing that this would no longer be the case if we expressed the state \eqref{eqn:SZ} with respect to a different orbital reference basis $\mathcal{B}$ (also since this would lead to unpaired electrons in the respective configuration interaction expansion).
Accordingly, the entanglement of a zero-seniority wavefunction can be fully transformed into classical correlation through orbital rotations. This observation suggests establishing the sum
\begin{equation}\label{eqn:costf}
E^{(\mathcal{B})}(\rho) \equiv \sum_{1\leq i <j \leq d} E(\rho_{i,j}^{(\mathcal{B})})
\end{equation}
as an alternative measure of the effective seniority of an $N$-electron quantum state with respect to a chosen orbital reference basis $\mathcal{B}$. Minimizing then $E^{(\mathcal{B})}(\rho)$ with respect to all reference bases would yield the lowest possible effective seniority that could be reached through suitable orbital rotations. In that sense, the electron entanglement \eqref{eqn:rel_ent_formula} may serve through \eqref{eqn:costf} as a cost function for reducing the computational complexity of approaches to the ground state problem in quantum chemistry and nuclear physics.
\section{Summary and Conclusions}
Despite its comprehensive mathematical foundation, no closed formula has been known yet for measuring (faithfully) entanglement in generic many-electron quantum states. In the form of Eq.~\eqref{eqn:rel_ent_formula}, we succeeded in deriving such a formula for the entanglement between any two orbitals/sites in arbitrary many-electron quantum systems. For this, we exploited the common symmetries of realistic systems and incorporated the crucial particle number superselection rule. Because of the latter, our formula is operationally meaningful in contrast to most recent applications of quantum information theoretical tools in quantum many-body physics: It quantifies the true physical entanglement that could be extracted from a system and eventually be used as a resource for quantum information processing tasks. For the sake of completeness, we also provide in Appendix \ref{sec:PSSR} the analog of \eqref{eqn:rel_ent_formula} for the weaker parity superselection rule.
The study of three physical examples has emphasized the potential significance of our key result \eqref{eqn:rel_ent_formula} and shall serve the scientific community as an inspiration for future applications. First, the presence of long-distance entanglement in free electron chains could be proven for low and high filling factors. For intermediate filling factors, however, the entanglement vanishes \emph{exactly} whenever the sites are separated by a few lattice constants. The latter refines, from a quantum information theoretical perspective, Kohn's locality principle which lies at the heart of modern quantum chemistry. Second, the existence of the distinctive bond-order wave phase in the extended Hubbard model could be confirmed. Third, the total correlation in molecular systems was shown to be mainly classical and a distinctive pairing structure among natural orbitals could be revealed, even for the strongly correlated chromium dimer. In that sense, we could elucidate the success and the limitations of the zero-seniority wave function ansatz, as it is used in nuclear physics and quantum chemistry. In the form of the averaged electron entanglement \eqref{eqn:costf} we put forth a corresponding measure of the effective seniority of a strongly correlated molecular ground state. Due to its concise quantum information theoretical character it may serve as a cost function, helping to improve the convergence rate of corresponding numerical implementations.
Last but not least, we recall that the two-orbital setting $\mathbb{C}^4 \otimes \mathbb{C}^4$ underlying our entanglement formula \eqref{eqn:rel_ent_formula} is information-theoretically complete, in accordance with Coulson's important challenge \cite{Col00}: all relevant information about many-electron ground states is contained in the respective reduced density matrices. Indeed, in lattice models with hopping and interaction restricted to nearest neighbors, the calculation of the ground state energy involves effectively only the two-site reduced density matrices $\rho_{i,i+1}$. This and its generalization to continuous systems \cite{Col63,Mazz12,Mazz16} opens a novel avenue for advancing density functional theory since the corresponding functionals necessitate an ansatz of $\rho_{i,i+1}$ as a function of the single-site density matrices. The latter means nothing but quantifying various correlation types between sites (or localized orbitals), particularly the electron entanglement as described \emph{analytically} by Eq.~\eqref{eqn:rel_ent_formula}.
\begin{acknowledgments}
We thank S.\hspace{0.5mm}Mardazad for his support concerning the DMRG ground state calculations. We acknowledge financial support
from the Deutscher Akademischer Austauschdienst (DAAD Completion Scholarship 2020) (L.D.), the Deutsche Forschungsgemeinschaft (Grant SCHI 1476/1-1) (L.D., C.S.), the NKFIH through the Quantum Information National Laboratory of Hungary program, and Grants No. K124152, FK135220, K124351 (Z.Z.). The project/research is also part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus.
\end{acknowledgments}
\section{Introduction}
The composition of the nucleon in terms of its fundamental quark and
gluon degrees of freedom is a key focus of QCD phenomenology. In the
light-cone (LC) Fock state description of bound states, each
hadronic eigenstate of the QCD Hamiltonian is expanded at fixed
light-cone time $\tau = t + z/c$ on the complete set of
color-singlet eigenstates $\{ \vert n \rangle \}$ of the free
Hamiltonian which have the same global quantum numbers: $\vert p
\rangle = \Sigma_n \psi_n^H(x_i, k_{\perp i}, \lambda_i) \vert n
\rangle.$ Thus each Fock component in the light-cone wavefunction
of a nucleon is composed of three valence quarks, which gives the
nucleon its global quantum numbers, plus a variable number of sea
quark-antiquark ($q \bar q$) pairs of any flavor, plus any number of
gluons. The quark distributions $q(x,\widetilde Q)$ of the nucleon
measured in deep inelastic scattering are computed from the sum of
squares of the light-cone wavefunctions integrated over transverse
momentum $k_\perp$ up to the factorization scale $\widetilde Q$,
where the light-cone momentum fraction $x = {k^+ \over P^+} =
{k^0+k^3\over P^0 + P^3}$ of the struck quark is set equal to the
Bjorken variable $x_{BJ}$.
It is important to distinguish two distinct types of quark and gluon
contributions to the nucleon sea measured in deep inelastic
lepton-nucleon scattering: ``extrinsic" and ``intrinsic"
\cite{Bro81,Bur92,Bro95}. The extrinsic sea quarks and gluons are
created as part of the lepton-scattering interaction and thus exist
over a very short time $\Delta \tau \sim 1/Q$. These factorizable
contributions can be systematically derived from the QCD hard
bremsstrahlung and pair-production (gluon-splitting) subprocesses
characteristic of leading twist perturbative QCD evolution. In
contrast, the intrinsic sea quarks and gluons are multiconnected to
the valence quarks and exist over a relatively long lifetime within
the nucleon bound state. Thus the intrinsic $q \bar q$ pairs can
arrange themselves together with the valence quarks of the target
nucleon into the most energetically-favored meson-baryon
fluctuations.
In conventional studies of the ``sea'' quark distributions, it is
usually assumed that, aside from the effects due to
antisymmetrization, the quark and antiquark sea contributions have
the same momentum and helicity distributions. However, the ansatz of
identical quark and antiquark sea contributions has never been
justified, either theoretically or empirically. Obviously the sea
distributions which arise directly from gluon splitting in leading
twist are necessarily CP-invariant; {\it i.e.},\ they are symmetric under
quark and antiquark interchange. However, the initial distributions
which provide the boundary conditions for QCD evolution need not be
symmetric since the nucleon state is itself not CP-invariant. Only
the global quantum numbers of the nucleon must be conserved. The
intrinsic sources of strange (and charm) quarks reflect the
wavefunction structure of the bound state itself; accordingly, such
distributions would not be expected to be CP symmetric. Thus the
strange/antistrange asymmetry of nucleon structure functions
provides a direct window into the quantum bound-state structure of
hadronic wavefunctions.
It is also possible to consider the nucleon wavefunction at low
resolution as a fluctuating system coupling to intermediate
hadronic Fock states such as noninteracting meson-baryon pairs. The
most important fluctuations are most likely to be those closest to
the energy shell and thus have minimal invariant mass. For example,
the coupling of a proton to a virtual $K^+ \Lambda$ pair provides a
specific source of intrinsic strange quarks and antiquarks in the
proton. Since the $s$ and $\bar s$ quarks appear in different
configurations in the lowest-lying hadronic pair states, their
helicity and momentum distributions are distinct.
The purpose of this paper is to investigate the quark and antiquark
asymmetry in the nucleon sea which is implied by a light-cone
meson-baryon fluctuation model of intrinsic $q\bar q$ pairs. Such
fluctuations are necessarily part of any quantum-mechanical
description of the hadronic bound state in QCD and have also been
incorporated into the cloudy bag model \cite{Sig87} and Skyrme
solutions to chiral theories \cite{Bur92}. We shall utilize a
boost-invariant light-cone Fock state description of the hadron
wavefunction which emphasizes multi-parton configurations of
minimal invariant mass. We find that such fluctuations predict a
striking sea quark and antiquark asymmetry in the corresponding
momentum and helicity distributions in the nucleon structure
functions. In particular, the strange and antistrange distributions
in the nucleon generally have completely different momentum and spin
characteristics. For example, the model predicts that the intrinsic
$d$ and $s$ quarks in the proton sea are negatively polarized,
whereas the intrinsic $\bar d$ and $\bar s$ antiquarks provide zero
contributions to the proton spin. We also predict that the intrinsic
charm and anticharm helicity and momentum distributions are not
strictly identical. We show that the above picture of quark and
antiquark asymmetry in the momentum and helicity distributions of
the nucleon sea quarks has support from a number of experimental
observations, and we suggest processes to test and measure this
quark and antiquark asymmetry in the nucleon sea.
\section{The light-cone meson-baryon fluctuation model of intrinsic
$q \bar q$ pairs}
In order to characterize the momentum and helicity distributions of
intrinsic $q \bar q$ pairs, we shall adopt a light-cone two-level
convolution model of structure functions \cite{Ma91} in which the
nucleon is a two-body system of meson and baryon which are also
composite systems of quarks and gluons. We first study the intrinsic
strange $s \bar s$ pairs in the proton. In the meson-baryon
fluctuation model, the intrinsic strangeness fluctuations in the
proton wavefunction are mainly due to the intermediate $K^+ \Lambda$
configuration since this state has the lowest off-shell light-cone
energy and invariant mass \cite{Bro95}. The $K^+$ meson is a
pseudoscalar particle with negative parity, and the $\Lambda$ baryon
has the same parity as the nucleon. The $K^+ \Lambda$ state retains
the parity of the proton, and consequently, the $K^+ \Lambda$
system must have odd orbital angular momentum. We thus write the
total angular momentum space wavefunction of the intermediate
$K^+ \Lambda$ state in the center-of-mass reference frame as
\begin{eqnarray}
\left|J=\frac{1}{2},J_z=\frac{1}{2}\right\rangle
&=&\sqrt{\frac{2}{3}} \left|L=1,L_z=1\right\rangle\
\left|S=\frac{1}{2},S_z=-\frac{1}{2}\right\rangle\nonumber\\
&-&\sqrt{\frac{1}{3}}\
\left|L=1,L_z=0\right\rangle \
\left|S=\frac{1}{2},S_z=\frac{1}{2}\right\rangle\ .
\end{eqnarray}
In the constituent quark model, the spin of $\Lambda$ is provided
by its strange quark and the net spin of the antistrange quark in
$K^+$ is zero. The net spin projection of $\Lambda$ in the
$K^+\Lambda$ state is $S_z(\Lambda)=-\frac{1}{6}$. Thus the
intrinsic strange quark normalized to the probability
$P_{K^+\Lambda}$ of the $K^+\Lambda$ configuration yields a
fractional contribution $\Delta S_{s}=2\,S_z(\Lambda)\,P_{K^+\Lambda}=-\frac{1}{3}P_{K^+\Lambda}$ to the proton spin,
whereas the intrinsic antistrange quark gives a zero contribution:
$\Delta S_{\bar s}=0.$ There thus can be a significant quark and
antiquark asymmetry in the quark spin distributions for the
intrinsic $s \bar s$ pairs.
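For completeness, the quoted value $S_z(\Lambda)=-\frac{1}{6}$ follows directly from the Clebsch--Gordan weights in the decomposition above,
\begin{equation*}
S_z(\Lambda)=\frac{2}{3}\left(-\frac{1}{2}\right)+\frac{1}{3}\left(+\frac{1}{2}\right)=-\frac{1}{6},
\end{equation*}
so that, weighted with the probability of the fluctuation, $2\,S_z(\Lambda)\,P_{K^+\Lambda}=-\frac{1}{3}P_{K^+\Lambda}$.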
We also need to estimate the relative probabilities for other
possible fluctuations with higher off-shell light-cone energy and
invariant mass, such as $K^+(u\bar{s}) \Sigma^0(uds)$,
$K^0(d\bar{s})\Sigma^+(uus)$, and $K^{* +}(u\bar s) \Lambda(uds)$.
For example, we find that the relative probability of finding the
$K^{* +} \Lambda$ configuration (which has positively-correlated
strange quark spin) compared with $K^+ \Lambda$ is 3.6\% (8.9\%) by
using the same normalization constant for a light-cone Gaussian type
(power-law type) wavefunction. We find that the higher fluctuations
of intrinsic $s \bar s$ pairs do not alter the qualitative estimates
of the quark and antiquark spin asymmetry in the nucleon sea based
on using the $K^+ \Lambda$ fluctuation alone.
The quark helicity projections measured in deep inelastic scattering
are related to the quark spin projections in the target rest frame
by multiplying by a Wigner rotation factor of order 0.75 for light
quarks and of order 1 for heavy quarks \cite{Ma91b}. We therefore
predict that the net strange quark helicity arising from the
intrinsic $s \bar s$ pairs in the nucleon wavefunction is negative,
whereas the net antistrange quark helicity is approximately zero.
This aspect of quark/antiquark helicity asymmetry is in qualitative
agreement with the predictions of a broken-U(3) version of the
chiral quark model \cite{Man84} where the intrinsic quark-antiquark
pairs are introduced through quark to quark and Goldstone boson
fluctuations.
In principle, one can measure the helicity distributions of strange
quarks in the nucleon sea from the $\Lambda$ longitudinal
polarization of semi-inclusive $\Lambda$ production in polarized
deep inelastic scattering \cite{Bro95,Lu95,Ell95}. We expect that
the polarization is negative for the produced $\Lambda$ and zero (or
slightly positive \cite{Bur92}) for the produced $\bar{\Lambda}$.
The expectation of negative longitudinal $\Lambda$ polarization is
supported by the measurements of the WA59 Collaboration for the
reaction $\bar{\nu}+{\rm N} \rightarrow\mu^+\Lambda+{\rm X}$
\cite{WA59}. The complementary measurement of $\bar {\Lambda}$
polarization in semi-inclusive deep inelastic scattering is clearly
important in order to test the physical picture of the light-cone
meson-baryon fluctuation model.
It is also interesting to study the sign of the $\Lambda$
polarization in the current fragmentation region as the Bjorken
variable $x \rightarrow 1$ in polarized proton deep inelastic
inclusive reactions. From perturbative QCD arguments on helicity
retention, one expects a positive helicity distribution for any
quark struck in the end-point region $x \rightarrow 1$, even though
the global helicity correlation is negative \cite{Bro95b}. Thus we
predict that the sign of the $\Lambda$ polarization should change
from negative to positive as $x$ approaches the endpoint regime.
The momentum distributions of the intrinsic strange and antistrange
quarks in the $K^+ \Lambda$ state can be modeled from the two-level
convolution formula
\begin{equation}
s(x)=\int_{x}^{1} \frac{{\rm d}y}{y}
f_{\Lambda/K^+\Lambda}(y) q_{s/\Lambda}\left(\frac{x}{y}\right);
\;\;\;
\bar s(x)=\int_{x}^{1} \frac{{\rm d}y}{y}
f_{K^+/K^+\Lambda}(y) q_{\bar s/K^+}\left(\frac{x}{y}\right),
\end{equation}
where $f_{\Lambda/K^+\Lambda}(y)$, $f_{K^+/K^+\Lambda}(y)$ are
probabilities of finding $\Lambda$, $K^+$ in the $K^+ \Lambda$ state
with the light-cone momentum fraction $y$ and $q_{s/\Lambda}(x/y)$,
$q_{\bar s/K^+}(x/y)$ are probabilities of finding strange,
antistrange quarks in $\Lambda$, $K^+$ with the light-cone momentum
fraction $x/y$. We shall estimate these quantities by adopting
two-body momentum wavefunctions for $p=K^+ \Lambda$, $K^+=u {\bar
s}$, and $\Lambda=s u d$ where the $u d$ in $\Lambda$ serves as a
spectator in the quark-spectator model \cite{Ma96}. We choose two
simple functions of the invariant mass $ {\cal M}^2=\sum_{i=1}^{2} \
\frac{{\bf k}^2_{\perp i}+m_i^2}{x_i} $ for the two-body
wavefunction: the Gaussian type and power-law type wavefunctions
\cite{BHL},
\begin{equation}
\psi_{{\rm Gaussian}}({\cal M}^2)=A_{{\rm Gaussian}}\,
\exp(-{\cal M}^2/2\alpha^2),
\end{equation}
\begin{equation}
\psi_{{\rm Power}}({\cal M}^2)=A_{{\rm Power}}\,
(1+{\cal M}^2/\alpha^2)^{-p},
\end{equation}
where $\alpha$ sets the characteristic internal momentum scale. We
do not expect to produce realistic quark distributions with simple
two-body wavefunctions; however, we can hope to explain some
qualitative features of the data as well as relations between the
different quark distributions \cite{Ma96}. The predictions for the
momentum distributions $s(x)$, $\bar s(x)$, and $\delta
_s(x)=s(x)-\bar s(x)$ are presented in Fig.~\ref{bmfig1}. The
strong, structured momentum asymmetry of the $s(x)$ and $\bar s(x)$
intrinsic distributions reflects the tendency of the wavefunction
to minimize the relative velocities of the intermediate meson-baryon
state and is approximately the same for the two wavefunctions.
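To illustrate how these expressions can be evaluated, the following
minimal numerical sketch (Python; illustrative only, not the code behind
Fig.~\ref{bmfig1}) computes $s(x)$ for the Gaussian wavefunction,
assuming that the light-cone distribution of a constituent in a two-body
state is obtained by integrating $|\psi({\cal M}^2)|^2$ over transverse
momentum and normalizing to unit probability; the masses follow the
figure caption.
\begin{verbatim}
# Minimal numerical sketch (not the code behind the figure): s(x) from
# the two-level convolution with the Gaussian light-cone wavefunction.
# Assumption: the light-cone distribution of a constituent of mass m1
# (partner mass m2) is f(y) ~ \int d^2 k_T |psi(M^2)|^2, normalized to 1.
import numpy as np
from scipy.integrate import quad

alpha = 0.330                     # universal momentum scale (GeV)
m_s, m_D = 0.480, 0.600           # strange quark and ud spectator masses (GeV)
m_K, m_Lam = 0.494, 1.116         # kaon and Lambda masses (GeV)

def M2(y, kT2, m1, m2):           # light-cone invariant mass squared
    return (kT2 + m1**2)/y + (kT2 + m2**2)/(1.0 - y)

def f_raw(y, m1, m2):             # unnormalized f(y) for the Gaussian ansatz
    g = lambda kT2: np.exp(-M2(y, kT2, m1, m2)/alpha**2)   # |psi|^2
    return np.pi*quad(g, 0.0, np.inf)[0]

def normalized(m1, m2):
    norm = quad(lambda y: f_raw(y, m1, m2), 1e-4, 1 - 1e-4)[0]
    return lambda y: f_raw(y, m1, m2)/norm

f_Lam = normalized(m_Lam, m_K)    # Lambda in the K+ Lambda fluctuation
q_sLam = normalized(m_s, m_D)     # s quark in the Lambda (quark-spectator model)

def s_x(x):                       # s(x) = int_x^1 dy/y f(y) q(x/y)
    return quad(lambda y: f_Lam(y)*q_sLam(x/y)/y, x, 1 - 1e-4)[0]

print([round(s_x(x), 3) for x in (0.1, 0.3, 0.5)])
\end{verbatim}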
\vspace{.5cm}
\begin{figure}[htbp]
\begin{center}
\leavevmode
{\epsfxsize=4in \epsfbox{8159A01.eps}}
\end{center}
\caption{\baselineskip 13pt The momentum distributions for the
strange quarks and antiquarks in the light-cone meson-baryon
fluctuation model of intrinsic $q \bar q$ pairs, with the
fluctuation wavefunction of $K^+\Lambda$ normalized to 1. The curves
in (a) are the calculated results of $s(x)$ (solid curves) and $\bar
s(x)$ (broken curves) with the Gaussian type (thick curves) and
power-law type (thin curves) wavefunctions and the curves in (b) are
the corresponding $\delta_s(x)=s(x)-\bar s(x)$. The parameters are
$m_q=330$ MeV for the light-flavor quark mass, $m_s=480$ MeV for the
strange quark mass, $m_D=600 $ MeV for the spectator mass, the
universal momentum scale $\alpha=330$ MeV, and the power constant
$p=3.5$, with realistic meson and baryon masses. }
\label{bmfig1}
\end{figure}
We have performed similar calculations for the momentum
distributions
to Ref.~\cite{Lee:2004we} for details of the
calculations of the rates
$\Gamma_{t\,,\widetilde{t}}^-$ and
$\Gamma^\pm_{\widetilde{H}^\pm\widetilde{W}^\pm\,,
\widetilde{H}^0\widetilde{W}^0\,,\widetilde{H}^0\widetilde{B}^0\,,\widetilde{H}^0\widetilde{S}}$.
Here we present the analytic expression for the
singlino-driven $CP$-violating source term
appearing in the parameter ${\cal A}$ in Eq.~(\ref{eq:cal_a})
as follows:
\begin{equation}
S_{\widetilde{S}\widetilde{H}^0}^{\rm CPV}
= -2|\lambda|^2|M_{\widetilde{S}}||\mu_{\rm eff}|v^2
\dot{\beta}\sin(\phi_\lambda-\phi_\kappa)\,
\mathcal{I}_{\widetilde{S}\widetilde{H}^0}^f \;,
\end{equation}
where $|\mu_{\rm eff}|=|\lambda|v_S/\sqrt{2}$ and
\begin{equation}
|M_{\tilde S} (T)| =
\left[ 2 |\kappa|^2 v_S^2 +
\frac{|\lambda|^2+2|\kappa|^2}{8}\,T^2
\right]^{1/2}
\end{equation}
including the singlino thermal mass term.
We assume that there is no spontaneous $CP$ violation, i.e., $\theta=\varphi=0$.
We note that the source term vanishes when
$\sin(\phi_\lambda - \phi_\kappa)=0$ or
$\dot{\beta}=0$.
If $\beta$ has a kink-type profile, $\dot{\beta}\simeq v_w \Delta\beta/L_w$
and ${\cal A}$ becomes independent of $L_w$.
In the MSSM, $\Delta\beta$ was found to
be $\mathcal{O}(10^{-2}-10^{-3})$~\cite{Moreno:1998bq,Funakubo:2009eg}.
We are taking $\Delta\beta=0.02$ in our estimation of the source term
\footnote{
Note that the source term grows linearly with $\Delta\beta$, so our
choice is an optimistic one.
The variation with the choice of $\Delta\beta$
is to be regarded as the theoretical uncertainty
and the precise determination of $\Delta\beta$ in the NMSSM is
beyond the scope of this Letter. }.
The fermionic source function ${\cal I}^f$ takes
the generic form of
\begin{align}
\mathcal{I}_{ij}^f &= \frac{1}{4\pi^2}\int_0^\infty dk~\frac{k^2}
{\omega_i\omega_j}
\Big[
\big(1-2{\rm Re}(n_j)\big)
I(\omega_i, \Gamma_i,\omega_j,\Gamma_j)
+\big(1-2{\rm Re}(n_i)\big)\,
I(\omega_j, \Gamma_j,\omega_i,\Gamma_i)\nonumber\\
&\hspace{4cm}
-2\big({\rm Im}(n_i)+{\rm Im}(n_j)\big)\,
G(\omega_i,\Gamma_i,\omega_j, \Gamma_j)
\Big],
\end{align}
where
\begin{eqnarray}
n_{i}\equiv \frac{1}{e^{(\omega_{i}-i\Gamma_{i})/T}+1}
\end{eqnarray}
with $\omega_i=\sqrt{k^2+m^2_i}$.
We note that the thermal width $\Gamma_i$ at finite temperature is given by
the imaginary part of its self-energy
which is nonvanishing independently of whether the particle
$i$ is stable or not~\cite{Weldon:1983jn}. Specifically,
for the calculation of the source function,
we have taken the thermal widths given in Ref.~\cite{Chung:2009qs}
\footnote{For the thermal width of the singlino, we are taking
$\Gamma_{\widetilde{S}}=0.03\,T$ considering the large
coupling $|\lambda|=0.81$.
We find that $Y_B$ changes by about 25--35\,\%
as $\Gamma_{\widetilde{S}}$ varies between $0.003\,T$ and $0.03\,T$.}.
The thermal functions $I$ and $G$ are defined by
\begin{align}
I(a,b,c,d)
&= \frac{1}{2}\frac{1}{[(a+c)^2+(b+d)^2]}\sin\left[2\arctan\frac{a+c}{b+d}\right] \nonumber\\
& +\frac{1}{2}\frac{1}{[(a-c)^2+(b+d)^2]}\sin\left[2\arctan\frac{a-c}{b+d}\right],
\\[0.2cm]
G(a,b,c,d)
&= -\frac{1}{2}\frac{1}{[(a+c)^2+(b+d)^2]}\cos\left[2\arctan\frac{a+c}{b+d}\right] \nonumber\\
& +\frac{1}{2}\frac{1}{[(a-c)^2+(b+d)^2]}\cos\left[2\arctan\frac{a-c}{b+d}\right].
\end{align}
We note that the thermal functions lead to
${\cal I}^f_{ij} \propto \Gamma_i + \Gamma_j$ when $m_i\sim m_j$ and $T \gg \Gamma_i$.
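For concreteness, the thermal functions and the source function above
can be evaluated numerically; the sketch below (Python; illustrative
only, not the code used for our figures) implements $I$, $G$, $n_i$, and
${\cal I}^f_{ij}$ as written, with the masses, widths, and temperature
in the example chosen arbitrarily.
\begin{verbatim}
# Illustrative numerical sketch (not the code used for the figures):
# the thermal functions I, G, the Fermi factor n_i with complexified
# energy, and the fermionic source function I^f_{ij} as written above.
import numpy as np
from scipy.integrate import quad

def I_th(a, b, c, d):
    return 0.5/((a + c)**2 + (b + d)**2)*np.sin(2*np.arctan((a + c)/(b + d))) \
         + 0.5/((a - c)**2 + (b + d)**2)*np.sin(2*np.arctan((a - c)/(b + d)))

def G_th(a, b, c, d):
    return -0.5/((a + c)**2 + (b + d)**2)*np.cos(2*np.arctan((a + c)/(b + d))) \
          + 0.5/((a - c)**2 + (b + d)**2)*np.cos(2*np.arctan((a - c)/(b + d)))

def n(omega, Gamma, T):           # Fermi-Dirac factor with omega - i*Gamma
    return 1.0/(np.exp((omega - 1j*Gamma)/T) + 1.0)

def I_f(m_i, m_j, Gam_i, Gam_j, T, kmax=50.0):
    def integrand(k):
        wi, wj = np.sqrt(k**2 + m_i**2), np.sqrt(k**2 + m_j**2)
        ni, nj = n(wi, Gam_i, T), n(wj, Gam_j, T)
        return k**2/(wi*wj)*((1 - 2*nj.real)*I_th(wi, Gam_i, wj, Gam_j)
                             + (1 - 2*ni.real)*I_th(wj, Gam_j, wi, Gam_i)
                             - 2*(ni.imag + nj.imag)*G_th(wi, Gam_i, wj, Gam_j))
    return quad(integrand, 0.0, kmax*T)[0]/(4*np.pi**2)

# Arbitrary example numbers (GeV): nearly degenerate states at T = 110 GeV.
print(I_f(300.0, 310.0, 0.03*110.0, 0.025*110.0, 110.0))
\end{verbatim}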
\begin{figure}[!t]
\begin{center}
{\epsfig{figure=YB_vSc_MsbR_GSinoL.eps,height=10.0cm,width=10.0cm}}
\end{center}
\caption{\it The singlino-driven $Y_B$ as functions of
the singlet VEV $v_S$ at the phase transition which, in principle,
can take any value between 250 GeV and 600 GeV in the type-B
EWPT. We are normalizing our predictions to $Y_B^{\rm ob}=8.8\times 10^{-11}$
for three values of $M_{{\widetilde D}_{3}}=250$ GeV,
$400$ GeV, and $1$ TeV.
We fix $M_{{\widetilde Q}_{3}}=M_{{\widetilde U}_{3}}=1$ TeV.}
\label{fig:ybvsc}
\end{figure}
\begin{figure}[!t]
\begin{center}
{\epsfig{figure=YB_vSc_MstR_GSinoL.eps,height=10.0cm,width=10.0cm}}
\end{center}
\caption{\it The same as in Fig.~\ref{fig:ybvsc} but varying
$M_{{\widetilde U}_{3}}$:
$M_{{\widetilde U}_{3}}=150$ GeV, $250$ GeV, $400$ GeV and $1$ TeV.
We fix $M_{{\widetilde Q}_{3}}=M_{{\widetilde D}_{3}}=1$ TeV.}
\label{fig:ybvsc_MstR}
\end{figure}
In Fig.~\ref{fig:ybvsc}, we show our predictions for the
singlino-driven $Y_B/Y_B^{\rm ob}$ as functions of $v_S(T_C)$
taking three values of $M_{{\widetilde D}_{3}}=250$~GeV,
$400$~GeV, and $1$~TeV
with $Y_B^{\rm ob}$ being the averaged value of the two BAU data in Eq.~(\ref{YB_ob}).
Here $v_S(T_C)$ denotes the singlet VEV at $T_C$
and it can take on any value between 250 GeV and 600 GeV in the type-B EWPT:
\begin{equation}
(v, v_S)=(0, 600~{\rm GeV})\to(208~{\rm GeV}, 249~{\rm GeV})
\end{equation}
at $T_C=110$ GeV~\cite{Funakubo:2005pu}.
We find that
the singlino-Higgsino mass difference becomes smaller
as $v_S$ decreases, leading to a larger $Y_B$.
The more accurate determination of $Y_B$ requires
the knowledge of the profiles of the bubble wall
and the treatment of the diffusion equation beyond the
formalism developed in Refs.~\cite{Huet:1995sh,Lee:2004we}.
We leave the more precise determination
of $Y_B$ in the NMSSM framework for future work~\cite{future1}.
{}From Fig.~\ref{fig:ybvsc}, we can see that $Y_B$ is much suppressed
when $M_{{\widetilde D}_{3}}=1$~TeV
since the ratio $r_1$ is almost vanishing
when both the stops and sbottoms are heavy and/or degenerate.
However, $Y_B$ grows quickly as $M_{{\widetilde D}_{3}}$
decreases. When
$M_{{\widetilde D}_{3}} = 400$~GeV, the ratio
$Y_B/Y_B^{\rm ob}$ is larger than $1$ in the region
$v_S(T_C)\lsim 440$ GeV. Furthermore, in the case with
$M_{{\widetilde D}_{3}} = 250$~GeV,
sufficient BAU can be generated
via the singlino-driven mechanism,
irrespective of the nonlinear dynamics during the EWPT.
We find similar behavior by fixing $M_{\tilde{Q}_3}$ and $M_{\tilde{D}_3}$
and varying $M_{\tilde{U}_3}$, as shown in Fig.~\ref{fig:ybvsc_MstR}.
We observe that a given value of $M_{\tilde{U}_3}$ gives a slightly
smaller $Y_B$ than the same value of
$M_{\tilde{D}_3}$. Summarizing these results, we conclude that
a sizable mass splitting either in the stop
sector or the sbottom sector is needed
for the successful singlino-driven BAU.
In passing, we note that the resonance enhancement of the widths
$\Gamma_{\widetilde{H}^\pm\widetilde{W}^\pm\,,
\widetilde{H}^0\widetilde{W}^0\,,\widetilde{H}^0\widetilde{B}^0}$
induces the dip around $v_S(T_C)=350$ GeV
where $|M_1|=|M_2|=|\mu_{\rm eff}|$.
Lastly, the EWBG scenario considered here
includes two light Higgs states well below 100 GeV,
which escape the LEP constraints
\cite{Cheung:2010ba},
and a lightest neutralino of about 45 GeV with a
singlino fraction of $\sim 40\,\%$.
The light Higgs bosons and the lightest
neutralino deserve further study
in connection with the current LHC Higgs searches and
the abundance of dark matter in the Universe.

In this Letter, we have
examined a new possibility of
a singlino-driven mechanism
for the BAU in the NMSSM framework.
In contrast to the MSSM,
explicit and/or spontaneous $CP$ violation can occur
in the NMSSM Higgs potential even at the tree level.
We emphasize that this new source of
$CP$ violation may by itself give rise to a sufficient BAU
without any conflict with the current EDM constraints,
as long as there is a sizable mass splitting in the stop and/or
the sbottom sectors
with the lighter stop and/or sbottom weighing
below $\sim 500$ GeV~\cite{future1}.
\vspace{-0.2cm}
\subsection*{Acknowledgements}
\vspace{-0.3cm}
\noindent
E.S. would like to thank K. Funakubo, Y. Okada, Y.-F. Zhou and M. Asano
for useful discussions on the CTP formalism.
The work was supported in part by the National Science Council of
Taiwan under Grant Nos. 100-2112-M-007-023-MY3, 99-2112-M-007-005-MY3
and NSC-100-2811-M-001-037, and by the WCU program through the KOSEF
funded by the MEST (R31-2008-000-10057-0).
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
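Instead, each author should receive their own \verb|\author|,
\verb|\affiliation|, and \verb|\email| commands, roughly as in the
following sketch (the name, institution, address, and e-mail are
placeholders):
\begin{verbatim}
\author{Brooke Aster}
\affiliation{%
  \institution{University of Example}
  \city{Springfield}
  \country{USA}}
\email{aster@example.edu}
\end{verbatim}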
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
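As a rough sketch (the concept entry and keyword list below are
placeholders rather than a real classification selection), the generated
CCS commands and the user-defined keywords take approximately this form:
\begin{verbatim}
\begin{CCSXML}
<ccs2012>
 <concept>
  <concept_id>00000000.00000000</concept_id>
  <concept_desc>Computer systems organization~Embedded systems</concept_desc>
  <concept_significance>500</concept_significance>
 </concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[500]{Computer systems organization~Embedded systems}

\keywords{datasets, neural networks, text tagging}
\end{verbatim}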
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader. Figure captions go below the figure. Your figures should
{\bfseries also} include a description suitable for screen readers, to
assist the visually-challenged to better understand your work.
Figure captions are placed {\itshape below} the figure.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\section{Background} \label{sec:background}
\subsection{Quantum Programs and Architectures} \label{quantum-architecture}
The typical fundamental unit of quantum information is the qubit (quantum bit). Unlike classical bits which occupy either 1 or 0 at any given time, quantum bits may exist in a superposition of the two basis states $\ket{0}$ and $\ket{1}$. Qubits are manipulated via quantum gates, operations which are both reversible and preserve a valid probability distribution over the basis states. There is a single irreversible quantum operation called measurement, which transforms the qubit to either $\ket{0}$ or $\ket{1}$ probabilistically. Pairs of qubits are interacted via two-qubit gates, which are generally much more expensive in terms of error rates and latency.
There are a variety of competing styles of quantum systems each with a hardware topology specifying the relative location of the machine's qubits. This topology indicates between which pairs of qubits two-qubit interactions may be performed.
Typical quantum hardware does not readily support long-range multi-qubit operations but does provide a mechanism for moving qubits, either by swapping qubits (in the case of nearest neighbor or 2D-grid devices), teleportation via photon mediated entanglement, physically moving qubits (as in ion-trap devices), \reviewaddition{or a resonant bus (as in superconducting devices)}. Interacting qubits which are distant generate additional latency which is undesirable for near-term qubits with limited coherence time (the expected lifetime of a qubit before an error). \reviewaddition{These machines have expected error rates on the order of 1 in every 100-1000 two-qubit gates \cite{ionq, ibm_error}, and non-local communication has error on average 10-100x worse.}
In this paper, we are motivated by a specific set of architectures or extensions to such architectures, as in \cite{schuster_machine, ion1, ion2, ion3}. In these devices, qubits are arranged into several regions of high connectivity \reviewaddition{with expensive communication between the clusters, referred to as non-local communication.} These devices naturally lend themselves to mapping techniques which utilize partitioning algorithms.
\begin{figure}
\centering%
\scalebox{\figscale}{
\input{figs/example-circuit.qcircuit}%
}%
\\%
\scalebox{\figscale}{%
\resizebox{\linewidth}{!}{\input{figs/moment-graphs.tikz}}}%
\caption{(Top) An example of a quantum program with single-qubit gates not shown. The inputs are on the left and time flows to the right toward the outputs. The two-qubit operations here are CNOT (controlled-NOT).
(Bottom) The graph representations of the above circuit. On the far left is the total interaction graph, where each edge is weighted by the total number of interactions for the whole circuit. To the right is the sequence of time slice graphs, where an edge is present only if the qubits interact in that time slice. The sum of all time slice graphs is the total interaction graph.}
\label{fig:sample_program}
\end{figure}
Quantum programs are often represented as circuit diagrams, for example the one in Figure \ref{fig:sample_program}a. We define a \textit{time slice} in a quantum program as a set of operations which are parallel in the circuit representation of the program. We express time slices as a function of both the circuit representation and the limitations of the specific architecture. We also define a \textit{time slice range} as a set of contiguous time slices, which we also refer to simply as \textit{slices}; when no length is specified, a slice is assumed to have length 1.
For evaluation, we consider two primary metrics: the \textit{width} and the \textit{depth} of a circuit. The width is the total number of qubits used and the depth, or the run time, is the total number of time slices required to execute the program. Qubit movement operations which are inserted in order \reviewaddition{to move interacting qubits into the same partition} contribute to the overall depth of the circuit.
We consider two abstract representations of quantum programs: the total interaction graph and a sequence of time slice interaction graphs, examples of which are found in Figure \ref{fig:sample_program}b. In both representations, each qubit is a vertex and edges between qubits indicate two-qubit operations acting on these qubits. In the total interaction graph, edges are weighted by the total number of interactions between pairs of qubits. In time slice graphs, an edge with weight 1 exists only if the pair of qubits interact at that time slice.
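A minimal sketch of these two representations (Python; the circuit is assumed to be given simply as a list of time-slice-tagged two-qubit gates, which is an illustrative input format rather than our actual toolflow):
\begin{verbatim}
# Sketch: build the total interaction graph and the time slice graphs
# from a circuit given as (time_slice, qubit_a, qubit_b) two-qubit gates.
from collections import defaultdict

gates = [(0, 0, 1), (0, 2, 3), (1, 1, 2), (2, 0, 3), (2, 1, 2)]

total = defaultdict(int)       # edge -> total number of interactions
slices = defaultdict(set)      # time slice -> set of weight-1 edges
for t, a, b in gates:
    edge = (min(a, b), max(a, b))
    total[edge] += 1           # weighted by the number of interactions
    slices[t].add(edge)        # present only if the qubits interact at t

print(dict(total))
print({t: sorted(edges) for t, edges in slices.items()})
\end{verbatim}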
\subsection{Graph Partitioning}
\subsubsection*{\textbf{Static Partitioning}}
Finding graph partitions is a well-studied problem \cite{fiduccia1982linear, park1995algorithms, kernighan1970efficient, hendrickson1995multi} and is used frequently in classical architecture. In this paper, we consider a variant of the problem which fixes the total number of partitions and bounds the total number of elements in each partition. Specifically, given a fixed number of partitions $k$, a maximum partition size $p$, and an undirected weighted graph $G$ with $\abs{V(G)} \le k \cdot p$, we want to find a $k$-way assignment of the vertices to partitions such that the weight of edges between vertices in different partitions is minimized. This can be rephrased in terms of \reviewaddition{statically} mapping a quantum circuit to the aforementioned architectures. Let the total interaction graph be $G$ and let $k$ and $p$ be fixed by the topology of the architecture. Minimizing the edge weight between partitions corresponds to minimizing the total number of swaps which must be executed.
Solving for an optimal $k$-way partition is known to be hard \cite{partition_hardness}, but there exist many algorithms which find approximate solutions \cite{kernighan1970efficient, park1995algorithms, fiduccia1982linear}. There are several heuristic solvers such as in \cite{METIS, graph1} which can be used to find an approximate $k$-way partition of a graph. However, they often cannot make guarantees about the size of the resulting partitions, preventing us from using them for the fixed-size partitioning problem.
\subsubsection*{\textbf{Partitioning Over Time}}
Rather than considering a single graph to be partitioned we instead consider the problem of generating a \textit{sequence} of assignments of qubits to clusters, one for each moment of the circuit. We want to minimize the total number of differences between consecutive assignments, naturally corresponding to minimizing the total number of non-local communications between clusters. This problem is much less explored than the prior approach. Partitioning in this way guarantees interacting qubits will be placed in the same partition making the schedule for the input program immediate. In the case of a static partition, which gives only the initial mapping, a further step is needed to generate a schedule.
\subsubsection*{\textbf{Optimal Compilation and Exact Solvers}}
It is too computationally expensive to find a truly optimal solution for even reasonably sized input programs. Constraint-based solvers have been used recently to look for optimal and near-optimal solutions \cite{murali, uwsic_spatial_arch1, uwisc_spatial_arch2}. Unfortunately, these approaches will not scale in the near term, let alone to larger, error-corrected devices. We explored the use of these solvers but found them to be too slow. Finding a static mapping with SMT is impractical with more than 30 to 40 qubits, and SMT partitioning over time is impractical when the number of qubits times the depth grows beyond roughly 40.
\section{Experimental Setup} \label{sec:benchmarks}
All experiments were run on an Intel(R) Xeon(R) Silver 4100 CPU at 2.10 GHz with 128 GB of RAM with 32 cores running Ubuntu 16.04.5. Each test was run on a single core. Our framework runs on Python 3.6.5 using Google's Cirq framework for circuit processing and for implementing our benchmarks \cite{cirq}. For testing exact solvers, we used the Z3 SMT solver \cite{z3}, though results could not be obtained for the size of benchmarks tested because Z3 never completes on problems this size.
\subsection{Benchmarks}
We benchmark the performance of our circuit mapping algorithms on some common sub-circuits used in many algorithms (for example Shor's and Grover's) and, for comparison, on random circuits. Our selection of benchmarks covers a wide variety of internal structure. For every benchmark, we use a representative cluster-based architecture with 100 qubits arranged in 10 clusters of 10 qubits each, though our methods are not limited to any particular size. We sweep over the number of qubits used from 50 to 100, where for a few benchmarks the remaining qubits are available for use as either clean or dirty ancilla\footnote{An ancilla is a temporary quantum bit used often to reduce the depth or gate count of a circuit. ``Clean'' indicates the initial state of the ancilla is known while ``dirty'' means the state is unknown.}.
\subsubsection*{\textbf{Generalized Toffoli Gate}}
The Generalized Toffoli gate ($C^nU$) is an $n$-controlled $U$ gate for any single qubit unitary $U$ and is well studied \cite{cnx1, cnx2,cnx3,cnx4,cnx5,cnx6}. A $C^nX$ gate works by performing an $X$ gate on the target conditioned on all control qubits being in the $\ket{1}$ state. There are many known decompositions \cite{GidneyBlogPost, He_circuit, Barenco} both with and without the use of ancilla. A complete description of generating these circuits is given by \cite{cnx_decomps}, which provides a method for using clean ancilla.
\subsubsection*{\textbf{Multi-Target Gate}}
The multi-target gate performs a single-qubit gate on many targets conditioned on a single control qubit being in the $\ket{1}$ state. This is useful in several applications such as one quantum adder design \cite{cnx4} and can also be used in the implementation of error correcting codes \cite{ecc}. These circuits can be generated with different numbers of ancilla (both clean and dirty), as given by \cite{cnx_decomps}.
\subsubsection*{\textbf{Arithmetic Circuits}}
Arithmetic circuits in quantum computing are typically used as subcircuits of much larger algorithms like Shor's factoring algorithm and are well studied \cite{cnx3, cnx4, rev_mult}. Many arithmetic circuits, such as modular exponentiation, lie either at the border or beyond the range of NISQ era devices, typically requiring either error correction or large numbers of data ancilla to execute. We examine two types of quantum adders - the Cuccaro Adder and the QFT Adder - as representatives of a class of highly structured and highly regular arithmetic circuits \cite{cuccaro2004adder, qft_adder}.
\subsubsection*{\textbf{Random Circuit}}
The gates presented above have a lot of regular structure when decomposed into circuits. We want to contrast this with circuits with less structure.
We create these random circuits by picking a probability $p$ and a number of samples, and generating an interaction between each pair of qubits with probability $p$ in every sample. These circuits have the same structure as QAOA solving a min-cut problem on a random graph with edge probability $p$, so they are a realistic benchmark.
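As a sketch of this construction (Python; illustrative only, the sampling used for our benchmarks may differ in detail):
\begin{verbatim}
# Sketch: random benchmark generation -- in each sample (time step), add
# an interaction between each pair of qubits with probability p, mirroring
# a random graph with edge probability p.
import itertools, random

def random_interactions(n_qubits, n_samples, p, seed=0):
    rng = random.Random(seed)
    circuit = []                   # list of (sample, qubit_a, qubit_b)
    for t in range(n_samples):
        for a, b in itertools.combinations(range(n_qubits), 2):
            if rng.random() < p:
                circuit.append((t, a, b))
    return circuit

print(len(random_interactions(10, 5, 0.1)))
\end{verbatim}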
\subsection{Circuit to Hardware}
We begin with a quantum program which is specified at the gate level, consisting of one- and two-qubit gates. We then generate the total interaction and time slice graphs, where we assume gates are inserted at the earliest possible time. Any further optimization, such as via commutativity or template matching, should be done prior to mapping the program to hardware. We also take the specifications of the hardware, such as the number of clusters and the maximum size of the clusters, which constrain possible mappings.
We use rOEE as our algorithm for Fine Grained Partitioning. We first pass the total interaction graph to a static partitioning algorithm to obtain a good starting assignment. This serves as a seed to rOEE, rather than starting with a random assignment which may introduce unnecessary starting communication. To the time slice graphs, we apply the lookahead function to obtain the lookahead graphs. We run rOEE on this set of graphs to obtain an assignment sequence such that, at every time slice, qubits which interact appear in the same bucket. This sequence describes what non-local communication is added before each slice. Finally, we compute the cost and insert the necessary movement operations into the circuit \reviewaddition{to move interacting qubits into the same partition}; the resulting sequence of assignments is a path. As a byproduct, by generating a partitioning over time, we obtain a schedule of operations to be performed.
\section{Conclusion} \label{sec:conclusion}
As an alternative to using \reviewaddition{near-optimal} graph partitioning algorithms to find a single static assignment for an entire circuit, we show that considering the locality in a circuit during mapping reduces the total non-local communication required when running a quantum circuit. The natural restriction of static mappings suggests that the problem of mapping qubits to cluster-based architectures has a different structure than partitioning a single graph for minimum weight between the partitions. Our modification to OEE no
longer attempts to optimize the weights at every time slice. It is much more effective in practice to guide the partitioning based on heuristics and not to find the optimal value for every time slice. Optimality at every time slice does not correspond to a global reduction in non-local communication overhead.
We propose to use similar schemes for other cluster-based quantum hardware, especially hardware based on internally connected clusters. In our model, the different clusters of the architecture are also very well connected to one another, but the approach is not limited to this specific instance of a clustered architecture.
\reviewaddition{Our proposed algorithm produces partitions based on a simplifying assumption about the connectivity of the clusters because the cost of non-local communication is substantially more expensive than any in-cluster operations. Our method can be adapted to other cluster-based architectures by first applying our partitioning algorithm to obtain good clusters of operations and then adding a device-specific scheduling algorithm for scheduling much cheaper in-cluster operations.}
A relaxed version of a heuristic, with well-chosen lookahead functions, outperforms a well-selected initial static mapping. Using lookahead weights has been explored previously, as in \cite{paler1}, and more can be done to better choose the lookahead function, for example based on a metric of circuit regularity. Techniques for mapping which attempt to solve for near-optimal mappings will not scale; instead, heuristics will be the dominant approach. Our approach is computationally tractable and adaptable to changes in machine architecture, such as additional or varied-size clusters.
Non-local communication overhead in quantum programs makes up a large portion of all operations performed, therefore, minimizing non-local communication is critical. In recent hardware \cite{mount2016scalable}, the cost of moving between clusters makes non-trivial computation impossible with current standards for mapping qubits to hardware. Reducing this hardware bottleneck or finding algorithms to reduce the non-local communication are critical for quantum computation. We reduce this cost substantially in cluster-based architectures \reviewaddition{(see Table \ref{tab:est_cost})}.
\section{Introduction} \label{introduction}
Quantum computing aims to provide significant speedup to many problems by taking advantage of quantum mechanical properties such as superposition and entanglement \cite{quantum_ml, quantum_chemistry, quantum_optimization}. Important applications such as Shor's integer factoring algorithm \cite{Shor} and Grover's unordered database search algorithm \cite{Grover} provide potentially exponential and quadratic speedups, respectively.
\reviewaddition{Current quantum hardware of the NISQ era \cite{preskill_nisq}, which has on the order of tens to hundreds of physical qubits, is insufficient to run these important quantum algorithms. Scaling these devices even to moderate sizes with low error rates has proven extremely challenging. Manufacturers of quantum hardware such as IBM and IonQ have had only limited success in extending the number of physical qubits present on a single contiguous piece of hardware. Issues on these devices such as crosstalk error scaling with the number of qubits or increased difficulty in control will limit the size this single-chip architecture can achieve \cite{bruzewicz2019trapped, brown2016co}}.
\reviewaddition{Due to these challenges, as well as developing technology for communicating between different quantum chips \cite{blakestad2009high, wallraff2018deterministic}, we expect quantum hardware to scale via a modular approach similar to how a classical computer can be scaled by increasing the number of processors, not just the size of the processors. Two of the leading quantum technologies, ion trap and superconducting physical qubits, are already beginning to explore this avenue and experimentalists project modularity will be the key to moving forward \cite{brecht2016multilayer, devoret2013superconducting, duan2010colloquium, bapat2018unitary, maslov2018outlook, monroe2013scaling, hucul2017spectroscopy}. One such example for ion traps is shown in Figure \ref{fig:modular-ion} where many trapped ion devices are connected via a single central optical switch. Technology such as resonant busses in superconducting hardware or optical communication techniques in ion trap devices will enable a more distributed approach to quantum computing, having many smaller, well-connected devices with sparser and more expensive non-local connections between them. Optimistically, due to current technology in the near term, we expect these non-local communication operations to have somewhere between 5-100x higher latency than in-cluster communication.}
\reviewaddition{With cluster-based approaches becoming more prominent, new compiler techniques for mapping and scheduling of quantum programs are needed. As the size of executable computations increases, it becomes more and more critical to employ program mappings exhibiting both the adaptivity of dynamic techniques and the global optimization of static techniques. Key to realizing both advantages is to simplify the problem. Since non-local communication is dominant, we focus on only non-local costs. This simplification, along with static knowledge of all control flow, allows us to map a program in many timeslices with substantial lookahead for future program behavior. This approach would not be computationally tractable on a non-clustered machine.}
\begin{figure}
\centering
\quad\qquad
\scalebox{\figscale}{%
\input{figs/plot-static-vs-best-bar.tikz}}
\caption{Non-local communication overhead in circuits mapped to cluster-based machines. Our new mapping scheme FPG-rOEE \reviewaddition{reduces the number of operations added for non-local communication} on all benchmarks.}
\label{fig:com_costs_results}
\end{figure}
For devices with many modular components, mapping quantum programs translates readily to a graph partitioning problem with the goal of minimizing edge crossings between partitions. This approach is standard in many classical applications such as high-performance parallel computing \cite{vlsi_partitioning, classical_partitioning, hpc_graph_partitioning}, with the goal of minimizing total latency. Here latency is approximated by the total number of times qubits must be shuttled between different regions of the device. Graph partitioning is known to be hard and heuristics are the dominant approach \cite{fiduccia1982linear, park1995algorithms, kernighan1970efficient, hendrickson1995improved, heuristic1}.
\reviewaddition{While this problem is related to many problems in distributed or parallel computing, there are a few very important distinctions. In a typical quantum program, the control flow is statically known at compile time, meaning all interactions between qubits are known. Furthermore, the no-cloning theorem states we cannot make copies of our data, meaning non-local communication between clusters is \textit{required} to interact data qubits. Finally, any additional non-local operations affect not only latency as they would classically but are directly related to the probability a program will succeed since operations in quantum computing are error prone and therefore reducing non-local communication is especially critical for successful quantum program execution.}
Our primary contribution is the development of a complete system for mapping quantum programs to near-term cluster-based quantum architectures via graph partitioning techniques where qubit interaction in-cluster is relatively free compared to expensive out-of-cluster interaction. Our primary goal is to minimize the communication overhead by reducing the number of low-bandwidth, high-latency operations such as moving qubits which are required in order to execute a given quantum program. Rather than partitioning the circuit once to obtain a generally good global assignment of the qubits to clusters, we find a sequence of assignments, one for each time slice in the circuit. This fine-grained approach is much less studied, especially for this class of architectures. With our techniques, we reduce the total number of non-local communication operations by 89.8\% in the best case and 60.9\% in the average case; Figure \ref{fig:com_costs_results} shows a few examples of circuits compiled statically versus with our methods.
The rest of the paper is organized as follows: \reviewaddition{In Section \ref{sec:background}, we introduce the basics of quantum circuits and graph partitioning.} In Section \ref{sec:mapping}, we introduce our proposed methodology for mapping qubits to the clusters of these modular systems, specifically a method for \textit{fine-grained partitioning}. In Section \ref{sec:lookahead}, we introduce a method for applying lookahead weights to tune what is considered \textit{local} at each time slice and evaluate their effect on non-local communication. In Section \ref{sec:benchmarks}, we introduce the benchmarks we test on and present our explicit toolflow for taking quantum programs to a sequence of mappings \reviewaddition{which guarantee interacting qubits are moved into the same partition before each time slice using non-local communication}. In Section \ref{sec:results}, we present our results and provide a brief discussion, and in Section \ref{sec:prior}, we present a summary of related work for hardware mapping. We conclude in Section \ref{sec:conclusion}.
\begin{figure}
\centering
\scalebox{\figscale}{%
\includegraphics[width=\columnwidth,keepaspectratio=true]{figs/modular-ion.png}}\vspace*{-.2in}%
\caption{An example modular architecture of qubits in individual ion traps connected with optics proposed by Monroe et al.~\cite{modular-ion}. Communication between traps is supported by photon-mediated entanglement. Similar communication for superconducting qubits \cite{yale-modular} can facilitate modular architectures for that technology.}
\label{fig:modular-ion}
\end{figure}
\section{Lookahead Weights} \label{sec:lookahead}
Finding a suitable lookahead weight function to use in Fine Grained Partitioning is necessary to maximize the benefit gained from choosing our swaps appropriately between time slices. We only require the lookahead function to be monotonically decreasing and non-negative. Throughout this section, we denote our lookahead weight function as $D$.
\subsection{Natural Candidates}
We explore a few natural candidate weighting functions from the huge space of possible functions. In each of the functions we explore below, we vary a stretching factor or scale $\sigma$ which can be tuned for the given circuit, providing a trade-off between local and global information.
\subsubsection*{\textbf{Constant Function}}
\[ D(n) = \begin{cases}
1 & n\leq \sigma \\
0 & n > \sigma
\end{cases}
\]
A constant function captures a fixed amount of local information in the circuit. This is just the number of times the pair of qubits interact in the next $\sigma$ time slices. For $\sigma = 0$, this function corresponds to no lookahead applied.
\subsubsection*{\textbf{Exponential Decay}}
\[ D(n) = 2^{-n/\sigma}
\]
An exponential is a natural way to model a decaying precedence. When $\sigma\le 1$, any interaction will always have a weight at least as high as the sum of interactions after it.
\begin{table*}[]
\caption{\reviewaddition{A subset of our benchmarks. Clean multi-control has a maximum size of 87. With more than 87 data qubits and fewer than 13 clean ancilla, the depth of the multi-control decomposition is too large to run on these cluster-based machines with predicted error rates.}}
\centering
\reviewaddition{
\input{figs/benchmarks-table.tex}
}
\label{tab:benchmarks-table}
\end{table*}
\subsubsection*{\textbf{Gaussian Decay}}
\[ D(n) = e^{-n^2/\sigma^2}
\]
Similar to an exponential, a Gaussian is natural to model decaying precedence with more weight given to local interactions.
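The three candidates can be written down directly; a small sketch (Python, with the stretching factor $\sigma$ exposed as a parameter):
\begin{verbatim}
# Sketch: the three candidate lookahead functions D(n), each tuned by the
# stretching factor sigma (trade-off between local and global information).
import math

def constant(n, sigma):        # fixed window over the next sigma slices
    return 1.0 if n <= sigma else 0.0

def exponential(n, sigma):     # decaying precedence; for sigma <= 1 any
    return 2.0**(-n/sigma)     # interaction outweighs the sum of later ones

def gaussian(n, sigma):        # faster decay, more weight on local gates
    return math.exp(-n**2/sigma**2)
\end{verbatim}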
\subsection{Evaluating Lookahead Functions}
To evaluate the choice of lookahead function as well as choice of $\sigma$, we study Fine Grained Partitioning using rOEE with all of the above candidate functions with varying $\sigma$ on benchmarks of various types: those with lots of local structure (a quantum ripple carry adder), those with very little structure (a random circuit), and those which lie somewhere in between (a Generalized Toffoli decomposition).
In Figure \ref{fig:lookahead-bar}, we show an example of a circuit which benefits from having a large scale $\sigma$, the Cuccaro Adder \cite{cuccaro2004adder}. In contrast, all of the random benchmarks benefit from having small $\sigma$ values, functions which decay quickly even for small $n$.
We also compare the different natural lookahead functions we described in the previous section on some representative benchmarks in Figure \ref{fig:lookahead-results}. In these figures, we see the exponential decay has a clear benefit over the rest in the structured circuits of the Multi-Control gate and the Cuccaro Adder. In random circuits, there seems to be no clear benefit to any of the lookahead functions, so long as they have some small lookahead scaling factor. So, we use exponential decay with $\sigma=1$ for our primary benchmarks in Section \ref{sec:benchmarks}.
\section{Mapping Qubits to Clusters} \label{sec:mapping}
We define an \textit{assignment} as a set of partitions of the qubits, usually at a specific time slice. We present algorithms which take a quantum circuit and output a \textit{path}, defined as a sequence of assignments of the qubits with the condition that every partitioning in the sequence is \textit{valid}. An assignment is valid if each pair of interacting qubits in a time slice are located within the same partition. \reviewaddition{Finally, we define the \textit{non-local communication} between consecutive assignments as the total number of operations which must be executed to transition the system from the first assignment to the second assignment.} The total communication of a path is the sum over all communication along the path.
\subsection{Computing Non-local Communication}
To compute the non-local communication overhead between consecutive assignments of $n$ qubits, we first construct a directed graph with multiple edges, where the nodes are the partitions and each edge indicates a qubit moving from partition $i$ to partition $j$. We extract all 2-cycles from this graph and remove their edges. We proceed by extracting all 3-cycles, and so on, recording the number of $k$-cycles extracted as $c_k$. When there are no cycles remaining, the total number of remaining edges is $r$, and the total communication overhead $C$ is given by
$$C = r + \sum_{k=2}^n (k-1)\cdot c_k$$
The remaining edges indicate a qubit swapping with an unused qubit. We repeat this process for every pair of consecutive assignments in the path to compute the total non-local communication of the path. These cycles specify where qubits will be moved with non-local communication.
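A sketch of this procedure (Python; a greedy shortest-cycle-first extraction, which is one straightforward way to realize the description above, not necessarily our implementation):
\begin{verbatim}
# Sketch: non-local communication between two consecutive assignments.
# Moves form a directed multigraph over partitions; k-cycles are removed
# greedily (shortest first) at a cost of k-1 swaps each, and each leftover
# edge costs one swap with an unused qubit, giving C = r + sum (k-1) c_k.
from collections import defaultdict

def _find_cycle(moves, length):
    nodes = {p for edge in moves if moves[edge] > 0 for p in edge}
    def dfs(path):
        last = path[-1]
        if len(path) == length:
            return path if moves.get((last, path[0]), 0) > 0 else None
        for nxt in nodes:
            if nxt not in path and moves.get((last, nxt), 0) > 0:
                found = dfs(path + [nxt])
                if found:
                    return found
        return None
    for start in nodes:
        cycle = dfs([start])
        if cycle:
            return cycle
    return None

def comm_cost(assign_a, assign_b, n_partitions):
    moves = defaultdict(int)                 # (src, dst) -> multiplicity
    for q, src in assign_a.items():
        if src != assign_b[q]:
            moves[(src, assign_b[q])] += 1
    cost = 0
    for k in range(2, n_partitions + 1):     # extract shortest cycles first
        while True:
            cycle = _find_cycle(moves, k)
            if cycle is None:
                break
            for a, b in zip(cycle, cycle[1:] + cycle[:1]):
                moves[(a, b)] -= 1           # remove the cycle's edges
            cost += k - 1                    # a k-cycle costs k-1 swaps
    return cost + sum(moves.values())        # leftovers: one swap each

a = {0: 0, 1: 0, 2: 1, 3: 1}
b = {0: 1, 1: 0, 2: 0, 3: 1}                 # qubits 0 and 2 trade partitions
print(comm_cost(a, b, n_partitions=2))       # -> 1 (a single 2-cycle)
\end{verbatim}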
\subsection{Baseline Non-local Communication} \label{baseline}
As a baseline we consider using a \textit{Static Mapping} \reviewaddition{using an owner computes model}, which takes into account the full set of qubit interactions for the circuit, providing a generally good assignment of the qubits for the entire duration of the program, called the static assignment. At each time step in the circuit, a good static assignment ensures, on average, qubits are not \textit{too far} from other qubits they will interact with frequently.
\reviewaddition{At each time slice we find the assignment which requires the fewest swaps from the static assignment but has each pair of interacting qubits in a common partition. \reviewaddition{These assignments form} a path for the computation. This method of path generation, used in conjunction with a partitioning algorithm, is named after that algorithm; for example, Static Mapping with OEE (Overall Extreme Exchange, discussed further later) is referred to as Static-OEE.}
\subsection{Fine Grained Partitioning}
The primary approach we developed to dynamically map a circuit to hardware is \textit{Fine Grained Partitioning} (FGP). In this algorithm, we find an assignment at every time slice using the time slice graphs. By default, these time slice graphs give only immediately local information about the circuit but have no knowledge about upcoming interactions. Alone, they only specify the constraints of which qubits interact in that time slice. The key advantage for this method is using \textit{lookahead weights}. The main idea is to construct modified time slice graphs capturing more structure in the circuit than the default time slice graphs. We refer to these graphs as time slice graphs with lookahead weights, or \textit{lookahead graphs}.
\begin{figure}
\centering
\scalebox{\figscale}{%
\scalebox{0.8}{\input{figs/lookahead-example.tikz}}}
\caption{An example of a time slice graph with lookahead weights based on the circuit in Figure \ref{fig:sample_program}. We take the graph from the left and add weight to the edges of qubits that interact in the future. In this case, we take the weight equal to the number of times the qubits will interact in the future.}
\label{fig:lookahead}
\end{figure}
To construct the lookahead graph at time $t$, we begin with the original time slice graph and assign effectively infinite weight to the edges already present. For every pair of qubits we add the weight
$$w_t(q_i,q_j) = \sum_{t< m\le T} I(m,q_i,q_j)\cdot D(m-t)$$
to their edge, where $D$ is a monotonically decreasing, non-negative function which we call the lookahead function, $I(m,q_i,q_j)$ is an indicator that is 1 if $q_i$ and $q_j$ interact in time slice $m$ and 0 otherwise, and $T$ is the number of time slices in the circuit. The new time slice graphs consider the remainder of the circuit, weighting sooner interactions more heavily. The effectively infinite weight on edges between interacting qubits is present to guarantee that any assignment will place interacting qubits into the same partition. An example is shown in Figure~\ref{fig:lookahead}.
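As a minimal sketch of the lookahead graph construction (assuming each time slice is given as a set of interacting qubit pairs; the exponential lookahead function shown is only an illustrative choice satisfying the stated requirements):
\begin{verbatim}
INF = float("inf")   # stands in for the effectively infinite weight

def lookahead_graph(time_slices, t, D=lambda d: 2.0 ** (-d)):
    # time_slices: list of sets of frozenset({q_i, q_j}) pairs.
    w = {}
    # Interactions in the current slice get effectively infinite weight.
    for pair in time_slices[t]:
        w[pair] = INF
    # Future interactions add D(m - t) to the corresponding edge.
    for m in range(t + 1, len(time_slices)):
        for pair in time_slices[m]:
            if w.get(pair) == INF:
                continue
            w[pair] = w.get(pair, 0.0) + D(m - t)
    return w
\end{verbatim}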
The final mapping of the qubits in our model is obtained by partitioning each of these time slices. Iteratively, we find the next assignment with a partitioning algorithm, seeded with the assignment obtained from the previous time slice. For the first time slice, the seed can be chosen randomly or taken from the static assignment (presented in Section~\ref{baseline}). The new weights in the time slice graphs will force any movement necessary in the partitioning algorithm. Together, these assignments give us a valid path for the circuit to be mapped onto our hardware.
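The overall FGP iteration can then be sketched as follows, using the \texttt{lookahead\_graph} sketch above (the \texttt{partitioner} callable stands in for OEE or rOEE seeded with the previous assignment; its interface is assumed for illustration):
\begin{verbatim}
def fgp_path(time_slices, initial_assignment, partitioner):
    # Builds a path: one assignment (dict qubit -> partition) per slice.
    path = []
    seed = dict(initial_assignment)   # random seed or the static assignment
    for t in range(len(time_slices)):
        graph = lookahead_graph(time_slices, t)
        seed = partitioner(graph, seed=seed)   # seeded with previous slice
        path.append(dict(seed))
    return path
\end{verbatim}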
\subsection{Choosing the Partitioning Algorithm}
We assume full connectivity within clusters and the ability to move qubits between clusters. These assumptions give us the liberty to tap into well studied partitioning algorithms. The foundation of many partitioning algorithms is largely considered to be the Kernighan-Lin (KL) heuristic for partitioning graphs with bounded partition sizes \cite{kernighan1970efficient, fiduccia1982linear, park1995algorithms}. The KL heuristic selects pairs of vertices in a graph to exchange between partitions based on the weights between the vertices themselves and the total weight between the vertices and the partitions.
We consider a natural extension of the KL algorithm, Overall Extreme Exchange (OEE), presented by Park and Lee \cite{park1995algorithms}. The OEE algorithm finds a sequence of pairs of vertices to exchange and makes as many exchanges as provide an overall benefit. Using OEE, the Fine Grained Partitioning scheme often overcorrects (see Figure \ref{fig:partitioner_results}). If a qubit needs to interact in another partition, then it can ``drag along'' a qubit it is about to interact with, because OEE attempts to minimize the weight between partitions regardless of its relation to the previous or next time slice graphs. Choosing an optimal partitioning algorithm would not give better solutions to our non-local-communication-based mapping problem. Instead, we consider a more relaxed version of a partitioning algorithm using the KL heuristic.
\subsubsection*{\textbf{Relaxing the Partitioning Algorithm}}
We provide a relaxed version of the algorithm better suited to generating a path over time, called relaxed-OEE (rOEE). We run OEE until the partition is valid for the time slice (all interacting qubits are in the same partition) and then make no more exchanges. This is similar in approach to finding the time slice partitions in our Static Mapping approaches. It is critically important that we make our exchange choices using lookahead weights applied to the time slice graphs; choosing without information about the upcoming circuit provides no insight into which qubits are beneficial to exchange. As a side benefit, this change strictly speeds up OEE, an already fast heuristic algorithm. Although a strict asymptotic time bound for OEE is difficult to prove, rOEE never took more than a few seconds on any instance it was given.
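The stopping rule that distinguishes rOEE from OEE can be sketched as follows (the \texttt{propose\_exchange} callable stands in for a single KL/OEE exchange step on the lookahead graph; it is an assumed interface, not code from this work):
\begin{verbatim}
def relaxed_oee(graph, assignment, current_slice_pairs, propose_exchange):
    def valid(a):
        # Valid once every pair interacting in this slice shares a partition.
        return all(a[q1] == a[q2] for q1, q2 in current_slice_pairs)

    while not valid(assignment):
        q_a, q_b = propose_exchange(graph, assignment)
        assignment[q_a], assignment[q_b] = assignment[q_b], assignment[q_a]
    return assignment
\end{verbatim}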
With such a significant non-local communication overhead improvement (see Figure \ref{fig:partitioner_results}), this relaxed KL partitioning algorithm is much better suited for the problem at hand. It has the ability to take into account local structure in the circuit and avoid over correcting and swapping qubits unnecessarily.
\section{\reviewaddition{Motivation}} \label{sec:motivation}
\reviewaddition{Current quantum hardware, which has on the order of tens of physical qubits, is insufficient to run important quantum algorithms. Scaling these devices even to a moderate size with low error rates has proven extremely challenging. Manufacturers of quantum hardware such as IBM and IonQ have had only limited success in extending the number of physical qubits present on a single contiguous piece of hardware. Issues on these devices such as crosstalk error scaling with the number of qubits or increased difficulty in control will limit the size this single-chip architecture can achieve \cite{bruzewicz2019trapped, brown2016co}}.
\reviewaddition{Due to these challenges, as well as developing technology for communicating between different quantum chips \cite{blakestad2009high, wallraff2018deterministic}, we expect quantum hardware to scale via a modular approach, similar to how a classical computer can be scaled by increasing the number of processors rather than just the size of each processor. Two of the leading quantum technologies, ion trap and superconducting physical qubits, are already beginning to explore this avenue, and experimentalists project that modularity will be the key to moving forward \cite{brecht2016multilayer, devoret2013superconducting, duan2010colloquium, bapat2018unitary, maslov2018outlook, monroe2013scaling, hucul2017spectroscopy}. Technology such as resonant buses in superconducting hardware or optical communication techniques in ion trap devices will enable this more distributed approach to quantum computing, with many smaller, well-connected devices and sparser, more expensive non-local connections between them. Given current technology, in the near term we expect these non-local communication operations to have somewhere between 5x and 100x higher latency than in-cluster communication.}
\reviewaddition{With cluster-based approaches becoming more prominent, new compiler techniques for mapping and scheduling of quantum programs are needed. Furthermore, as the size of executable computations increases, it becomes more and more critical to employ program mappings that exhibit both the adaptivity of dynamic techniques and the global optimization of static techniques. Key to realizing both advantages is to simplify the problem. Since non-local communication is dominant, we can focus on non-local costs only. This simplification, along with static knowledge of all control flow, allows us to map a program over many time slices with substantial lookahead into future program behavior. This approach would not be computationally tractable on a non-clustered machine.}
\section{Related Work} \label{sec:prior}
Current quantum hardware is extremely restricted and has prompted a great deal of research aimed at making the most of current hardware conditions. This usually amounts to a few main categories of optimization. The first is circuit optimization at a high level to reduce the number of gates or depth via template matching as in \cite{rw1-template-matching, rw-template-rewriting} or via other optimization techniques as in \cite{optimization-qiskit, automated_optimization}. Other work focuses on optimization at the device level, such as by breaking the circuit model altogether as in \cite{YunongPaper} or by simply improving pulses via Quantum Optimal Control \cite{qoc}.
At an architectural level, optimization has been studied for many different types of hardware with various topologies. The general strategy in most of these works is to reduce SWAP counts, with the same motivation as this work, as in \cite{intel1, ai1, siraichi, optimization-qiskit, paler1, paler2, automatic_layout}. Much of this work focuses primarily on linear nearest neighbor (LNN) architectures or 2D lattice architectures as in \cite{lnn1, lnn2, lnn3, lnn4, 2d1}. Some work has focused on ion trap mappings as in \cite{ion_trap_mapping1}, though the architecture of this style of device more closely resembles that of a 2D architecture. Some work has recently focused on optimization around specific error rates in near-term machines as in \cite{murali, li-ding-xie}. Many of these techniques promise an extension to arbitrary topologies but are not specifically designed to accommodate cluster-based architectures. Work by \cite{qc_paritioning} has explored using graph partitioning to reduce swap counts in near-term machines, but their focus is exclusively on LNN architectures. Other work focuses on architectures of the more distant future, namely those with error correction, such as in \cite{future1, future2, future3}.
\section{Results and Discussion} \label{sec:results}
We run our mapping algorithms on each of our benchmark circuits. The results are shown in Figure \ref{fig:partitioner_results}.
Baseline mapping and the original version of OEE perform worse than our best scheme on every benchmark tested. Baseline mapping uses the global structure of the interaction graph, but often maintains this structure too rigidly throughout the execution of the circuit. This lack of local awareness and the rigid nature of the Static Mapping limit its usefulness. Most out-of-the-box graph partitioning algorithms are designed only to minimize the edge weight between partitions; this tends to overcorrect for local structure in the circuit. FGP can overcome this limitation with its choice of partitioning algorithm. By relaxing the partitioning algorithm and not requiring local optimality, moving qubits only until all interacting pairs are together, we require far fewer non-local operations.
The most noticeable changes between FGP-OEE and FGP-rOEE are on the clean multi-control gate with many controls and on the Cuccaro adder. Here, there are often consecutive, overlapping operations with little parallelism. With this structure, after the first operation is performed, the original OEE algorithm will exchange qubits to comply with the next time slice for the next operation. OEE is then required to separate qubits which will later interact. To minimize the total crossing weight between partitions, more qubits are shuffled around, usually towards this displaced qubit. In rOEE, this reshuffling never takes place because we terminate once \reviewaddition{each pair of interacting qubits in a time slice is placed in a common partition}. The reshuffling is detrimental to the overall non-local communication when running the circuit because of how often qubits are displaced from their common interaction partners. In rOEE, not reshuffling keeps the majority of the qubits in sufficiently good spots, and the displaced qubit has the opportunity to move back to its interaction partners later.
\begin{figure*}[h!]
\centering
\input{figs/plot-final-results.tikz}
\caption{The non-local communication overhead for our benchmark circuits mapped by each mapping algorithm. The x-axis is the number of qubits that are used in the circuit. \reviewaddition{The y-axis is the number of non-local communication operations inserted to make the circuit executable in our hardware model.} In Clean multi-control, Clean multi-target, and Dirty multi-target, the remainder of the 100 qubits are used as ancilla (clean or dirty as determined by the circuit name). FGP-rOEE outperforms all other mapping algorithms on all but the multi-target circuits, and shows substantial improvement over the static baseline. As the size of the circuit increases, rOEE tends to outperform by a greater margin, indicating that it scales better into the future.}
\label{fig:partitioner_results}
\end{figure*}
We include the algorithm Fixed Length Slicing (FLS) as an alternative not described in detail in this paper. It is a slower method which explores grouping time slices at fixed intervals. Fixed Length Slicing was consistently the best performing time-slice-range based mapping algorithm, so we present it in our results. FLS-OEE beats FGP-rOEE only on some instances of the multi-target benchmarks and consistently performs worse on all other benchmarks.
In Figure \ref{fig:com_costs_results}, we show the percentage of operations used for non-local communication for each of the benchmark circuits, and in Table \ref{tab:improvement-table} we show the percent improvement of our algorithm over the baseline. On average, we save over 60\% of the added non-local communication operations. When each non-local communication operation is implemented in hardware, it takes significantly longer than operations between qubits within a cluster \cite{mount2016scalable}. Based on current communication technology, we expect these non-local communication operations to take anywhere from 5x to 100x longer than local in-cluster operations. Furthermore, the choice of technology limits how many of these expensive operations can be performed in parallel.
In Table \ref{tab:est_cost} we compute the estimated running time based on this ratio of costs and show that by substantially reducing the non-local communication via FGP-rOEE, we can drastically reduce the expected run time. We compare our algorithm to the baseline when non-local communication can be performed in parallel (such as in optically connected ion trap devices) and when it is forced to occur sequentially (as when using a resonant bus in superconducting devices). Based on current technology, a 5-10x multiplier is optimistic while 100x is realistic in the near term.
\begin{table}[]
\caption{Comparing Static-OEE against FGP-rOEE over all benchmarked instances. We obtain improvement across the board with the worst case still reducing non-local communication by 22.6\%.}
\label{tab:improvement-table}
\centering
\input{figs/improvement-table.tex}
\end{table}
\begin{table}[]
\caption{Estimated execution time of the clean multi-control benchmark with 76 data qubits and 24 ancilla. Two-qubit gates take 300ns \cite{ibm_error} and the multiplier indicates how many times longer non-local communication operations take.}
\label{tab:est_cost}
\centering
\input{figs/estimated_execution_table.tex}
\end{table}
\section{Introduction}
Although the standard single-degenerate (SD) Chandrasekhar-mass
scenario (see \citealt{Hillebrandt2000a} for a review) is capable
of explaining most of the observed diversity of Type Ia supernovae
(SNe~Ia) \citep{Hoeflich1996b,Kasen2009a} via the delayed-detonation
\citep{Khokhlov1991a} model, it suffers from severe problems in
explaining the observed rate of SNe~Ia. In particular, binary evolution
population synthesis calculations predict rates which are an order
of magnitude too low compared to the observed rate of SNe~Ia
(\citealt{Ruiter2009a}, but see \citealt{Meng2010a}). In addition,
recent observational studies suggest that Chandrasekhar-mass
explosions of hydrogen-accreting carbon-oxygen (C/O) White Dwarfs
(WDs) in SD binary systems cannot account for all SNe~Ia:
\citet{Gilfanov2010a} found that the X-ray flux of nearby elliptical
galaxies is significantly weaker than expected for a population of WDs
accreting hydrogen towards the Chandrasekhar-mass needed to explain
the observed supernova rate in elliptical galaxies (see also
\citealt{diStefano2010a}). Moreover, there is growing observational
evidence that there are different populations of SNe~Ia
\citep{Mannucci2005a,Scannapieco2005a}.
This has led to a revived interest in alternative explosion mechanisms.
Here we consider the double detonation scenario applied to
sub-Chandrasekhar-mass C/O WDs. In that scenario a helium-accreting
C/O WD explodes below $M_\mathrm{Ch}$ due to a detonation in the
accreted helium shell which triggers a secondary core detonation by
compressional heating \citep{Woosley1994a,Livne1995a,Fink2007a}. This
model has some very appealing features. Depending on the initial mass
of the WD, a wide range of explosion strengths can be realized
(e.g. \citealt{Woosley1994a,Livne1995a,Hoeflich1996b}). Moreover,
population synthesis studies \citep{Ruiter2009a} predict rates
comparable to the observed galactic supernova rate.
However, earlier work \citep{Hoeflich1996b,Hoeflich1996c,Nugent1997a}
found light curves and spectra of such models to be in conflict with the
observed spectra and light curves of SNe~Ia. The differences were
mainly attributed to the composition of the outer layers. Due to the
initial helium detonation in the outer shell, the ejecta of
sub-Chandrasekhar-mass models are surrounded by a layer of helium
and its burning products (which can include iron-peak nuclei). This,
however, is in apparent contradiction to the layered composition
structure of observed SNe~Ia, where the composition changes from
iron-group elements in the core to lower mass elements in the outer
layers.
In a preceding paper \citet{Sim2010a} have shown that
artificial explosions of ``naked'' sub-Chandrasekhar-mass WDs can
reproduce the observed diversity of SNe~Ia. Thus it is natural to
ask if somewhat modified properties in the initial helium shells of
realistic sub-Chandrasekhar-mass models can reduce the negative
post-explosion effect of this shell on the observables. In particular,
\citet{Bildsten2007a} recently presented new calculations, indicating
that detonations might occur for much less massive helium shells than
previously thought. \citet{Fink2010a} adopted the minimum helium shell
masses of \citet{Bildsten2007a} and investigated whether such low-mass
helium detonations are capable of triggering a secondary detonation in
the C/O core of the WD\@. In that study, they concluded that as soon as
a detonation in the helium shell initiates, a subsequent core detonation
is virtually inevitable. For example, they found that even a helium
shell mass as low as $0.039\,M_\odot$ is sufficient to detonate a
C/O WD of $1.125\,M_\odot$.
Here, we focus on the observable properties of the models presented in
\citet{Fink2010a} and their comparison to real SNe Ia. In particular,
we investigate whether the low helium shell masses of these models help
to alleviate the problems encountered previously when comparing double
detonation sub-Chandrasekhar-mass models to observed spectra and light
curves of SNe~Ia \citep{Hoeflich1996b,Nugent1997a}.
The outline of the paper is as follows: in Section~\ref{sec:models} we
give a short summary of the models of \citet{Fink2010a} before briefly
describing details of our radiative transfer simulations in Section~\ref{sec:rt}.
In Section~\ref{sec:oo} we present synthetic observables for the \citet{Fink2010a}
models and compare them to the observed properties of SNe~Ia. The results
of this comparison and implications for future work on sub-Chandrasekhar-mass
double detonations are discussed in Section~\ref{sec:discussion}.
Finally, we draw conclusions in Section~\ref{sec:conclusions}.
\section{Models}
\label{sec:models}
Adopting the minimum helium shell masses required to initiate a helium
detonation in the shell according to \citet{Bildsten2007a},
\citet{Fink2010a} investigated the double detonation scenario for six
models representing a range of different C/O core masses (the model parameters
are summarized in Table~\ref{tab:modelparas}). In all their models, they
ignite an initial helium detonation in a single point at the base of
the helium shell located on the positive $z$-axis (hereafter referred to
as the ``north-pole'' of the WD). From there the helium detonation wave
sweeps around the WD core until it converges at the south pole. At the
same time a shock wave propagates into the core and converges at a
point off-centre. Finding conditions that might be sufficient to initiate
a core detonation in a finite volume around this point, \citet{Fink2010a}
then trigger a secondary core detonation at that point. This secondary
detonation disrupts the entire WD and yields ejecta with a characteristic
abundance distribution which is shown for Model 3 of \citet{Fink2010a}
in Figure~\ref{fig:m03_composition}.
\begin{deluxetable*}{lcccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Parameters of our model sequence.\label{tab:modelparas}}
\tablehead{Model & \colhead{1} & \colhead{2} & \colhead{3} & \colhead{4} & \colhead{5} & \colhead{6} & \colhead{3c} & \colhead{3m}}
\startdata
$M_\mathrm{tot}$ & 0.936 & 1.004 & 1.080 & 1.164 & 1.293 & 1.3885 & 1.025 & 1.071\\
\tableline
$M_\mathrm{core}$ & 0.810 & 0.920 & 1.025 & 1.125 & 1.280 & 1.3850 & 1.025 & 1.025\\
$M_\mathrm{core}(^{56}\mathrm{Ni})\tablenotemark{a}$ & $1.7 \times 10^{-1}$ & $3.4 \times 10^{-1}$ & $5.5 \times 10^{-1}$ & $7.8 \times 10^{-1}$ & $1.05$ & $1.10$ & $5.5 \times 10^{-1}$ & $5.6 \times 10^{-1}$ \\
$M_\mathrm{core}(^{52}\mathrm{Fe})\tablenotemark{a}$ & $7.6 \times 10^{-3}$ & $9.9 \times 10^{-3}$ & $9.6 \times 10^{-3}$ & $7.9 \times 10^{-3}$ & $4.2 \times 10^{-3}$ & $1.7 \times 10^{-4}$ & $9.6 \times 10^{-3}$ & $9.4 \times 10^{-3}$\\
$M_\mathrm{core}(^{48}\mathrm{Cr})\tablenotemark{a}$ & $3.9 \times 10^{-4}$ & $4.6 \times 10^{-4}$ & $4.5 \times 10^{-4}$ & $3.8 \times 10^{-4}$ & $2.1 \times 10^{-4}$ & $7.1 \times 10^{-5}$ & $4.5 \times 10^{-4}$ & $4.4 \times 10^{-4}$ \\
$M_\mathrm{core}(^{44}\mathrm{Ti})$ & $7.2 \times 10^{-6}$ & $9.0 \times 10^{-6}$ & $1.1 \times 10^{-5}$ & $1.4 \times 10^{-5}$ & $1.4 \times 10^{-5}$ & $9.9 \times 10^{-6}$ & $1.1 \times 10^{-5}$ & $1.2 \times 10^{-5}$ \\
$M_\mathrm{core}(^{40}\mathrm{Ca})$ & $2.0 \times 10^{-2}$ & $2.1 \times 10^{-2}$ & $1.8 \times 10^{-2}$ & $1.4 \times 10^{-2}$ & $6.9 \times 10^{-3}$ & $1.8 \times 10^{-3}$ & $1.8 \times 10^{-2}$ & $1.8 \times 10^{-2}$ \\
$M_\mathrm{core}(^{36}\mathrm{Ar})$ & $2.2 \times 10^{-2}$ & $2.2 \times 10^{-2}$ & $1.9 \times 10^{-2}$ & $1.5 \times 10^{-2}$ & $6.7 \times 10^{-3}$ & $1.7 \times 10^{-3}$ & $1.9 \times 10^{-2}$ & $1.9 \times 10^{-2}$ \\
$M_\mathrm{core}(^{32}\mathrm{S})$ & $1.3 \times 10^{-1}$ & $1.2 \times 10^{-1}$ & $1.0 \times 10^{-1}$ & $7.5 \times 10^{-2}$ & $3.3 \times 10^{-2}$ & $8.0 \times 10^{-3}$ & $1.0 \times 10^{-1}$ & $1.0 \times 10^{-1}$ \\
$M_\mathrm{core}(^{28}\mathrm{Si})$ & $2.7 \times 10^{-1}$ & $2.5 \times 10^{-1}$ & $2.1 \times 10^{-1}$ & $1.4 \times 10^{-1}$ & $6.1 \times 10^{-2}$ & $1.5 \times 10^{-2}$ & $2.1 \times 10^{-1}$ & $2.0 \times 10^{-1}$ \\
$M_\mathrm{core}(^{24}\mathrm{Mg})$ & $4.5 \times 10^{-2}$ & $3.5 \times 10^{-2}$ & $2.4 \times 10^{-2}$ & $1.1 \times 10^{-2}$ & $9.3 \times 10^{-3}$ & $4.3 \times 10^{-3}$ & $2.4 \times 10^{-2}$ & $2.2 \times 10^{-2}$ \\
$M_\mathrm{core}(^{16}\mathrm{O})$ & $1.4 \times 10^{-1}$ & $1.1 \times 10^{-1}$ & $8.0 \times 10^{-2}$ & $4.2 \times 10^{-2}$ & $3.1 \times 10^{-2}$ & $1.2 \times 10^{-2}$ & $8.0 \times 10^{-2}$ & $7.7 \times 10^{-2}$ \\
$M_\mathrm{core}(^{12}\mathrm{C})$ & $6.6 \times 10^{-3}$ & $4.4 \times 10^{-3}$ & $2.7 \times 10^{-3}$ & $8.8 \times 10^{-4}$ & $5.9 \times 10^{-3}$ & $7.4 \times 10^{-4}$ & $2.7 \times 10^{-3}$ & $3.3 \times 10^{-3}$ \\
\tableline
$M_\mathrm{sh}$ & 0.126 & 0.084 & 0.055 & 0.039 & 0.013 & 0.0035 & -- & 0.046\\
$M_\mathrm{sh}(^{56}\mathrm{Ni})$\tablenotemark{a} & $8.4 \times 10^{-4}$ & $1.1 \times 10^{-3}$ & $1.7 \times 10^{-3}$ & $4.4 \times 10^{-3}$ & $1.5 \times 10^{-3}$ & $5.7 \times 10^{-4}$ & -- & $1.1 \times 10^{-8}$\\
$M_\mathrm{sh}(^{52}\mathrm{Fe})$\tablenotemark{a} & $7.6 \times 10^{-3}$ & $7.0 \times 10^{-3}$ & $6.2 \times 10^{-3}$ & $3.5 \times 10^{-3}$ & $1.2 \times 10^{-3}$ & $2.0 \times 10^{-4}$ & -- & $6.1 \times 10^{-9}$\\
$M_\mathrm{sh}(^{48}\mathrm{Cr})$\tablenotemark{a} & $1.1 \times 10^{-2}$ & $7.8 \times 10^{-3}$ & $4.4 \times 10^{-3}$ & $2.2 \times 10^{-3}$ & $6.8 \times 10^{-4}$ & $1.5 \times 10^{-4}$ & -- & $7.9 \times 10^{-8}$\\
$M_\mathrm{sh}(^{44}\mathrm{Ti})$ & $7.9 \times 10^{-3}$ & $5.4 \times 10^{-3}$ & $3.4 \times 10^{-3}$ & $1.8 \times 10^{-3}$ & $4.9 \times 10^{-4}$ & $6.2 \times 10^{-5}$ & -- & $4.2 \times 10^{-5}$ \\
$M_\mathrm{sh}(^{40}\mathrm{Ca})$ & $4.7 \times 10^{-3}$ & $3.2 \times 10^{-3}$ & $2.2 \times 10^{-3}$ & $2.2 \times 10^{-3}$ & $6.8 \times 10^{-4}$ & $2.4 \times 10^{-4}$ & -- & $3.5 \times 10^{-3}$\\
$M_\mathrm{sh}(^{36}\mathrm{Ar})$ & $5.0 \times 10^{-3}$ & $3.2 \times 10^{-3}$ & $2.0 \times 10^{-3}$ & $1.6 \times 10^{-3}$ & $5.2 \times 10^{-4}$ & $1.3 \times 10^{-4}$ & -- & $2.7 \times 10^{-2}$\\
$M_\mathrm{sh}(^{32}\mathrm{S})$ & $2.2 \times 10^{-3}$ & $1.2 \times 10^{-3}$ & $7.8 \times 10^{-4}$ & $1.3 \times 10^{-3}$ & $4.3 \times 10^{-4}$ & $1.9 \times 10^{-4}$ & -- & $1.4 \times 10^{-2}$\\
$M_\mathrm{sh}(^{28}\mathrm{Si})$ & $4.8 \times 10^{-4}$ & $2.5 \times 10^{-4}$ & $1.4 \times 10^{-4}$ & $4.7 \times 10^{-4}$ & $1.6 \times 10^{-4}$ & $1.3 \times 10^{-4}$ & -- & $5.9 \times 10^{-4}$\\
$M_\mathrm{sh}(^{24}\mathrm{Mg})$ & $4.3 \times 10^{-5}$ & $2.3 \times 10^{-5}$ & $1.3 \times 10^{-5}$ & $1.0 \times 10^{-4}$ & $3.8 \times 10^{-5}$ & $3.6 \times 10^{-5}$ & -- & $3.0 \times 10^{-5}$\\
$M_\mathrm{sh}(^{16}\mathrm{O})$ & $3.2 \times 10^{-6}$ & $1.7 \times 10^{-6}$ & $1.9 \times 10^{-6}$ & $6.0 \times 10^{-5}$ & $2.1 \times 10^{-5}$ & $2.6 \times 10^{-5}$ & -- & $1.0 \times 10^{-5}$\\
$M_\mathrm{sh}(^{12}\mathrm{C})$ & $1.2 \times 10^{-3}$ & $5.3 \times 10^{-4}$ & $2.2 \times 10^{-4}$ & $7.9 \times 10^{-5}$ & $1.7 \times 10^{-5}$ & $1.6 \times 10^{-6}$ & -- & $3.2 \times 10^{-5}$\\
$M_\mathrm{sh}(^{4}\mathrm{He})$ & $8.3 \times 10^{-2}$ & $5.3 \times 10^{-2}$ & $3.3 \times 10^{-2}$ & $2.0 \times 10^{-2}$ & $6.9 \times 10^{-3}$ & $1.7 \times 10^{-3}$ & -- & $2.7 \times 10^{-4}$\\
\tableline
$\Delta m_{15}(B)$ / mag & 0.88 & 1.25 & 1.74 & 1.77 & 1.23 & 1.29 & 1.63 & 1.37\\
$t_\mathrm{max}(B)$ / days & 17.0 & 17.0 & 17.7 & 16.4 & 15.2 & 14.3 & 18.2 & 18.1\\
$M_\mathrm{B,max}$ / mag & -15.9 & -17.3 & -18.4 & -19.3 & -19.8 & -19.9 & -19.2 & -19.1\\
$M_\mathrm{V,max}$ / mag & -17.6 & -18.8 & -19.6 & -19.9 & -20.1 & -20.1 & -19.4 & -19.4\\
$M_\mathrm{R,max}$ / mag & -18.4 & -19.1 & -19.4 & -19.4 & -19.4 & -19.2 & -19.0 & -19.1\\
$M_\mathrm{I,max}$ / mag & -18.9 & -19.2 & -19.2 & -19.4 & -19.6 & -19.7 & -18.9 & -19.0\\
$(U-B)_\mathrm{Bmax}$ / mag & 0.58 & 0.41 & 0.50 & 0.19 & 0.08 & -0.26 & -0.04 & 0.35\\
$(B-V)_\mathrm{Bmax}$ / mag & 1.67 & 1.51 & 1.13 & 0.59 & 0.28 & 0.08 & 0.11 & 0.30\\
$(V-R)_\mathrm{Bmax}$ / mag & 0.74 & 0.36 & -0.13 & -0.46 & -0.67 & -0.82 & -0.30 & -0.28\\
$(V-I)_\mathrm{Bmax}$ / mag & 1.26 & 0.39 & -0.47 & -1.05 & -1.48 & -1.47 & -0.64 & -0.49\\
\enddata
\tablenotetext{a}{At maximum light $^{56}$Ni, $^{52}$Fe and $^{48}$Cr
will have mostly decayed to $^{56}$Co, $^{52}$Cr and a mixture of
$^{48}$V and $^{48}$Ti, respectively.}
\tablecomments{$M_\mathrm{tot}$,
$M_\mathrm{core}$, and $M_\mathrm{sh}$ are the masses of the WD,
the C/O core, and the helium shell, respectively. All masses are given
in units of the solar mass. $\Delta m_{15}(B)$ and $t_\mathrm{max}(B)$
refer to the decline parameter and the rise time to maximum light in
the angle-averaged $B$-band light curves, respectively. $M_\text{B,max}$,
$M_\text{V,max}$, $M_\text{R,max}$ and $M_\text{I,max}$ denote the angle-averaged
peak magnitudes at the true peaks in the given bands. Colours are quoted at
time [$t_\mathrm{max}(B)$] of $B$-band maximum.}
\end{deluxetable*}
\begin{figure*}
\centering
\plotone{figure1}
\caption{Final composition structure of selected species of Model 3
of \citet{Fink2010a}. The individual panels show the mass fractions
of He, O, Si, Ca, Ti and Ni (from left to right, respectively).
The model is radially symmetric about the $z$-axis.}
\label{fig:m03_composition}
\end{figure*}
In the initial helium shell the burning does not reach nuclear statistical
equilibrium due to the low densities. Thus it mainly produces iron-group
elements lighter than $^{56}$Ni such as titanium, chromium and iron --
including some amount of the radioactive isotopes $\mathrm{^{48}Cr}$ and
$\mathrm{^{52}Fe}$. Aside from a small mass of calcium, no significant
amounts of intermediate-mass elements are produced in the shell. However,
a large fraction of helium remains unburned.
The core detonation yields both iron-group and intermediate-mass elements,
the relative amounts of which depend crucially on the core density and thus
WD mass. Models with more massive WDs produce more iron-group material and
less intermediate-mass elements (similar to the explosions of the naked
sub-Chandrasekhar-mass WDs studied in \citealt{Sim2010a}). The most massive
Model 6 of the sequence of \citet{Fink2010a} produces almost only iron-group
material and hardly any intermediate-mass elements (the most important burning
products in the shell and core of the different models are listed in
Table~\ref{tab:modelparas}).
Due to the single-point ignition the models show strong ejecta asymmetries
which can be divided into two main categories. First, the helium shell
contains more iron-group material on the northern hemisphere as discussed
by \citet{Fink2010a}. Second, the ignition point of the secondary core
detonation is offset from the centre-of-mass of the model due to the
off-centre convergence of the shock waves from the helium detonation.
\section{Radiative transfer simulations}
\label{sec:rt}
To derive synthetic observables for the models, we performed radiative
transfer simulations with the time-dependent multi-dimensional Monte
Carlo radiative transfer code {\sc artis} described by \citet{Kromer2009a}
and \citet{Sim2007a}. Since the models produce significant amounts of
$\mathrm{^{48}Cr}$ and $\mathrm{^{52}Fe}$, we extended {\sc artis} to
take into account the energy released by the decay sequences $^{52}\mbox{Fe}\rightarrow{^{52}\mbox{Mn}}\rightarrow{^{52}\mbox{Cr}}$
and
$^{48}\mbox{Cr}\rightarrow{^{48}\mbox{V}}\rightarrow{^{48}\mbox{Ti}}$
in addition to the radioactive decays of $^{56}\mbox{Ni}\rightarrow{^{56}\mbox{Co}}$
and $^{56}\mbox{Co}\rightarrow{^{56}\mbox{Fe}}$ which form the primary
energy source of SNe Ia (\citealt{Truran1967a}, \citealt{Colgate1969a}).
For other radioactive nuclei which are synthesized during the explosion
in a non-negligible amount (e.g. $^{44}\mbox{Ti}$ in the shell) the life
times are much longer than those of the $^{56}$Ni decay-sequence. Thus
they can be neglected at early times when the decays of $^{56}$Ni and
$^{56}$Co power the light curves.
The total $\gamma$-ray energy $E_\mathrm{tot}$ that will be emitted from
$t=0$ to $t\rightarrow\infty$ in the decay chains we consider, will be
\begin{align}
E_{\mathrm{tot}} &=(E_{^{56}\mathrm{Ni}}+E_{^{56}\mathrm{Co}})\frac{M_{^{56}\mathrm{Ni}}}{m_{^{56}\mathrm{Ni}}}+ (E_{^{52}\mathrm{Fe}}+E_{^{52}\mathrm{Mn}})\frac{M_{^{52}\mathrm{Fe}}}{m_{^{52}\mathrm{Fe}}}\notag\\
& \quad +(E_{^{48}\mathrm{Cr}}+E_{^{48}\mathrm{V}})\frac{M_{^{48}\mathrm{Cr}}}{m_{^{48}\mathrm{Cr}}}.
\label{eq:decays}
\end{align}
Here $E_{^{56}\mathrm{Ni}}$ ($E_{^{56}\mathrm{Co}}$, $E_{^{52}\mathrm{Fe}}$,
$E_{^{52}\mathrm{Mn}}$, $E_{^{48}\mathrm{Cr}}$, $E_{^{48}\mathrm{V}}$)
is the mean energy emitted per decay of $^{56}$Ni ($^{56}$Co,
$^{52}$Fe, $^{52}$Mn, $^{48}$Cr, $^{48}$V). Similarly,
$M_{^{56}\mathrm{Ni}}$ ($M_{^{52}\mathrm{Fe}}$, $M_{^{48}\mathrm{Cr}}$)
is the initial mass of $^{56}$Ni ($^{52}\mbox{Fe}$, $^{48}\mbox{Cr}$)
synthesized in the explosion and $m_{^{56}\mathrm{Ni}}$ ($m_{^{52}\mathrm{Fe}}$,
$m_{^{48}\mathrm{Cr}}$) the mass of the $^{56}$Ni ($^{52}\mbox{Fe}$,
$^{48}\mbox{Cr}$) atom.
Following \cite{Lucy2005a}, this energy is quantized into
$\mathcal{N}=E_{\mathrm{tot}}/\epsilon_{0}$ identical indivisible energy
``pellets'' of initial co-moving frame (cmf) energy $\epsilon_{0}$. The
pellets are first assigned to one of the decay sequences in proportion to
the amount of energy deposited in the different decay sequences (terms
on the right-hand-side of Equation~\ref{eq:decays}). Then they are distributed
on the grid according to the initial distribution of $^{56}$Ni, $^{52}$Fe
and $^{48}$Cr, as appropriate, and follow the homologous expansion until
they decay. Decay times are sampled in a two step process. If the pellet
was assigned to the $^{56}$Ni chain, for example, we first choose whether
it belongs to a decay of the parent nucleus $^{56}$Ni or the daughter
nucleus $^{56}$Co by sampling the probabilities
$E_{\mathrm{^{56}Ni}}/\left(E_{\mathrm{^{56}Ni}}+E_{\mathrm{^{56}Co}}\right)$
and
$E_{\mathrm{^{56}Co}}/\left(E_{\mathrm{^{56}Ni}}+E_{\mathrm{^{56}Co}}\right)$,
respectively. Finally an appropriate decay time is sampled
\begin{equation}
t_{\mathrm{decay}}\left(^{56}\mathrm{Ni}\right)=-\tau\left(^{56}\mathrm{Ni}\right)\log\left(z_{1}\right)
\end{equation}
\begin{equation}
t_{\mathrm{decay}}\left(^{56}\mathrm{Co}\right)=-\tau\left(^{56}\mathrm{Ni}\right)\log\left(z_{1}\right)-\tau\left(^{56}\mathrm{Co}\right)\log\left(z_{2}\right)
\end{equation}
from the mean life times of the $^{56}$Ni [$\tau\left(^{56}\mathrm{Ni}\right)=8.80\,\mbox{d}$]
and $^{56}$Co [$\tau\left(^{56}\mathrm{Co}\right)=113.7\,\mbox{d}$] nuclei. $z_i$ are
random numbers between 0 and 1. The $^{52}\mbox{Fe}$ [$\tau\left(^{52}\mathrm{Fe}\right)=0.4974\,\mbox{d}$,
$\tau\left(^{52}\mathrm{Mn}\right)=0.02114\,\mbox{d}$] and $^{48}\mbox{Cr}$
[$\tau\left(^{48}\mathrm{Cr}\right)=1.296\,\mbox{d}$, $\tau\left(^{48}\mathrm{V}\right)=23.04\,\mbox{d}$]
chains are treated in the same way.
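As an illustration, the two-step sampling for one pellet can be written as the following Python sketch (the mean energies per decay must be supplied by the caller; only the lifetimes quoted above are taken from the text):
\begin{verbatim}
import math, random

TAU_NI56, TAU_CO56 = 8.80, 113.7   # mean lifetimes in days

def sample_decay_time(e_parent, e_daughter,
                      tau_parent=TAU_NI56, tau_daughter=TAU_CO56):
    # Step 1: parent or daughter decay, chosen in proportion to the
    # mean energy emitted per decay.
    if random.random() < e_parent / (e_parent + e_daughter):
        # Parent: t = -tau_parent * ln(z1)
        return -tau_parent * math.log(1.0 - random.random())
    # Daughter: t = -tau_parent * ln(z1) - tau_daughter * ln(z2)
    return (-tau_parent * math.log(1.0 - random.random())
            - tau_daughter * math.log(1.0 - random.random()))
\end{verbatim}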
Upon decay, a pellet transforms to a single $\gamma$-packet representing
a bundle of monochromatic-radiation of cmf energy $\epsilon_0$. The cmf
photon energy of the $\gamma$-packets is randomly sampled from the
relative probabilities of the $\gamma$-lines in the appropriate decay
of the selected decay sequence -- including annihilation lines due to
positron emission. Following \citet{Lucy2005a}, we assume that positrons
released by radioactive decays annihilate in situ, giving rise to the
emission of two 511 keV $\gamma$-ray photons. In doing so, we neglect the
kinetic energy released by stopping the positrons, any positron escape and
possible positronium formation which gives rise to the emission of continuum
photons (see discussion by \citealt{Milne2004a}). Thus, our prediction of the
511 keV line flux should be considered as an upper limit. The $\gamma$-packets
are then propagated through the ejecta as described by \cite{Kromer2009a}.
The $\gamma$-line data is taken from Table~1 of \citet{Ambwani1988a} for the
$^{56}$Ni decay-sequence and from \citet{Burrows2006a} for the $^{48}$Cr
decay-sequence, respectively. Owing to the comparatively short life times
of $^{52}$Fe and $^{52}$Mn, these nuclei have mostly already decayed to
their daughter nuclei when we start the radiative transfer simulation at
$\sim 1$ day. Since the ejecta at these early times are almost opaque to
$\gamma$-rays, the $\gamma$-packets released by the $^{52}$Fe decay chain
will be thermalized rapidly. Therefore we do not follow the propagation
of the $\gamma$-packets released by the $^{52}$Fe decay chain, but
immediately convert their energy to thermal kinetic energy ($k$-packets
in the framework of {\sc artis}).
The input models (composition, density, velocities) for the radiative
transfer simulations were derived using the tracer particles from
the hydrodynamics simulations (see \citealt{Fink2010a} for details
on the hydro setup and use of tracer particles for nucleosynthesis).
Since the tracer particles are Lagrangian, we use an approach similar
to the reconstruction of the density field in smooth particle
hydrodynamics (SPH) simulations to construct the input model from
the tracers. For the models described here, the density field was
obtained using the SPH method described by \citet{Dolag2009a}
(specifically, their equations 1 -- 3) adopting $N = 32$ for the SPH
smoothing-length normalisation factor.
For the centre ({\boldmath ${x_i}$}) of each grid cell ($i$) in the
model, the mass fractions ($X_{Z,i}$) of the elements considered
($Z = 1$ to 30) were reconstructed using
\begin{equation}
X_{Z,i} = \frac{1}{\rho_{i}} \sum_j m_{j}\, W(| {\mbox{\boldmath $x_{i} - x_{j}$}} |, h_{j})\,X_{Z,j},
\end{equation}
where $\rho_{i}$ is the reconstructed mass density, $m_{j}$ is the mass
of tracer particle $j$ (which lies at position {\boldmath $x_{j}$}),
$X_{Z,j}$ is the mass-fraction of element $Z$ for that tracer particle,
$h_{j}$ is the SPH particle smoothing length and $W(x,h)$ the SPH kernel
function (defined via equations 3 and 1 of \citealt{Dolag2009a},
respectively). The mass fractions of
the important radioactive isotopes ($^{56}$Ni, $^{56}$Co, $^{52}$Fe
and $^{48}$Cr) were reconstructed from the tracer particle yields
in exactly the same manner. The reconstruction was performed on an
$80\times160$ $(r,z)$-grid using the final state of the tracer
particles at the end of the simulations (which were run up to the
phase of homologous expansion).
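A minimal sketch of this reconstruction at a single cell centre, assuming a standard cubic spline kernel and per-particle tracer masses (the exact kernel and smoothing-length definitions follow \citealt{Dolag2009a}; variable names are illustrative), is:
\begin{verbatim}
import numpy as np

def cubic_spline_kernel(r, h):
    # Standard cubic spline SPH kernel (assumed form).
    u = np.asarray(r) / h
    w = np.zeros_like(u)
    inner = u <= 0.5
    outer = (u > 0.5) & (u <= 1.0)
    w[inner] = 1.0 - 6.0 * u[inner]**2 + 6.0 * u[inner]**3
    w[outer] = 2.0 * (1.0 - u[outer])**3
    return 8.0 / (np.pi * h**3) * w

def reconstruct_cell(x_cell, tracer_pos, tracer_mass, tracer_X, tracer_h):
    # SPH estimate of density and one mass fraction at a cell centre.
    r = np.linalg.norm(tracer_pos - x_cell, axis=1)
    w = tracer_mass * cubic_spline_kernel(r, tracer_h)  # m_j W(|x_i-x_j|, h_j)
    rho = w.sum()
    X = (w * tracer_X).sum() / rho if rho > 0 else 0.0
    return rho, X
\end{verbatim}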
For the radiative transfer simulation this 2D model is re-mapped to a
3D Cartesian grid of size $100^{3}$ which co-expands with the ejecta.
We then follow the radiative transfer from 2 to 120 days after the
explosion, discretized into 111 logarithmic time steps each of duration
$\Delta\ln\left(t\right)=0.037$. Using our detailed ionization treatment
and the cd23\_gf-5 atomic data set \citep{Kromer2009a}, we simulated
the propagation of $2\times10^{7}$ packets. To speed up the initial
phase of the calculation we made use of our initial grey approximation
for the first 30 time steps (adopting a parameterised grey opacity
for the highly optically thick cells in the centre of our simulation
volume). The first ten time steps were treated in local thermodynamic
equilibrium (LTE).
\section{Synthetic observables}
\label{sec:oo}
In this Section we present synthetic observables for the models of
\citet{Fink2010a}. First we consider angle-averaged ultraviolet, optical
and infrared light curves and explore the diversity within this set of
models and compare it to that of observed SNe Ia (Section~\ref{sub:oo_lcs}).
Then we investigate the colour evolution and spectra of these models
(Sections~\ref{sub:oo_colours} and \ref{sub:oo_spectra}).
In Section~\ref{sub:oo_los} we study the effects of the asymmetric ejecta
composition. Finally, in Section~\ref{sub:oo_gamma}, we briefly discuss
the $\gamma$-ray emission from these models.
\subsection{Broad-band light curves}
\label{sub:oo_lcs}
In Figure~\ref{fig:lightcurves_all} we show the angle-averaged
ultraviolet-optical-infrared ($UVOIR$) bolometric and $U$, $B$, $V$,
$R$, $I$, $J$, $H$, and $K$ band-limited light curves for the model
sequence of \citet{Fink2010a} as obtained from our radiative transfer
simulations. For comparison to observations we included photometric
data of SNe~2005cf \citep{Pastorello2007a}, 2004eo \citep{Pastorello2007b},
2001el \citep{Krisciunas2003a} and 1991bg \citep{Filippenko1992b,Leibundgut1993a}
in the figure. While SN~1991bg, the prototypical event of the subluminous
1991bg-like objects, marks the faint end of observed SNe Ia, SNe~2005cf,
2004eo and 2001el are representative of the spectroscopically normal objects.
\begin{figure*}
\centering
\plotone{figure2}
\caption{Angle-averaged $UVOIR$ bolometric and $U$,$B$,$V$,$R$,$I$,$J$,$H$,$K$
band-limited light curves for the model sequence of \citet{Fink2010a}
as indicated by the colour coding. For comparison, photometric data
of the spectroscopically normal SNe 2005cf (triangles; \citealt{Pastorello2007a}),
2004eo (upside-down triangles; \citealt{Pastorello2007b}) and 2001el (squares;
\citealt{Krisciunas2003a}) as well as for the subluminous SN 1991bg (circles;
\citealt{Filippenko1992b}, \citealt{Leibundgut1993a}) are shown.}
\label{fig:lightcurves_all}
\end{figure*}
\begin{figure*}
\centering
\plotone{figure3}
\caption{Angle-averaged colour curves of our models as indicated by the labels.
The different panels correspond to $B-V$, $V-R$ and $V-I$ colour (from left to
right). For comparison, the colours of the spectroscopically normal SNe 2005cf
(triangles; \citealt{Pastorello2007a}) and 2004eo (upside-down triangles;
\citealt{Pastorello2007b}) as well as for the subluminous SN 1991bg (circles;
\citealt{Filippenko1992b}, \citealt{Leibundgut1993a}) are shown.}
\label{fig:colours_all}
\end{figure*}
Since our models form a sequence of increasing $^{56}\mathrm{Ni}$-mass,
the peak brightness of the synthetic light curves increases from Model 1
to Model 6: for $B$-band maximum, for example, we find a range from
$-15.9\,\mathrm{mag}$ for Model 1 to $-19.9\,\mathrm{mag}$ for Model 6
(values for the intermediate models and other bands are given in
Table~\ref{tab:modelparas}). This covers almost the full range of
peak brightnesses observed among SNe Ia \citep[e.g.][]{Hicken2009a}, excluding
only the very brightest events like SN~2003fg \citep{Howell2006a}
which have been suggested to derive from super-Chandrasekhar-mass
progenitor systems (see however \citealt{Hillebrandt2007a,Sim2007b}).
Despite their lower total mass, and thus lower overall opacity, our model
light curves do not show particularly fast evolution compared to
Chandrasekhar-mass models. In contrast, \citet{Hoeflich1996b} reported
a fast initial rise and broad peaks for their helium detonation models.
For the rise times from explosion to $B$-band maximum [$t_\mathrm{max}(B)$]
we find values between 17.0 and 17.7 days for our least massive models
(1 to 3, cf. Table~\ref{tab:modelparas}). This is in good agreement with
observational findings (e.g. \citealt{Hayden2010a} find rise times to
$B$-band maximum between 13 and 23 days with an average value of $17.38
\pm 0.17\,\mathrm{days}$). In contrast, the more massive Models (4 to 6)
show shorter $B$-band rise times with increasing mass (see Table~\ref{tab:modelparas}):
for Model 6, $t_\mathrm{max}(B)$ is only 14.3 days. The peak time is
mainly set by the diffusion time for photons to leak out from the
$^{56}$Ni-rich inner core. Owing to the larger densities
in the more massive models, nuclear burning produces $^{56}$Ni out to
much higher velocities than in the lower mass models (compare Figure~3
of \citealt{Fink2010a}). Moreover, the helium shell masses in our models
decrease for the heavier WDs. Thus, the mass (and therefore opacity) on top
of the $^{56}$Ni-rich inner core decreases with increasing WD mass and
photons start to escape earlier for the more massive models making their
light curves peak faster (compare also the rise times to bolometric
maximum in \citealt{Fink2010a}).
In addition, we note that the radioactive nuclides which are produced in
the helium shell burning have some influence on the initial rise phase
of the light curves (see Figure~8 of \citealt{Fink2010a}).
In the post-maximum decline phase, characterized by the decline
parameter $\Delta m_{15}(B)$ which gives the change in $B$-band
magnitude between maximum and 15 days thereafter, our models show
a peculiar characteristic. Observationally, brighter SNe Ia show
a trend of slower declining light curves (e.g. \citealt{Phillips1993a,Hicken2009a}).
In contrast, the model light curves decline faster along our
sequence from Model 1 to 4 despite increasing brightness
(cf. Table~\ref{tab:modelparas}).
Specifically, we find $\Delta m_{15}(B) \sim 0.88$ for Model 1, which
according to peak brightness would be classified as a subluminous explosion.
However, observationally these events are characterized by a fast decline
[$\Delta m_{15}(B) \sim 1.9$, e.g. \citealt{Garnavich2004a,Taubenberger2008a}].
In contrast, Model 4, which has a $B$-band peak magnitude typical for a
spectroscopically normal SN Ia, yields $\Delta m_{15}(B) \sim 1.77$
which is much faster than typically observed [$\Delta m_{15}(B)
\sim 1$, e.g. \citealt{Hicken2009a}].
Although $\Delta m_{15}(B)$ is lower for the brighter Models 5 and 6,
they still decline too fast compared to observed objects of corresponding
brightness. A similar trend for more rapidly declining light curves
from more massive models is also seen in the bolometric light
curves of Model 2 to 6 (cf. Table~4 of \citealt{Fink2010a}).
In contrast to earlier work by \citet{Hoeflich1996b}, who
studied sub-Chandrasekhar-mass models with significantly more massive
shells, we do not find particularly fast evolution in the post-maximum
decline of our light curves compared to those obtained for other types
of models. Applying our radiative transfer code {\sc artis} for example
to the well-known W7 model \citep{Nomoto1984a,Thielemann1986a}, which is
widely regarded as a good standard for SNe Ia, we find $\Delta m_{15}(B)
\sim 1.6$ when using the same atomic data set as adopted here. This
value is comparable to our fastest declining sub-Chandrasekhar-mass
model and also too fast compared to normal SNe~Ia.
In addition, we note that $\Delta m_{15}(B)$ is also quite sensitive
to the details of the radiative transfer treatment. Using an atomic
data set with $8\times10^6$ lines (the big\_gf-4 data set of \citealt{Kromer2009a}),
for example, yields $\Delta m_{15}(B)\sim1.75$ for W7. In contrast,
a simulation using the atomic data set adopted in this study but
applying a pure LTE treatment for the excitation/ionization state
of the plasma yields $\Delta m_{15}(B)\sim1.95$ for W7. This shows
that systematic uncertainties in the radiative transfer treatment can
affect $\Delta m_{15}(B)$ by several tenths of a magnitude (see also
the comparison of different radiative transfer codes in Figure~7 of
\citealt{Kromer2009a}). Thus, we argue that there is no evidence
that our sub-Chandrasekhar-mass models fade too fast compared to
other explosion models.
\subsection{Colour evolution}
\label{sub:oo_colours}
The most striking difference between our light curves and those of the
comparison objects in Figure~\ref{fig:lightcurves_all} concerns their
colour. To highlight this we show the angle-averaged time-evolution of
the $B-V$, $V-R$ and $V-I$ colours for all our models in Figure~\ref{fig:colours_all}
and compare them again to our fiducial SNe 2005cf, 2004eo, 2001el and 1991bg.
In $B-V$ all our models show positive colour indices for the whole
simulation period and a red peak at $\sim 40$ days after explosion.
Contrary to observed SNe Ia \citep{Lira1995a}, however, we find no
convergence of the different models at epochs after the red peak.
Instead, our models at all times form a sequence of increasingly redder
$B-V$ colour towards the fainter explosions. With the exception of Model
6, all our models are generally too red compared to spectroscopically
normal SNe~Ia. At maximum light, for example, we find $(B-V)$-values
of 1.67 and 0.28 for Model 1 and Model 5, respectively (values for the
other models are given in Table~\ref{tab:modelparas}). In contrast,
spectroscopically normal SNe Ia are characterized by $(B-V)_\mathrm{max}
\sim 0.0\,\mathrm{mag}$.
Subluminous 1991bg-like objects show a redder $B-V$ colour before the
red peak, reaching $B-V\sim 0.4\dots0.7\,\mathrm{mag}$ at $B$-band
maximum \citep{Taubenberger2008a}. Although Model 4 can
reproduce this, it is not a good fit to 1991bg-like objects, since it
is considerably too bright (compare Figure~\ref{fig:lightcurves_all}).
Our subluminous Models 1 and 2, on the other hand, are significantly redder
than observed ($B-V$ colours of 1.67 and 1.51 at maximum, respectively).
In contrast, \citet{Hoeflich1996b} and \citet{Hoeflich1996c} found too
blue colours at maximum light for their subluminous sub-Chandrasekhar-mass
models compared to 1991bg-like objects.
\begin{figure*}
\centering
\plotone{figure4}
\caption{Angle-averaged (thick black line) spectra at three days before
$B$-band maximum for all six of our models as indicated by the labels.
For comparison the blue line shows the de-redshifted and de-reddened
spectrum of SN~2004eo \citep{Pastorello2007b} at the corresponding
epoch. This was scaled such that its maximum matches the maximum of
the model spectrum. The colour coding indicates the element(s)
responsible for both
bound-bound emission and absorption of quanta in the Monte Carlo
simulation. The region below the synthetic spectrum is colour coded
to indicate the fraction of escaping quanta in each wavelength bin
which last interacted with a particular element (the associated atomic
numbers are illustrated in the colour bar). Similarly, the coloured
regions above the spectra indicate which elements were last responsible
for removing quanta from the wavelength bin (either by absorption
or scattering/fluorescence).}
\label{fig:maxspectra_all}
\end{figure*}
The origin of our red colours traces back to the extended layer of titanium
and chromium which is present in the helium shell ejecta of our models. This
will be discussed in more detail in Section~\ref{sub:oo_spectra}.
While the trend of increasingly redder colours for fainter models persists
in the $V-R$ index, our models are not systematically too red here. Models 2
and 3 populate about the right range for 1991bg-like and spectroscopically
normal SNe Ia, respectively. However, the details of their $V-R$ evolution,
especially the initial decline present in all the models, do not match the
observations.
A similar behaviour is found for the $V-I$ colour, where again Models 2 and
3 lie closest to the observed colours of 1991bg-like and spectroscopically
normal SNe Ia, respectively. However, again the agreement is imperfect.
For Model 2 this is most obvious at the latest epochs, where 1991bg-like
objects start to become bluer while the model colour maintains a redward
evolution. Moreover, the model shows a secondary blue minimum, due to the
post-maximum plateau of the $V$-band light curve which is not observed in
1991bg-like objects. In Model 3 the initial decline and the rise to the
red peak are significantly different from the observed behaviour of
spectroscopically normal SNe Ia.
\subsection{Spectra}
\label{sub:oo_spectra}
The properties of our model light curves can be understood from
consideration of the synthetic spectra. Figure~\ref{fig:maxspectra_all}
shows the angle-averaged spectrum of all our models at three days before
$B$-band maximum. For comparison, we also show the spectrum of SN~2004eo
\citep{Pastorello2007b} at the corresponding epoch. The colour coding
below the synthetic spectrum indicates the fraction of escaping quanta
in each wavelength bin which were last emitted in bound-bound transitions
of a particular element. Similarly, the coloured regions above the spectra
indicate which elements were last responsible for removing quanta from a
wavelength bin by bound-bound absorption. This coding allows us to both
identify individual spectral features and track the effect of fluorescence
on the spectrum formation directly. The contributions of bound-free and
free-free emissions to the total flux are so small that they are not
discernible in the figure.
From this it is immediately obvious that the colours of our models
are due to fluorescence in titanium and chromium lines. Having a
wealth of strong lines in the UV and blue, even small amounts of these
elements effectively block radiation in the UV and blue and redistribute
it to redder wavelengths where the optical depths are smaller and the
radiation can escape. Compared to other explosion models, this effect
is particularly strong in the \citet{Fink2010a} models, since they
produce relatively large amounts of titanium and chromium in the outer
layers during the initial helium detonation (cf. Table~\ref{tab:modelparas}
and Figure~\ref{fig:m03_composition}; typical titanium and chromium yields
for other explosion models are on the order of the yields from the core
detonation). This also explains the trend for redder colours in the fainter
models: according to Table~\ref{tab:modelparas}, the production of titanium
and chromium in the shell increases continuously from Model 6 to Model 1.
Since this titanium and chromium layer is located at higher velocities
than most of the intermediate mass elements (cf. Figure~\ref{fig:m03_composition}),
redistribution by titanium and chromium also dilutes the absorption
features of intermediate-mass elements like silicon and sulphur by
reprocessing flux into the relevant wavelength regions. This can be
clearly seen in Figure~\ref{fig:maxspectra_all}. Although Models 1 and
2 produce the largest amounts of silicon of all our models (0.27 and
0.25 solar masses in the core, respectively), they show only a (very)
weak Si\,{\sc ii} $\lambda 6355$ feature since this wavelength region
is strongly polluted by flux redistribution from titanium and chromium.
The only feature of intermediate-mass elements which is clearly visible
in these models is the Ca\,{\sc ii} near-infrared (NIR) triplet at $\lambda\lambda
8498,8542,8662$. This feature remains unburied since calcium exists co-spatially
with titanium and chromium in the outer shell (Figure~\ref{fig:m03_composition}).
Of all our models, Model 3 [$M_\mathrm{core}(^{28}\mathrm{Si})=0.21\,M_\odot$]
produces the strongest Si\,{\sc ii} $\lambda 6355$ feature. For more
massive models the silicon yields from the core detonation drop
dramatically since, owing to the higher densities, a larger fraction
of the core is burned to iron-group elements (see Table~\ref{tab:modelparas}
and Figure~3 of \citealt{Fink2010a}). As expected from studies of pure
detonations of Chandrasekhar-mass WDs \citep{Arnett1971a}, the extreme
case of Model 6 (almost the Chandrasekhar mass) produces no significant
amounts of intermediate-mass elements. Thus the spectrum is totally
dominated by iron-group elements. Since it shows no indication of any
Si\,{\sc ii} $\lambda 6355$ feature, this model would not be classified
as a SN~Ia.
Since none of our models give a good match to the colours and line strengths
of observed SNe~Ia, we will not discuss line velocities in detail. We note,
however, that along the sequence from Model 1 to 5 there is a trend for
higher line velocities of intermediate mass elements. This is obvious in the
Si\,{\sc ii} $\lambda 6355$ feature of our models which is apparent in
Figure~\ref{fig:maxspectra_all}. Along the model sequence (1 to 5), this line
moves to shorter wavelengths compared to the observed Si\,{\sc ii} absorption
feature of SN~2004eo. This arises because the inner boundary of the region
rich in intermediate-mass elements moves to higher velocities with increasing
mass due to the more complete burning in the inner regions (compare Figure~3
of \citealt{Fink2010a}).
Finally, we note that none of our model spectra show any indication of
helium lines, despite our models having up to $\sim0.08\,M_\odot$ of
helium in their outer layers (cf. Table~\ref{tab:modelparas}). Since
the relevant helium lines arise from rather highly excited levels, this might simply be a
consequence of our approximate treatment of the plasma state, which
neglects non-thermal excitation and ionization. We note, however, that
despite using the {\sc phoenix} code which does include a treatment of
non-thermal processes, \citet{Nugent1997a} also found no strong evidence
of helium lines for models with even more ($0.2\,M_\odot$) helium in the
outer shell.
\subsection{Line-of-sight dependence}
\label{sub:oo_los}
As discussed in Section~\ref{sec:models}, our models show strong ejecta
asymmetries due to their ignition in a single-point at the north pole
of the WD (cf. Figure~\ref{fig:m03_composition}). These asymmetries are
expected to have some influence on the observables along different
lines-of-sight. Since they are characteristically the same for all our
models, we first use Model 3 as an example to give a detailed discussion
of line-of-sight dependent spectra and light curves of this model
(Sections~\ref{sub:oo_los_spectra} and \ref{sub:oo_los_lightcurves}).
In Section~\ref{sub:oo_los_all} we then present the variations in peak
magnitudes and colours due to line-of-sight effects for all our models.
\subsubsection{Spectra}
\label{sub:oo_los_spectra}
To obtain line-of-sight dependent observables we bin the escaping photons
into a grid of ten equal solid-angle bins in $\mu=\cos\theta$ with $\theta$
being the angle between the line-of-sight and the $z$-axis of the model.
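Binning into equal solid-angle bins corresponds to equal spacing in $\mu$; a minimal sketch of the bin assignment (illustrative only):
\begin{verbatim}
import numpy as np

def mu_bin_index(theta, n_bins=10):
    # Equal solid-angle bins on the sphere are equally spaced in mu.
    mu = np.cos(theta)
    return min(int((mu + 1.0) / 2.0 * n_bins), n_bins - 1)
\end{verbatim}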
In Figure~\ref{fig:m03_losspectra} we show the maximum-light spectra of
Model 3 for three different directions, corresponding to lines-of-sight
close to the southern polar axis ($\mu=-0.9$), the northern polar axis ($\mu=
0.9$) and equator-on (average of the $\mu=-0.1$ and 0.1 bins). While the
equator-on spectrum looks similar to the angle-averaged spectrum, the
spectra seen from the polar directions are significantly different from
each other as well as from the angle-averaged spectrum. For $\mu=0.9$,
an observer is looking through the extended layer of iron-group elements
in the outer shell of the northern-hemisphere (cf. Figure~\ref{fig:m03_composition}).
This blocks almost all the flux in the UV and blue wavelength range and
the redistribution of this flux leads to a very red spectrum. In contrast,
this layer of iron-group elements in the outer shell is far less extended
on the southern hemisphere, where the shell burning is less efficient.
Thus, an observer with $\mu=-0.9$ sees a bluer spectrum than equator-on.
Equator-on, the extension of the layer of iron-group elements is somewhere
between these two extremes and the spectrum resembles the angle-averaged
case.
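As an aside, the solid-angle binning described above amounts to a simple histogram in $\mu$, since bins of equal width in $\mu$ subtend equal solid angles ($\mathrm{d}\Omega = 2\pi\,\mathrm{d}\mu$). The following minimal sketch (Python with NumPy; the packet arrays and all numbers are illustrative assumptions, not output of our code) shows how escaping Monte Carlo packets could be accumulated into ten such bins:
\begin{verbatim}
import numpy as np

# Hypothetical escaping packets: direction cosine mu = cos(theta) and energy.
rng = np.random.default_rng(0)
mu_packets = rng.uniform(-1.0, 1.0, size=100_000)
energy_packets = rng.exponential(1.0, size=100_000)

# Ten bins of equal width in mu are bins of equal solid angle.
n_bins = 10
edges = np.linspace(-1.0, 1.0, n_bins + 1)
bin_index = np.digitize(mu_packets, edges[1:-1])   # indices 0 .. n_bins-1

# Energy escaping per viewing-angle bin; dividing by the bin solid angle
# (2*pi*Delta_mu) would give the emergent flux towards each direction.
energy_per_bin = np.bincount(bin_index, weights=energy_packets,
                             minlength=n_bins)
print(energy_per_bin / (2 * np.pi * np.diff(edges)))
\end{verbatim}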
\begin{figure}
\centering
\plotone{figure5}
\caption{Line-of-sight dependent maximum-light spectra for Model 3.
To indicate the maximal effect, spectra seen pole-on are plotted
and compared to a spectrum seen equator-on. The corresponding
lines are identified by the colour coding. For comparison the
angle-averaged spectrum is shown as the dashed black line.}
\label{fig:m03_losspectra}
\end{figure}
\begin{figure*}
\centering
\plotone{figure6}
\caption{Selected line-of-sight dependent light curves of Model 3 as
indicated by the colour coding. For comparison, angle-averaged
light curves (black dashed) and photometric data of our fiducial
SNe 2005cf, 2004eo, 2001el and 1991bg (different symbols) are shown.}
\label{fig:m03_lightcurves}
\end{figure*}
A secondary effect results from the off-centre ignition of the core. Since
the core has been compressed less strongly on the northern hemisphere, the
core detonation yields more intermediate-mass elements on the northern
than on the southern hemisphere (cf. Figure~\ref{fig:m03_composition} and
the Discussion in Section~4.3 of \citealt{Fink2010a}). This leads to
stronger features of intermediate-mass elements in the spectrum seen along
the northern polar-axis and becomes particularly visible in the strength
of the Si\,{\sc ii} $6355\,\text{\AA}$ feature and the Ca\,{\sc ii} NIR-triplet
which become weaker from $\mu=0.9 \rightarrow -0.9$.
\newpage
\subsubsection{Lightcurves}
\label{sub:oo_los_lightcurves}
Figure~\ref{fig:m03_lightcurves} shows band-limited synthetic light
curves from our radiative transfer simulations for Model 3 as seen
equator-on and from the two polar directions. As already noted for
the maximum-light spectra, our model shows a trend for increasingly
red colours from $\mu=-0.9 \rightarrow 0.9$. Similarly, we find a
clear dependence of the light curve rise and decline times on the
line-of-sight. While the rise times increase along the sequence
$\mu=-0.9 \rightarrow 0.9$ (for $B$ band, for example, we find rise
times between 16.8 and 19.7 days for $\mu=-0.9$ and $\mu=0.9$,
respectively), the light curve declines more slowly [$\Delta m_{15}(B)=
1.91$ and $\Delta m_{15}(B)=1.29$ for $\mu=-0.9$ and $\mu=0.9$,
respectively]. Moreover, we find a clear trend for a weaker dependence
of the light curves on the line-of-sight at lower photon energies.
Thus, while $U$ and $B$ band show a variation of $\sim3$ and $\sim2$
magnitudes at maximum light, respectively, the variation in $V$ band
is already less than half a magnitude and in the NIR bands there is
virtually no viewing-angle effect.
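For reference, the rise time and decline-rate parameter quoted above can be extracted from a band-limited light curve by straightforward interpolation. The following is a minimal sketch (Python/NumPy; the parabolic toy light curve and the explosion epoch are purely illustrative assumptions):
\begin{verbatim}
import numpy as np

# Hypothetical B-band light curve: time since explosion (days) and magnitude.
t = np.linspace(5.0, 40.0, 200)
m_B = 19.0 + 0.005 * (t - 18.0)**2        # toy parabola peaking at t = 18 d

i_peak = np.argmin(m_B)                   # brightest point = smallest magnitude
t_max, m_max = t[i_peak], m_B[i_peak]

rise_time = t_max                         # time from explosion to B maximum
m_15 = np.interp(t_max + 15.0, t, m_B)    # magnitude 15 days after maximum
delta_m15 = m_15 - m_max                  # Delta m_15(B)

print(f"t_max(B) = {rise_time:.1f} d, Delta m15(B) = {delta_m15:.2f} mag")
\end{verbatim}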
All these effects are due to the asymmetry of the layer of iron-group
elements in the outer shell. Since this layer is more extended on the
northern hemisphere it causes additional line blocking and thus enhanced
fluorescence and photon trapping for inclinations close to $\mu=1$. This
explains the redder colours as well as the increasing rise and decreasing
decline times for $\mu=-0.9 \rightarrow 0.9$. For lower photon energies,
the asymmetry of the outer shell is less important, since the optical
depths are smaller and photons typically escape from deeper layers of
the ejecta. In the NIR bands the entire ejecta contribute to the emitted
photons and the viewing-angle
dependence of the light curves is very small.
This is illustrated in Figure~\ref{fig:m03_lastscat_snapshots} which shows
where photons of selected bands were last emitted before they escaped from
the supernova ejecta at different times. While $U$-band photons leak out
predominantly from the regions on the southern hemisphere, where the layer
of iron-group elements is least extended, $I$-band photons show no strong
preference for a particular region. In fact, even before maximum light
the whole ejecta contribute to $I$-band emission. $V$-band photons, in
contrast, show some preference for leaking out from the southern hemisphere
before and immediately after maximum light. From about 10 days after
maximum light this preference disappears. This is directly reflected
in the $V$-band light curve in Figure~\ref{fig:m03_lightcurves} which
becomes viewing-angle independent at about that time.
\begin{figure*}
\centering
\plotone{figure7}
\caption{Region of last emission for selected bands ($U$, $V$, $I$
from top to bottom) and different times (from left to right). The epochs
indicated in the bottom panels are given in days with respect to $B$-band
maximum. Dark regions contribute most to the flux escaping in the band.
The model is symmetric under rotation about the $z$-axis.}
\label{fig:m03_lastscat_snapshots}
\end{figure*}
Finally, we note that the strong line-of-sight dependence of our synthetic
observables poses an additional problem for the \citet{Fink2010a} models.
Even if some particular line-of-sight might compare more favourably to
observed SNe~Ia than others (as does for example the southern line-of-sight
for the light curves in Figure~\ref{fig:m03_lightcurves}), all lines-of-sight
should occur in nature -- including the most peculiar ones. Moreover,
the large dispersion in brightness at $B$-band maximum which we find in
our model is in conflict with observations.
\subsubsection{Other models}
\label{sub:oo_los_all}
Since the other models in the \citet{Fink2010a} sequence have the same
characteristic asymmetries, their light curves and spectra show a similar
viewing-angle dependence. However, the strength of this viewing-angle
dependence varies between the models, due to their different helium shell
masses. To demonstrate this, Figure~\ref{fig:los-dependence_all} shows
selected light curve properties of all models for the ten different
viewing directions within our uniform grid in $\mu$.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{figure8}
\caption{Light curve properties of the \citet{Fink2010a} models for 10
different viewing directions (different models are indicated by the
colour coding). From top to bottom the panels show the rise-time to
$B$-band maximum [$t_\mathrm{max}(B)$], the decline in $B$-band between
maximum light and 15 days thereafter [$\Delta m_{15}(B)$], the $B$-band
peak magnitude ($M_\mathrm{B,max}$), the $B-V$ colour at $B$-band maximum
[$(B-V)_\mathrm{Bmax}$] and the $V-R$ colour at $B$-band maximum
[$(V-R)_\mathrm{Bmax}$], respectively. The lines connecting the different
data points are just to guide the eye.}
\label{fig:los-dependence_all}
\end{figure}
As expected, the viewing-angle dependence decreases for smaller helium
shell mass. Thus, we find a scatter of more than 2 magnitudes for the
brightness at $B$-band maximum between the different lines-of-sight in
Model 1 and 2, while Model 6 shows only a scatter of $\sim 0.3$ magnitudes.
Similar trends can be observed for the $B-V$ and $V-R$ colours at $B$-band
maximum. While Model 6 shows almost no dependence on the line-of-sight,
the less massive models show a very clear trend for redder colours towards
$\mu=0.9$ (as discussed for Model 3 in detail above).
For the rise times the situation is similar: while Models 1, 2 and 3
(relatively massive shells) show a clear trend for increasing $B$-band
rise times from $\mu=-0.9$ to $\mu=0.9$ due to the enhanced photon
trapping by optically thick lines of iron-group elements, we do not
observe a clear trend for Models 4, 5, and 6 (less massive shells).
Recall that the average rise times increase from Model 1 to 3, while
Models 4 to 6 form a sequence of decreasing rise times. This was
already discussed in Section~\ref{sub:oo_lcs}.
The enhanced trapping of photons by the iron-group layer of the northern
hemisphere also affects the post-maximum decline rate in $B$ band. Thus,
we find decreasing values of $\Delta m_{15}(B)$ from $\mu=-0.9$ to
$\mu=0.9$. This trend persists for all our models, although it is
weaker for the models with the least massive shells.
\subsection{$\gamma$-ray emission}
\label{sub:oo_gamma}
Due to their peculiar composition including a mixture of the radioactive
isotopes $^{56}$Ni, $^{52}$Fe and $^{48}$Cr close to the surface,
$\gamma$-observations might provide an additional discriminant between
our sub-Chandrasekhar-mass models and more standard explosion models
which do not show radioactive isotopes close to the surface. To investigate
this, Figure~\ref{fig:gamma} shows $\gamma$-ray light curves and spectra
for Models 1, 3 and 6 of our sequence. Line identifications are given for
some important features in Table~\ref{tab:gammalineidentifiers}.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{figure9}
\caption{$\gamma$-ray light curves (top panel) and spectra (lower panels)
for Models 1, 3 and 6 of \citet{Fink2010a} as indicated by
the colour coding. The spectra shown are for two different epochs which
correspond roughly to maximum light in $B$ band (20 days, middle panel)
and in $\gamma$-rays (60 days, bottom panel). Line features discussed
in the text are identified by labels (A -- F) in the middle panel (for their
identification, see Table~\ref{tab:gammalineidentifiers}). In addition, the $^{56}$Ni,
$^{56}$Co and $^{48}$V line systems are indicated. The line systems of $^{48}$Cr,
$^{52}$Fe and $^{52}$Mn are omitted, since their lifetimes are
sufficiently short that these nuclei have already decayed at the
shown epochs. Note that at 20 days the line features in the spectra
are generally offset towards higher energies compared to the line
identifications, due to blue-shifted emission. At 60 days the emission
comes from regions of lower velocity and the offset disappears.
}
\label{fig:gamma}
\end{figure}
\begin{deluxetable*}{ccl}
\tabletypesize{\scriptsize}
\tablecaption{Identification of $\gamma$-lines in Figure~\ref{fig:gamma}. \label{tab:gammalineidentifiers}}
\tablehead{Identifier & Photon energy (MeV) & Source}
\startdata
A & 0.158 & $^{56}$Ni \\
B & 0.270 & $^{56}$Ni \\
C & 0.511 & Annihilation of positrons from $^{56}$Co and $^{48}$V\\
D & 0.750 & $^{56}$Ni \\
E & 0.812 & $^{56}$Ni \\
F & 1.562 & $^{56}$Ni \\
\enddata
\end{deluxetable*}
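To put the remark about the short-lived nuclei on a quantitative footing, the surviving parent fractions at the two epochs shown can be estimated from simple exponential decay. The sketch below (Python; the half-lives are approximate literature values and the script only tracks parent survival, not the full Bateman chains) illustrates why the $^{48}$Cr, $^{52}$Fe and $^{52}$Mn line systems are irrelevant at 20 and 60 days, while $^{56}$Ni, $^{56}$Co and $^{48}$V still contribute:
\begin{verbatim}
import numpy as np

# Approximate half-lives in days (literature values, indicative only).
half_life = {
    "Ni56": 6.1, "Co56": 77.2, "Cr48": 0.90,
    "V48": 16.0, "Fe52": 0.35, "Mn52": 5.6,
}

for t in (20.0, 60.0):   # epochs of the spectra shown in the figure
    surviving = {iso: np.exp(-np.log(2.0) * t / t_half)
                 for iso, t_half in half_life.items()}
    summary = ", ".join(f"{iso}: {frac:.1e}" for iso, frac in surviving.items())
    print(f"t = {t:4.0f} d  ->  {summary}")
\end{verbatim}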
Broadly speaking, our $\gamma$-ray spectra are not dramatically
different from those obtained by \citet{Sim2008a} for parameterised
Chandrasekhar-mass models: they are dominated by strong emission lines,
mainly due to $^{56}$Co, and a continuum which results from Compton
scattering of line photons.
Models 1 and 3 have characteristically similar $\gamma$-ray light curves.
After an initially fast rise (lasting for about 5 and 10 days after the
explosion for Models 3 and 1, respectively), their light curves show a small
plateau before a second rise to maximum light at about 60 days.
In contrast, Model 6 shows no early rise/plateau but reaches maximum at
about 60 days in one continuous rise. These differences result from the
significantly different masses of the helium shells of the models. Model~1
has a rather massive shell of $0.126\,M_\odot$ with $\sim 0.02\,M_\odot$
of radioactive isotopes. The fast initial rise of the light curve of Model
1 is caused by $\gamma$-photons which originate from this outer shell.
Due to the small optical depth in the outer shell, those $\gamma$-photons
start to stream freely at about 10 days and the emerging flux from the
outer shell decreases. At the same time, however, $\gamma$-photons from
the C/O core start to escape and keep the light curve rising until the
core also becomes transparent to $\gamma$-rays at about 60 days after which
the $\gamma$-photons start to stream freely.
For Model 3, we observe the same effect. However, due to its lower helium
shell mass ($M_\mathrm{sh}=0.055\,M_\odot$) $\gamma$-photons from the shell
escape even earlier (at about 5 days). Since the detonation of the C/O core
of this model produces much more $^{56}$Ni, the $^{56}$Ni-rich region is
far more extended than in Model 1. This means that $^{56}$Ni nuclei from the
C/O core are present in regions where the Compton optical depth is relatively
low, allowing their $\gamma$-photons to more easily escape. Thus, photons
from the core dominate the rise earlier than in Model 1. In the extreme
limit of Model 6, the shell ($M_\mathrm{sh}=0.0035\,M_\odot$) is completely
negligible and the massive $^{56}$Ni region in the C/O core extends so close
to the surface that the rise of the light curve is totally dominated by
$\gamma$-photons escaping from the C/O core.
A similar effect shows up in the early-time (20 days) $\gamma$-ray spectra
of our models. While Model 6 shows a clear indication of the 0.158 and
0.270 MeV lines of $^{56}$Ni, those are invisible for Model 1. Since the
Compton cross-section increases with decreasing photon energy, low-energy
lines are most easily buried by photons that are Compton down-scattered from
higher energies and can only be observed if $^{56}$Ni is present at low
optical depths \citep{Gomez1998a}. Thus the presence of the 0.158 and
0.270 MeV lines of $^{56}$Ni in Model~6 is a direct consequence of the
large extension of the $^{56}$Ni bubble in this model, compared to Model~1.
The harder $^{56}$Ni lines at 0.750, 0.812 and 1.562 MeV, in contrast,
are also visible in Model~1. However, they are weaker due to the smaller
mass of $^{56}$Ni synthesized in Model 1. Note that despite containing
$^{48}$Cr or $^{48}$V close to the surface, our models show no clear
features of these radioactive isotopes in their $\gamma$-ray spectra.
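The energy dependence invoked above can be quantified with the total Klein--Nishina cross-section. The sketch below (Python; a minimal evaluation of the standard formula, not an excerpt from our radiative transfer code) compares the cross-section at the soft 0.158 MeV $^{56}$Ni line with that at the hard 1.562 MeV line:
\begin{verbatim}
import numpy as np

R_E = 2.818e-13     # classical electron radius in cm
M_E_C2 = 0.511      # electron rest energy in MeV

def klein_nishina_total(energy_mev):
    """Total Klein-Nishina cross-section (cm^2) for a photon energy in MeV."""
    x = energy_mev / M_E_C2
    term1 = (1 + x) / x**2 * (2 * (1 + x) / (1 + 2 * x)
                              - np.log(1 + 2 * x) / x)
    term2 = np.log(1 + 2 * x) / (2 * x)
    term3 = -(1 + 3 * x) / (1 + 2 * x)**2
    return 2 * np.pi * R_E**2 * (term1 + term2 + term3)

soft, hard = klein_nishina_total(0.158), klein_nishina_total(1.562)
print(f"sigma(0.158 MeV) = {soft:.2e} cm^2")
print(f"sigma(1.562 MeV) = {hard:.2e} cm^2, ratio = {soft / hard:.1f}")
\end{verbatim}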
An interesting effect concerns the 511 keV annihilation line:
for Models 3 and 6 the strength of this line increases significantly from
20 to 60 days, but it does not for Model 1. In Model 1, the 511 keV line
is dominated by positrons from $^{48}$V at 20 days. Being located in the
outer layers, the annihilation photons escape easily, making the line strong.
At 60 days most of the $^{48}$V has already decayed. Then, the 511 keV
line results from the annihilation of $^{56}$Co positrons from the C/O
core. For Models 3 and 6, in contrast, the line is always dominated by
annihilation photons originating from $^{56}$Co positrons in the C/O
core. Due to the longer lifetime of $^{56}$Co and the longer time it
takes for photons to escape from the core, the strength of the 511 keV
line increases from 20 to 60 days.
\section{Discussion}
\label{sec:discussion}
In the last Section we presented synthetic observables for the
sub-Chandrasekhar-mass double detonation models with minimum helium
shell mass of \citet{Fink2010a}. Compared to observed SNe~Ia, these
models show some promising features:
\begin{enumerate}
\item They predict a wide range of brightnesses that covers the
whole range of observed SNe~Ia.
\item Their light curve rise and decline times are for most bands
in reasonable agreement with those observed.
\end{enumerate}
But despite these positive features the \citet{Fink2010a} models
cannot account for all the properties of observed SNe~Ia since they
have peculiar light curves and spectra:
\begin{enumerate}
\item The colours of the models are too red compared to observed
SNe~Ia. This is particularly obvious in the evolution of the
$B-V$ colour, where all the models are redder than
spectroscopically normal SNe~Ia for all epochs.
\item The model spectra cannot reproduce the strong features of
intermediate-mass elements typical of SNe~Ia at maximum light.
\end{enumerate}
In detail, there are further problems concerning the exact light curve
shapes and decline rates as well as a strong viewing-angle dependence
which is caused by the point-like ignition of our models.
\begin{figure*}
\centering
\plotone{figure10}
\caption{Selected line-of-sight dependent light curves of Model 3c as
indicated by the colour coding. For comparison, angle-averaged
light curves (black dashed) and photometric data of our fiducial
SNe 2005cf, 2004eo, 2001el and 1991bg (different symbols) are shown.}
\label{fig:m03_nohe_lightcurves}
\end{figure*}
We have argued that all these problems are mainly due to the burning
products of the helium shell. Moreover, \citet{Sim2010a} have recently
shown that detonations of centrally ignited spherically symmetric naked
sub-Chandrasekhar-mass C/O WDs are capable of reproducing the observed
diversity of light curves and spectra of SNe~Ia -- at least to a similar
level of agreement as models of the more standard Chandrasekhar-mass
delayed detonations. This naturally leads us to speculate on whether the
helium shell in double detonation models could be altered in some way to
reduce its negative impact on the synthetic observables.
\subsection{Influence of the helium shell}
\label{sub:disc_nohemodel}
We first explicitly investigate the extent to which the shortcomings
and viewing-angle dependence of our models can indeed be attributed to
the helium shell. For that purpose we constructed (for Model 3) a toy
model which contains only the burning products of the detonation in
the initial C/O core but not those of the initial helium shell. For
the models of \citet{Fink2010a} this can be done in a straightforward
manner: since the models use two different sets of tracer particles to
simulate the nucleosynthetic yields of core and shell burning respectively
(see Section~3.3 of \citealt{Fink2010a}), we obtain such a model by
restricting our SPH-like reconstruction algorithm (Section~\ref{sec:rt})
to the core tracers. Properties of this ``core-only'' model (hereafter 3c)
are listed in Table~\ref{tab:modelparas}.
Figure~\ref{fig:m03_nohe_lightcurves} shows band-limited synthetic light
curves obtained from our radiative transfer simulations for this model
as seen equator-on and from the two polar directions. In contrast to the
light curves of Model 3 in Figure~\ref{fig:m03_lightcurves}, which are
strongly dependent on the viewing angle, the light curves of Model 3c show
only a moderate line-of-sight dependence. Thus, at maximum light we now
find only a variation of $\sim 0.5$ and $\sim 0.2$ magnitudes for $U$ and
$B$ band, respectively. For Model 3 these values were significantly larger
($\sim3$ and $\sim2$ magnitudes, respectively). Redder bands show no
significant line-of-sight dependence in Model 3c.
Moreover, the light curves of Model 3c give an excellent representation of
SN~2004eo in the $B$, $V$ and $R$ bands. $U$ and $I$ are not in perfect
agreement, but still reasonable compared with the agreement between other
first-principles explosion models and observed SNe~Ia (e.g. \citealt{Kasen2009a}).
In particular, the colours of this toy model are now in good agreement
with observed SNe~Ia and no longer too red, as was the case for Model 3. In
the NIR bands, in contrast, the agreement is no better. However, the NIR
light curves are much more difficult to model accurately since they require
simulations with an extensive atomic data set to properly simulate flux
redistribution by fluorescence which strongly affects these bands \citep{Kasen2006a,Kromer2009a}.
Here, however, we have restricted ourselves for computational reasons to
a simplified atomic data set (cd23\_gf-5 of \citealt{Kromer2009a}) with
only $\sim 400,000$ lines. This has been shown to give reliable results in
the optical bands which, for our purposes, are the most important since
they are the most different between our toy Model 3c and Model 3 of
\citet{Fink2010a}. In the NIR bands, in contrast, Model 3c and Model 3 give
rather similar results (see Figure~\ref{fig:compare_m03_lightcurves}).
The good agreement between our toy Model 3c and observational data
holds not only for band-limited light curves but also for individual
spectral features, as can be seen from Figure~\ref{fig:m03_nohe_spectrum}
which shows a spectrum of Model 3c at 3 days before $B$-band maximum.
Compared to SN~2004eo, our toy model succeeds in reproducing the
characteristic spectral features of intermediate-mass elements in
SNe~Ia. This is highlighted by our colour coding. Moreover, it shows
an overall flux distribution which is in almost perfect agreement with
the observed spectrum of SN~2004eo and we see no strong flux redistribution
by titanium (compare with Model 3 in Figure~\ref{fig:maxspectra_all}).
\begin{figure}
\centering
\plotone{figure11}
\caption{Angle-averaged spectrum (thick black line) at three days before
$B$-band maximum for Model 3c. For comparison, the blue line shows the
de-redshifted and de-reddened spectrum of SN~2004eo \citep{Pastorello2007b}
at the corresponding epoch. Note that the flux is here given in physical
units and not scaled as in Figure~\ref{fig:maxspectra_all}. For a
description of the colour coding see Figure~\ref{fig:maxspectra_all}.}
\label{fig:m03_nohe_spectrum}
\end{figure}
This confirms our conclusion from Section~\ref{sub:oo_los} that the
peculiarities of our model spectra with respect to the observations
and their strong viewing-angle dependence are mainly due to the shell
material and its compositional asymmetries. It also shows that the
off-centre ignition of the secondary detonation in the C/O core causes
only a minor viewing-angle dependence which is on the order of the
observed variation between SNe~Ia.
\subsection{Prospects}
\label{sub:disc_modifiedmdoel}
In light of the discussion above, we are motivated to speculate on how the
influence of the helium shell might be reduced. In the sub-Chandrasekhar-mass
double detonation scenario, the helium shell cannot be removed entirely since
it is required to trigger the detonation. Also the helium shell mass adopted
in the \citet{Fink2010a} models is already the minimum that might be expected
to detonate \citep{Bildsten2007a}.
In Section~\ref{sec:oo}, however, we have argued that the differences
between our model spectra and observations are not a consequence of the
helium itself but of its particular burning products, namely titanium
and chromium produced in the outer layers. The yields of these elements
are affected by details of the nucleosynthesis in the shell.
The degree of burning in the shell material (and thus its final composition)
can be affected by the initial abundance of heavy nuclei (e.g.\ $^{12}$C)
which in turn depends strongly on triple-$\alpha$ reactions during previous
hydrostatic burning and dredge-up phases from the core \citep{Shen2009a}.
Since the time-scale for $\alpha$-captures behind the detonation
shock front is significantly shorter than that of triple-$\alpha$ reactions,
such seed-nuclei can limit the $\alpha$-chain before reaching nuclear
statistical equilibrium. If, for example, in a shell consisting of a mixture
of $^{4}$He and $^{12}$C the number ratio of free $\alpha$-particles to
$^{12}$C-nuclei on average is less than 6 (corresponding to a mass ratio
of 2), the $\alpha$-chain will end at $^{36}$Ar. Thus, it is possible that
more intermediate-mass elements and less titanium and chromium may be produced.
Therefore it is interesting to consider how the burning of the helium might
be different from that found by \citet{Fink2010a} for different initial
compositions of the shell. A full study of this goes beyond the scope of
this work and will be published in a follow-up study. Here, we illustrate
the possibility of obtaining better agreement with data for just one
example.
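The arithmetic behind these thresholds can be spelled out explicitly. The short sketch below (Python; an idealised pure $\alpha$-chain that ignores all side reactions) verifies that six $\alpha$-captures on $^{12}$C, i.e.\ a helium-to-carbon mass ratio of $6\times4/12=2$, terminate the chain at $^{36}$Ar:
\begin{verbatim}
# Idealised alpha-chain starting from 12C: each capture adds Z += 2, A += 4.
symbol = {6: "C", 8: "O", 10: "Ne", 12: "Mg", 14: "Si", 16: "S", 18: "Ar",
          20: "Ca", 22: "Ti", 24: "Cr", 26: "Fe", 28: "Ni"}

Z, A = 6, 12                      # start from carbon-12
chain = [f"{A}{symbol[Z]}"]
for n_capture in range(1, 12):
    Z, A = Z + 2, A + 4
    chain.append(f"{A}{symbol[Z]}")
    if n_capture == 6:
        # 6 alphas per 12C nucleus <=> mass ratio 24/12 = 2
        print("after 6 captures:", chain[-1])

print(" -> ".join(chain))         # full chain up to 56Ni
\end{verbatim}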
\begin{figure*}
\centering
\plotone{figure12}
\caption{Angle-averaged $UVOIR$ bolometric and $U$,$B$,$V$,$R$,$I$,$J$,$H$,$K$
band-limited light curves for Models 3, 3c and 3m of our model sequence
as indicated by the colour coding (compare Table~\ref{tab:modelparas}
for details on the models). For comparison, angle-averaged
light curves (black dashed) and photometric data of our fiducial
SNe 2005cf, 2004eo, 2001el and 1991bg (different symbols) are shown.}
\label{fig:compare_m03_lightcurves}
\end{figure*}
In the \citet{Fink2010a} models it was initially assumed that the shell
consisted of pure helium. To demonstrate the sensitivity to the initial
composition of the shell, we set up another toy model. For this ``modified''
model (hereafter 3m), we homogeneously polluted the shell of Model 3 with
34\% (by mass) of $^{12}$C and repeated the hydrodynamics and nucleosynthesis
calculation (in the same way as described by \citealt{Fink2010a})\footnote{
We note that the base temperature of the shell in Model 3m was decreased
to $4\times10^8\,\mathrm{K}$ (compared to $6.7\times10^8\,\mathrm{K}$ in
Model 3) to suppress further triple-$\alpha$ burning in the shell. As a
consequence, the density and also the shell mass change slightly compared
to the original Model 3 (cf. Table~\ref{tab:modelparas}).}.
We found that a core detonation was still triggered but the different
shell burning led to a substantial reduction of the mass of $^{44}$Ti,
$^{48}$Cr and $^{52}$Fe in the shell (nucleosynthetic yields for core
and shell of the model are given in Table~\ref{tab:modelparas}).
Since the detonation tables of \citet{Fink2010a} are only valid for
pure helium, Model 3m is not fully self-consistent. Nevertheless, it
is a useful toy model to explore the basic effect of a modified shell
composition.
Figure~\ref{fig:compare_m03_lightcurves} compares the angle-averaged
band-limited light curves of this modified model (3m), to those of Model 3
of \citet{Fink2010a} and our core-only toy model (3c). As can be seen,
the modified Model 3m produces light curves very similar to those of
Model 3c despite having about the same shell mass as Model 3.
The most obvious difference between our modified and core-only models
occurs in the $U$ band: the titanium in the outer layer of Model 3m causes
some line blocking, leading generally to a dimmer $U$-band magnitude than
for Model 3c which has no outer layer. Compared to Model 3, with its large
titanium mass in the shell, however, this effect is much weaker. The
$B$ band, which is strongly affected in Model 3, shows no significant
titanium absorption for Model 3m. Another slight difference between Models
3m and 3c occurs after the first peak in the $I$ band. Comparing the light
curves of Model 3m to SN~2004eo, we find qualitatively similar agreement
as for Model 3c.
This generally also holds for the angle-averaged spectrum at 3 days
before $B$-band maximum, shown in Figure~\ref{fig:m03_modified_spectrum}.
Compared to Model 3c, where the agreement was almost perfect, there are
some minor shortcomings. But the model is dramatically improved compared
to Model 3 of \citet{Fink2010a}. The small differences between Models 3c
and 3m are again mostly due to the titanium in the outer layers which leads
to pronounced absorption troughs bluewards of the Ca\,{\sc ii} H and K
lines and redwards of $4,000\,\text{\AA}$. This suggests that Model 3m
still over-produced titanium in the shell. Interestingly, the enhanced
calcium abundance in the outer layers (cf. Table~\ref{tab:modelparas})
leads to a stronger Ca\,{\sc ii} NIR triplet, bringing the model in better
agreement with the spectrum of SN~2004eo at the corresponding epoch.
Therefore some \emph{calcium} in the outer shell is an \emph{improvement}
over Model 3c. This suggests that a slight further reduction in the
degree of burning so that titanium is further suppressed in favour of
calcium (just one step down the $\alpha$-chain) could lead to very good
agreement.
\begin{figure}
\centering
\plotone{figure13}
\caption{Angle-averaged spectrum (thick black line) at three days before
$B$-band maximum for Model 3m. For comparison, the blue line shows the
de-redshifted and de-reddened spectrum of SN~2004eo \citep{Pastorello2007b}
at the corresponding epoch. Note that the flux is here given in physical
units and not scaled as in Figure~\ref{fig:maxspectra_all}. For a
description of the colour coding see Figure~\ref{fig:maxspectra_all}.}
\label{fig:m03_modified_spectrum}
\end{figure}
In summary, polluting the initial helium shell of Model 3m with $^{12}$C
significantly improved the agreement between our synthetic spectra and
light curves and those observed for SNe~Ia, making this model a promising
progenitor candidate for SNe~Ia. We stress again that this improvement
results only from the change in the composition of the burning products
of the helium shell which contains much less titanium and chromium for
this model (the total shell mass stays about the same). Given that the
initial composition of the helium shell depends on several processes
including details of the accretion physics and hydrostatic burning phases
that might precede the detonation or possible dredge-up of material from
the C/O core, this leaves some scope to find sub-Chandrasekhar-mass
double detonation models which give reasonable agreement with observed
SNe~Ia. This, however, must be investigated by future follow-up studies
that more fully explore the influence of the initial composition of the
helium shell on the burning products and link the initial composition
of the helium shell directly to the evolution of progenitor models.
Moreover, we note that different ignition geometries might also lead
to better agreement with observational data. In particular, more
symmetric ignition geometries, e.g. ignition in an equatorial ring
or simultaneous ignition in multiple points (as studied by \citealt{Fink2007a}
for the case of more massive helium shells), are likely to alleviate
the strong viewing-angle dependence found for the point-like ignition
of the \citet{Fink2010a} models.
Our results also highlight the strong sensitivity of the radiative transfer
to particular elements/ions (in our case titanium and chromium which
represent only a tiny fraction of the ejecta mass yet dominate our
conclusions). This emphasizes the need for a better description of
nuclear reaction rates and continued study of the radiative transfer
processes (and atomic data) in order to quantify more fully the systematic
uncertainties which arise due to the complexity of spectrum formation
in supernovae.
In particular, we note that almost all the flux redistribution done by
titanium and chromium in our models is due to their singly ionized states.
Since the current ionization treatment of {\sc artis} neglects non-thermal
processes (see \citealt{Kromer2009a} for more details), we cannot
exclude that the actual ionization state in the helium shell ejecta would
be higher due to non-thermal ionization from the radioactive isotopes
produced during the helium burning. This could also significantly improve
the agreement between our models and observational data, as a numerical
experiment with an artificially enforced higher ionization state for titanium
and chromium has shown.
\section{Conclusion}
\label{sec:conclusions}
In this paper we presented synthetic observables for the sub-Chandrasekhar-mass
double detonation models of \citet{Fink2010a}.
We found that these models predict light curves which rise and fade on
time-scales typical of SNe~Ia. Moreover, they produce a large range of
brightnesses which covers the whole range of observed SNe~Ia. However,
they do not account for all the properties of observed SNe~Ia since they
have peculiar spectra and light curves. In particular, their $B-V$ colours
are generally too red compared to observed SNe~Ia. This is in contrast to
the results of earlier work on models with more massive helium shells
\citep{Hoeflich1996b,Hoeflich1996c,Nugent1997a}. In addition, our model
light curves and spectra show an unreasonably strong viewing-angle
dependence due to the point-like ignition of the \citet{Fink2010a}
models and the resulting ejecta asymmetries.
Detonation of a pure helium shell leads to a layer containing iron-group
elements like titanium and chromium around the core ejecta. These
elements have a vast number of optically thick lines in the UV and
blue part of the spectrum making them very effective in blocking the
flux in these wavelength regions and redistributing it to the red.
We used a toy model to show that this layer of titanium and chromium
causes the peculiar red colours of our light curves and also the
peculiar spectral features. Moreover, we found that this toy
model reproduces the observed properties of SNe~Ia remarkably well.
The toy model also showed that the strong viewing-angle dependence
of our models results from the compositional asymmetry in the helium
shell ejecta and not from the off-centre ignition of the C/O core.
We stress that the additional energy release in the shell, due to the
production of radioactive nuclides during the helium burning, is relatively
inconsequential for our models -- even at $\gamma$-ray energies the
signatures of the surface $^{48}$Cr and $^{52}$Fe are not apparent.
Instead, in the optical/UV the shell has a strong signature but this
is primarily due to the additional opacity in the outer layers which
affects the transport of energy from the core to the surface. We
conclude that, if the double detonation sub-Chandrasekhar-mass model is
valid for normal SNe~Ia, the properties of the post-burning helium
shell material need to be different from those in the \citet{Fink2010a}
models.
Since \citet{Fink2010a} considered the limit of the least massive helium
shells that might still ignite a detonation, their models
represent the most optimistic case for reducing the influence of
the shell material by simply reducing the shell mass. However, we argue
that the mass of the helium shell ejecta is not the main problem but
rather the peculiar composition including comparably large masses of
titanium and chromium. We illustrated this using a second toy model,
where the initial composition of the helium shell was polluted with 34\%
(by mass) of $^{12}$C. By providing additional seed-nuclei for
$\alpha$-captures, this leads to burning products with lower atomic number
(i.e.\ intermediate-mass elements rather than mainly iron-group elements).
Spectra and light curves of this model, which has about the same shell mass
as Model 3 of \citet{Fink2010a}, agree with observed SNe~Ia roughly as well
as those of the shell-less toy model.
Taking into account all these results, we argue that these systems might
yet be promising candidates for SN~Ia progenitors. Much more work will
be needed to properly investigate this possibility. Besides a more
detailed description of the excitation/ionization state in the radiative
transfer modelling which includes non-thermal effects, we need a better
understanding of the initial composition of the helium shell and the
incomplete burning processes which take place in this material to reach
reliable predictions of the burning products of the helium shell.
Although only a tiny fraction of the mass, the post-burning composition
of the shell material is critical to assessing the viability of the
sub-Chandrasekhar-mass double detonation scenario.
\section*{Acknowledgments}
We thank R.~Pakmor and S.~Taubenberger for many helpful comments and our
referee, David Branch, for a supportive report. The simulations presented
here were carried out at the Computer Center of the Max Planck Society in
Garching, Germany. This work was supported by the Deutsche
Forschungsgemeinschaft via the Transregional Collaborative Research Center
TRR 33 ``The Dark Universe'', the Excellence Cluster EXC153 ``Origin and
Structure of the Universe'' and the Emmy Noether Program (RO 3676/1-1).
\bibliographystyle{apj}
\section{Introduction}
We are interested in the numerical discretization of linear convection diffusion equations in a bounded domain with anisotropic diffusion and mixed Dirichlet-Neumann boundary conditions. More precisely, our aim is the preservation of the large time behavior of solutions at the discrete level. At the continuous level, the behavior of solutions in the large can be quantified thanks to so-called (relative) entropies. These global quantities depend on time and involve the solutions to the evolution and the stationary equations. They usually control a distance between the solution and the steady state. In dissipative models, thanks to appropriate functional inequalities of Poincaré or convex Sobolev type \cite{arnold_markowich_unterreiter_2001}, a quantitative time decay estimate of the entropy may be established. At the discrete level, the challenges lie in the preservation of the dissipation of the entropy and the derivation of discrete counterparts of the functional inequalities. In the present contribution, we address both of these issues.
Let $T>0$ be a time horizon, $\Omega$ be a polygonal connected open bounded subset of $\mathbb R^2$ and $Q_T = \Omega\times (0,T)$. The boundary $\Gamma = \partial\Omega$ is divided into two parts, $\Gamma = \Gamma^D\cup\Gamma^N$, which will be endowed with non-homogeneous Dirichlet and no-flux boundary conditions, respectively. In the following, we are interested in the numerical approximation of the solution $u\equiv u(\bm{x},t)$ of
\begin{subequations}\label{FPmodel}
\begin{align}
&\displaystyle\partial_t u\,+\,{ \rm div} {\bm J}\ =\ 0 \quad\mbox{in}\quad Q_T\,,\label{FPmodel:cons}\\
&\displaystyle {\bm J}\ =\ -{\bm \varLambda}(\nabla u \,+\, u \nabla V)\quad\mbox{in}\quad Q_T\,,\label{FPmodel:flux}\\
&\displaystyle{\bm J}\cdot {\bm n}\ =\ 0\quad\mbox{on}\quad \Gamma^N\times (0,T)\,,\label{FPmodel:CLneum}\\
&\displaystyle u\ =\ u^D\quad\mbox{in}\quad \Gamma^D\times (0,T)\,,\label{FPmodel:CLdir}\\
&\displaystyle u(\cdot,0)\ =\ u_0\quad\mbox{in}\quad\Omega\,,\label{FPmodel:CI}
\end{align}
\end{subequations}
where ${\bm n} $ denotes the outward unit normal to $ \partial \Omega$. We make the following assumptions on the data.
\begin{enumerate}\label{hyp}
\item[(A1)] The initial data $u_0$ is square-integrable and
non-negative, i.e., $u_0\in L^2(\Omega)$ and $u_0 \geq 0$.
In the pure Neumann case $\Gamma^D = \emptyset$, we further
assume that the initial data is non-trivial, i.e.
\begin{equation}\label{eq:M1}
M_1 = \int_{\Omega} u_0 {\rm d}{{\bm x}} >0.
\end{equation}
\item[(A2)] The exterior potential $V$ does not depend on time
and belongs to $C^1({\overline\Omega},\mathbb R)$.
\item[(A3)] If $\Gamma^D \neq \emptyset$, the boundary data $u^D$ corresponds to a thermal Gibbs equilibrium, i.e.
\begin{equation}\label{eq:thermal_eq_data}
u^D({\bm x}) = \rho e^{-V({\bm x})}, \qquad \forall {\bm x} \in \Gamma^D
\end{equation}
for some $\rho>0$. As a consequence, $\log u^D + V$ is constant on $\Gamma^D$.
\item[(A4)] The anisotropy tensor ${\bm \varLambda}$ is supposed
to be bounded (${\bm \varLambda} \in
L^{\infty}(\Omega)^{2\times2}$), symmetric and uniformly
elliptic. There exists $\lambda^M\geq\lambda_m>0$ such that
\begin{equation}\label{ellipticity}
\lambda_m\vert {\bm v}\vert^2\leq {\bm \varLambda}({\bm x}){\bm v}\cdot {\bm v}\leq \lambda^M\vert {\bm v}\vert^2 \quad \mbox{ for all ${\bm v}\in\mathbb R^2$ and almost all ${\bm x}\in \Omega$.}
\end{equation}
\end{enumerate}
The large time behavior of solutions to Fokker-Planck equations with isotropic diffusion (namely \eqref{FPmodel:cons}-\eqref{FPmodel:flux} with ${\bm \varLambda}={\bm I}$) has been widely studied by Carrillo, Toscani and collaborators thanks to relative entropy techniques, see \cite{carrillo_toscani_1998,Toscani_99,carrillo_etal_2001}. In these papers, the exponential decay of the entropy is established in the whole space $\Omega=\mathbb R^d$ or for special cases of boundary conditions ensuring that the steady state $u^{\infty}$ is a Gibbs equilibrium (or thermal equilibrium), which means $u^{\infty}=\lambda e^{-V}$, with $\lambda\in \mathbb R_+$. The case of more general Dirichlet boundary conditions and anisotropic diffusion, leading to different steady states, has recently been dealt with by Bodineau {\em et al.} in \cite{bodineau_2014_lyapunov}. The method is still based on relative entropy techniques.
When designing numerical schemes for the convection diffusion equations \eqref{FPmodel}, it is crucial to ensure that the scheme has a large time behavior similar to that of the continuous model. Indeed, this ensures the reliability of the numerical approximation in the large. It also upgrades local-in-time quantitative convergence estimates to global-in-time estimates (see Li and Liu \cite{li_2018_large}).
{The preservation of the long-time behavior at the discrete level starts with a structure-preserving design of the numerical scheme. This has been widely investigated for Fokker-Planck type models (see, non-exhaustively, \cite{scharfetter_1969_large, ilin_1969_difference, chang1970practical, larsen_1985_discretization, buet_2010_chang, liu_2012_entropy,cances_guichard_2017,CCHK, bailo2018fully, bianchini_2018_truly, pareschi2018structure, gosse_2020_aliasing}). However, the sole preservation of a stationary state, an entropy inequality or a well-balanced structure is not sufficient to rigorously derive explicit rates of convergence to equilibrium. Recently, there has been an effort to obtain quantitative estimates with explicit decay rates by means of discrete functional inequalities \cite{bessemoulin_chainais_2017,filbet_herda_2017,chainais_herda, li_2018_large,dujardin2018coercivity,bessemoulin2018hypocoercivity}. In this direction, the case of multi-dimensional anisotropic diffusion and general meshes has essentially not been dealt with yet, which leads us to the present contribution.}
In \cite{chainais_herda}, Chainais-Hillairet and Herda prove that a family of TPFA (Two-Point Flux Approximation) finite volume schemes for \eqref{FPmodel} with ${\bm \varLambda}={\bm I}$ satisfies the exponential decay towards the associated discrete equilibrium. This family of B-schemes includes the classical centered scheme, upwind scheme and Scharfetter-Gummel scheme (\cite{scharfetter_1969_large}). Let us mention that the Scharfetter-Gummel scheme is the only B-scheme of the family that preserves Gibbs/thermal equilibrium. Unfortunately, the B-schemes are based on a two-point flux approximation and they can only be used on restricted meshes. In order to deal with almost general meshes and with anisotropic tensors, Canc\`es and Guichard propose and analyze a nonlinear VAG scheme for the approximation of some generalizations of \eqref{FPmodel} in \cite{cances_guichard_2017}. In \cite{CCHK}, Canc\`es, Chainais-Hillairet and Krell establish the convergence of a free-energy diminishing discrete duality finite volume (DDFV) scheme for \eqref{FPmodel} with $\Gamma^D=\emptyset$. Some numerical experiments show the exponential decay of the numerical scheme towards the Gibbs/thermal equilibrium. In the present contribution, we establish this result theoretically.
{In order to prove our main results on the large-time behavior of nonlinear finite volume schemes, namely Theorem~\ref{th:expdec_TPFA_N} and Theorem~\ref{th:expdec_TPFA_DN} for TPFA schemes and Theorem~\ref{theo:expdec_DDFV} for DDFV schemes, we rely upon new discrete Poincaré-Wirtinger, Beckner and logarithmic-Sobolev inequalities that are established in Theorem~\ref{theo:func_ineq}. For previously existing results on discrete adaptation of functional inequalities (Poincaré, Poincaré-Wirtinger, Poincaré-Sobolev and Gagliardo-Nirenberg-Sobolev) for finite volume schemes we refer to the work by Bessemoulin-Chatard {\em et al.} \cite{bessemoulin_chainais_filbet_2015} and references therein. Concerning convex Sobolev inequalities (Beckner and log-Sobolev), we refer to Chainais-Hillairet {\em et al.} \cite{chainais_jungel_schuchnigg_2016} and Bessemoulin-Chatard and J\"ungel \cite{MBC_AJ_2014}. In the previous papers the reference measure in the inequality, which is related to the steady state of a corresponding convection-diffusion equation, is uniform. Recently there were occurrences of new discrete functional inequalities of Poincaré-Wirtinger type associated with discretizations of nontrivial reference measures (essentially Gaussian), see Dujardin {\em et al.} \cite{dujardin2018coercivity}, Bessemoulin-Chatard {\em et al.} \cite{bessemoulin2018hypocoercivity} and Li and Liu \cite{li_2018_large}. In the present contribution, we deal with the case of discretizations of any absolutely continuous positive measure (in the bounded domain $\Omega$) with density bounded from above and away from~$0$.}
\medskip
\noindent{\bf Outline of the paper.} As a first step, we focus on nonlinear TPFA finite volume schemes for \eqref{FPmodel} in the isotropic case. These schemes can be seen as the reduction of the nonlinear DDFV scheme of \cite{CCHK} to some specific meshes. In Section~\ref{sec:TPFA}, we present the schemes and we establish a discrete entropy/dissipation property and some \emph{a priori} estimates satisfied by a solution to the scheme. These estimates allow us to establish the existence of a solution to the scheme. Then, Section~\ref{sec:TPFA_LTB} is devoted to the study of the long time behavior of the nonlinear TPFA schemes: we establish the exponential decay towards equilibrium.
In Section~\ref{sec:DDFV}, we consider the nonlinear DDFV scheme introduced in \cite{CCHK} for the anisotropic case and almost general meshes and we also establish its exponential decay towards equilibrium. Then, in Section~\ref{sec:logSob}, we prove the various functional inequalities which are crucial in the proof of the exponential decay in the case of no-flux boundary conditions. Finally, Section~\ref{sec:num_exp} is dedicated to numerical experiments.
\section{The nonlinear two-point flux approximation (TPFA) finite volume schemes}\label{sec:TPFA}
In this section, we introduce a particular family of finite volume schemes for \eqref{FPmodel} in the isotropic case ${\bm \varLambda}={\bm I}$. They are based on a nonlinear two-point discretization of the following reformulation of the flux
\[
{\bm J}\ =\ -u\,\nabla (\log u +V)\,.
\]
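This reformulation follows from the elementary identity, valid wherever $u>0$,
\[
\nabla u \,+\, u\,\nabla V \;=\; u\left(\frac{\nabla u}{u} \,+\, \nabla V\right) \;=\; u\,\nabla\left(\log u + V\right)\,,
\]
so that the flux \eqref{FPmodel:flux} with ${\bm \varLambda}={\bm I}$ indeed reads ${\bm J}=-u\,\nabla(\log u + V)$.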
In the following, we start by presenting the schemes. Then, we establish some \emph{a priori} estimates, which finally lead to the existence of a solution to the scheme.
\subsection{Presentation of the schemes}
Let us first introduce the notation describing the mesh. The mesh $\mathcal M=(\mathcal T,\mathcal E,\mathcal P)$ of the domain $\Omega$ is given by a family $\mathcal T$ of open polygonal control volumes, a family $\mathcal E$ of edges, and a family $\mathcal P=({\bm x}_{K})_{K\in\mathcal T}$ of points such that ${\bm x}_K\in K$ for all $K\in\mathcal T$. As is classical for TPFA finite volume discretizations including diffusive terms, we assume that the mesh is admissible in the sense of \cite[Definition 9.1]{EGHbook}. This implies that the straight line between two neighboring cell centers $({\bm x}_{K},{\bm x}_{L})$ is orthogonal to the edge $\sigma=K|L$.\\
In the set of edges $\mathcal E$, we distinguish the interior edges $\sigma=K|L\in \E_{\text{int}}$ and the boundary edges $\sigma\in\E_{\text{ext}}$. Within the exterior edges, we distinguish the Dirichlet boundary edges included in $\Gamma^{D}$ from the Neumann (no-flux) boundary edges included in $\Gamma^{N}$: $\E_{\text{ext}}=\E_{\text{ext}}^{D}\cup\E_{\text{ext}}^{N}$. For a control volume $K\in\mathcal T$, we denote by $\E_{K}$ the set of its edges, which is also split into $\E_{K}=\E_{K,int}\cup\E_{K,ext}^D\cup\E_{K,ext}^N$. For each edge $\sigma\in\mathcal E$, there exists at least one cell $K\in\mathcal T$ such that $\sigma\in\E_{K}$.
Moreover, we define ${\bm x}_{\sigma}$ as the center of $\sigma$ for all $\sigma\in\mathcal E$.
In the sequel, $d(\cdot,\cdot)$ denotes the {Euclidean} distance in $\mathbb R^2$ and $m(\cdot)$ denotes {both the Lebesgue measure and the $1$-dimensional Hausdorff
measure in $\mathbb R^2$}. For all $K\in\mathcal T$ and $\sigma\in\mathcal E$, we set ${\rm m}_{{{K}}}=m(K)$ and ${\rm m}(\sigma)=m(\sigma)$. For all $\sigma\in\mathcal E$, we define ${\rm d}_{\sig}=d({\bm x}_{K},{\bm x}_{L})$ if $\sigma=K|L\in\E_{\text{int}}$ and ${\rm d}_{\sig}=d({\bm x}_{K},\sigma)$, {which denotes the Euclidean distance between $x_K$ and its orthogonal projection on the line containing $\sigma$,} if $\sigma\in\E_{\text{ext}}$, with $\sigma\in\E_{K}$. Then the transmissibility coefficient is defined by $\tau_{\sigma}={\rm m}(\sigma)/{\rm d}_{\sig}$, for all $\sigma\in\mathcal E$.
We assume that the mesh satisfies the following regularity constraint:
\begin{equation}\label{regmesh}
\text{There exists }\zeta >0\text{ such that }d({\bm x}_{K},\sigma)\geq \zeta\,{\rm d}_{\sig},\ \text{for all }K\in\mathcal T\text{ and }\sigma\in\E_{K}\,.
\end{equation}
Let $\Delta t>0$ be the time step. We define $N_{T}$ as the integer part of $T/\Delta t$ and set $t^{n}=n\Delta t$ for all $0\leq n\leq N_{T}$. The size of the mesh is defined by $\text{size}(\mathcal T)=\max_{K\in\mathcal T}\text{diam}(K)$, {where $\text{diam}(K) = \sup_{x,y\in K}d(x,y)$}, and we denote by $\delta=\max(\Delta t,\text{size}(\mathcal T))$ the size of the space--time discretization.
A finite volume scheme for a conservation law with unknown $u$ provides a vector ${\bm u}_{\mathcal T}=(u_{K})_{K\in\mathcal T}\in\mathbb R^{\theta}$ (with $\theta=\text{Card}(\mathcal T)$) of approximate values.
However, since there are Dirichlet conditions on a part of the boundary, we also need to define approximate values for $u$ at the corresponding boundary edges: ${\bm u}_{\mathcal E^{D}}=(u_{\sigma})_{\sigma\in\E_{\text{ext}}^D}\in\mathbb R^{\theta^{D}}$ (with $\theta^{D}=\text{Card}(\E_{\text{ext}}^D)$). Therefore, the vector containing the approximate values both in the control volumes and at the Dirichlet boundary edges is denoted by ${\bm u}=({\bm u}_{\mathcal T},{\bm u}_{\mathcal E^{D}})$.
For all $K\in\mathcal T$ and $\sigma\in\mathcal E_{\text{ext}}^D$, we introduce
$V_K=V({\bm x}_K)$ and $V_{\sigma}= V({\bm x}_\sigma)$,
and the associated ${\bm V}=({\bm V}_\mathcal T,{\bm V}_{\mathcal E^D})$. The boundary data $u^D$ is discretized by $u_\sigma=u^D({\bm x}_\sigma)$ for all $\sigma \in \mathcal E_{\text{ext}}^D$, so that $u_\sigma = \rho e^{-V_\sigma}$ according to (A3).
For any vector ${\bm u}=({\bm u}_{\mathcal T},{\bm u}_{\mathcal E^{D}})$, we define the neighbor unknown for all $K\in\mathcal T$ and all $\sigma\in\E_{K}$ to be
\begin{equation}\label{uKsig}
u_{K,\sigma}=\left\{\begin{array}{ll}
u_{L}& \text{ if } \sigma=K|L\in\E_{K,int},\\
u_{\sigma}& \text{ if } \sigma\in\E_{K,ext}^D,\\
u_{K} & \text{ if }\sigma\in\E_{K,ext}^N.
\end{array}\right.
\end{equation}
We also define the difference operators, for all $K\in\mathcal T$ and $\sigma\in\mathcal E_K$ by
\begin{equation*}
D_{K,\sigma}{\bm u}=u_{K,\sigma}-u_{K}\,, \qquad D_{\sigma}{\bm u}=|D_{K,\sigma}{\bm u}|\,.
\end{equation*}
{Observe that $D_{\sigma}{\bm u}$ does not depend on the control volume $K$ but only on the edge $\sigma$. Indeed, if $\sigma = K|L$ then $|D_{K,\sigma}{\bm u}|= |D_{L,\sigma}{\bm u}|$, if $\sigma\in\mathcal E_{\text{ext}}^N$ then $|D_{K,\sigma}{\bm u}|=0$ and if $\sigma\in\mathcal E_{\text{ext}}^D$ there is a unique $K$ such that $\sigma\in\mathcal E_K$.}
The family of schemes we consider in this section reads as follows:
\begin{subequations}\label{scheme}
\begin{align}
&{\rm m}_{{{K}}}\,\frac{u_K^{n+1}-u_K^n}{\Delta t} \,+\,\sum_{\sigma\in \mathcal E_K} {\mathcal F}_{K,\sigma}^{n+1}\ =\ 0\quad\text{for all }K\in\mathcal T\text{ and }n\geq 0\,,\label{scheme_glob}\\[.5em]
& {\mathcal F}_{K,\sigma}^{n+1}\ =\ -\tau_{\sigma} {\overline u}_{\sigma}^{n+1}\,D_{K,\sigma}(\log {\bm u}^{n+1} \,+\, {\bm V}) \quad \text{for all }K\in\mathcal T\,,\ \sigma \in\mathcal E_K\text{ and }n\geq 0\,,\label{scheme_flux}\\[.5em]
&u_\sigma^{n+1}\ =\ u_\sigma^D\quad\text{for all }\sigma \in \mathcal E_{\text{ext}}^D\text{ and }n\geq 0,\label{scheme_CLdir}\\[.5em]
&u_K^0=\frac{1}{{\rm m}_{{{K}}} }\int_K u_0({\bm x}) \,\mathrm{d}{\bm x}\quad\text{for all } K\in\mathcal T.\label{scheme_CI}
\end{align}
\end{subequations}
Let us first remark that the definition \eqref{uKsig} ensures that the numerical fluxes ${\mathcal F}_{K,\sigma}^{n+1}$ defined by \eqref{scheme_flux} vanish on the Neumann boundary edges.
It remains to define the values ${\overline u}_{\sigma}^{n+1}$ for the interior edges and the Dirichlet boundary edges. We define ${\overline u}_{\sigma}^{n+1}$ as a ``mean value'' of $u_K^{n+1}$ and $u_L^{n+1}$ if $\sigma=K|L$ or a mean value of $u_K^{n+1}$ and $u_\sigma^D$ if $\sigma\in\E_{K,ext}^D$. More precisely, we set
\begin{equation}\label{def:ubarsig}
{\overline u}_{\sigma}^{n+1}=\left\{\begin{array}{ll}
r(u_K^{n+1},u_L^{n+1})& \text{ if } \sigma=K|L\in\E_{K,int},\\[2.mm]
r(u_K^{n+1},u_\sigma^{D})& \text{ if } \sigma\in\E_{K,ext}^D,
\end{array}
\right.
\end{equation}
where $r:(0,+\infty)^2\to (0,+\infty)$ satisfies the following properties.
\begin{subequations}\label{hyp:g}
\begin{align}
& \mbox{$r$ is monotonically increasing with respect to both variables;}\label{hyp:g:monotony}\\
& \mbox{$r(x,x)=x$ for all $x\in (0,+\infty)$ and $r(x,y)=r(y,x)$ for all $(x,y)\in (0,+\infty)^2$;}\label{hyp:g:cons+cons}\\
&\mbox{$r(\lambda x,\lambda y)=\lambda r(x,y)$ for all $\lambda >0$ and all $(x,y)\in (0,+\infty)^2$;}\label{hyp:g:homogeneity}\\
&\mbox{$\displaystyle\frac{x-y}{\log x-\log y} \leq r(x,y)\leq \max(x,y)$ for all $(x,y)\in (0,+\infty)^2$, $x\neq y$.\label{hyp:g:bounds}}
\end{align}
\end{subequations}
Let us emphasize that one has for all $x,y>0$
\begin{equation}\label{eq:ineg_r}
\frac{x-y}{\log x-\log y}\ \leq\ \left(\frac{\sqrt{x}+\sqrt{y}}{2}\right)^2\ \leq\ \frac{x+y}{2}\leq \max(x,y)\,,
\end{equation}
and that each function appearing in the last sequence of inequalities satisfies the properties \eqref{hyp:g}.
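As an illustration, the chain of inequalities \eqref{eq:ineg_r} and the structural properties \eqref{hyp:g} can be checked numerically for the usual candidates. The sketch below (Python/NumPy; a small verification script, not part of the analysis) samples random positive pairs and tests the logarithmic, ``square-root'', arithmetic and maximum means:
\begin{verbatim}
import numpy as np

def log_mean(x, y):
    """Logarithmic mean (x-y)/(log x - log y), extended by x on the diagonal."""
    return np.where(np.isclose(x, y), x, (x - y) / (np.log(x) - np.log(y)))

def sqrt_mean(x, y):
    return ((np.sqrt(x) + np.sqrt(y)) / 2.0) ** 2

rng = np.random.default_rng(1)
x = rng.uniform(1e-3, 1e3, 10_000)
y = rng.uniform(1e-3, 1e3, 10_000)

chain = [log_mean(x, y), sqrt_mean(x, y), (x + y) / 2.0, np.maximum(x, y)]
assert all(np.all(a <= b * (1 + 1e-12)) for a, b in zip(chain, chain[1:]))

# Each candidate also satisfies r(x,x)=x, symmetry and 1-homogeneity.
lam = 3.7
assert np.allclose(log_mean(lam * x, lam * y), lam * log_mean(x, y))
print("inequality chain and homogeneity verified on random samples")
\end{verbatim}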
\subsection{Steady state of the scheme}
We say that ${\bm u}^\infty=({\bm u}_{\mathcal T}^\infty,{\bm u}_{\mathcal E^{D}}^\infty)$ is a steady state of the scheme \eqref{scheme} if it satisfies
\begin{equation}\label{scheme_stat_glob}
\sum_{\sigma\in \mathcal E_K} {\mathcal F}_{K,\sigma}^{\infty}\ =\ 0\quad\text{for all }K\in\mathcal T\,,
\end{equation}
with the steady flux defined for all $K\in\mathcal T$ and $\sigma \in\mathcal E_K$ as
\begin{equation}\label{scheme_stat_flux}
{\mathcal F}_{K,\sigma}^{\infty}\ =\ -\tau_{\sigma} {\overline u}_{\sigma}^{\infty}\,D_{K,\sigma}(\log {\bm u}^{\infty} \,+\, {\bm V})\,,
\end{equation}
as well as the boundary/compatibility conditions
\begin{equation}\
\left\{
\begin{array}{ll}\label{stat_boundary_compat}
\displaystyle u_\sigma^{\infty}\ =\ u_\sigma^D\quad\text{for all }\sigma \in \mathcal E_{\text{ext}}^D\,,&\text{ if }\mathcal E_{\text{ext}}^D\neq\emptyset\,,\\[.5em]
\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} u_K^\infty=\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} u_K^0=\int_\Omega u_0\ =:\ M^1\,,&\text{ if }\mathcal E_{\text{ext}}^D=\emptyset\,.
\end{array}
\right.
\end{equation}
In the case $\mathcal E_{\text{ext}}^D=\emptyset$, namely with full no-flux boundary conditions, the condition \eqref{stat_boundary_compat} is imposed to ensure uniqueness of the steady state and compatibility with the conservation of mass which is satisfied by the scheme. Indeed, one has
\begin{equation}\label{eq:masscons}
\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} u_K^n=\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} u_K^0=M^1,\quad \forall n\geq 0.
\end{equation}
In this case, the steady state associated with the scheme \eqref{scheme}
is given by
\begin{equation}\label{Neumann:steadystate}
u_K^{\infty}\ =\ \rho e^{-V_K}\,,\ \mbox{ with }\ \rho\ =\ M^1\,\left(\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} e^{-V_K}\right)^{-1}.
\end{equation}
In the case $\mathcal E_{\text{ext}}^D\neq\emptyset$, Assumption (A3) enforces the boundary conditions to be at thermal equilibrium, which means that there is a constant $\alpha^D$ such that for all $\sigma\in\mathcal E_{\text{ext}}^D$,
\begin{equation}\label{hyp:theq}
\log u_\sigma^D \,+\,V_\sigma\ =\ \alpha^D\,.
\end{equation}
Under this assumption, the steady-state has a similar form as in the case of pure no-flux boundary conditions. It is defined by
\begin{equation}\label{theq:steadystate}
u_K^{\infty}\ =\ \rho\, e^{-V_K}\,,\ \mbox{ with }\ \rho\ =\ \exp {\alpha^D}.
\end{equation}
Let us remark that in both cases, as $V\in C^1(\overline{\Omega},\mathbb R)$, the discrete steady state is bounded from above and below: there exist $M^{\infty}\geq m^{\infty}>0$ such that for all $K\in \mathcal T$
\begin{equation}\label{bound:steadystate}
m^{\infty}\,\leq\, u_K^{\infty}\,\leq\, M^{\infty}\,.
\end{equation}
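A direct consequence of \eqref{Neumann:steadystate} (or \eqref{theq:steadystate}) is that $\log u_K^{\infty}+V_K$ does not depend on $K$, so that every stationary flux \eqref{scheme_stat_flux} vanishes identically, whatever the choice of the mean ${\overline u}_\sigma^{\infty}$. A minimal numerical check (Python/NumPy; the one-dimensional mesh and the potential are illustrative assumptions):
\begin{verbatim}
import numpy as np

# Hypothetical uniform 1D mesh of (0,1) with pure no-flux boundary conditions.
N = 64
x = (np.arange(N) + 0.5) / N             # cell centres x_K
mK = np.full(N, 1.0 / N)                 # cell measures m_K
tau = np.full(N - 1, float(N))           # transmissibilities m(sigma)/d_sigma
V = 2.0 * x                              # illustrative monotone potential V_K

M1 = 1.0                                 # prescribed mass of the initial datum
rho = M1 / np.sum(mK * np.exp(-V))
u_inf = rho * np.exp(-V)                 # discrete steady state, Neumann case

g = np.log(u_inf) + V                    # equals log(rho) up to round-off
ubar = (u_inf[1:] - u_inf[:-1]) / (np.log(u_inf[1:]) - np.log(u_inf[:-1]))
flux_inf = -tau * ubar * (g[1:] - g[:-1])

print("mass of u_inf          :", np.sum(mK * u_inf))       # = M1
print("spread of log u_inf + V:", np.ptp(g))                 # ~ machine precision
print("max |stationary flux|  :", np.max(np.abs(flux_inf)))  # ~ machine precision
\end{verbatim}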
\subsection{Discrete entropy estimates}
Let $\Phi\in C^1(\mathbb R,\mathbb R)$ be a convex function satisfying $\Phi(1)=\Phi'(1)=0$.
We consider the discrete relative $\Phi$-entropy defined by
\begin{equation}\label{def:entropy}
{\mathbb E}_{\Phi}^n\ =\ \displaystyle\sum_{K\in\mathcal T}{\rm m}_{{{K}}} u_K^{\infty} \Phi\left(\displaystyle\frac{u_K^{n}}{u_K^{\infty}}\right)\,\quad \forall n\geq 0.
\end{equation}
We show in the next proposition that if the discrete equilibrium is a Gibbs/thermal equilibrium, the scheme dissipates the discrete relative $\Phi$-entropies along time.
\begin{prop}\label{prop:entropydissipation}
Let us assume that either $\mathcal E_{\text{ext}}^D=\emptyset$ or $\mathcal E_{\text{ext}}^D\neq\emptyset$ with \eqref{hyp:theq}. We also assume that the scheme \eqref{scheme}-\eqref{def:ubarsig} has a solution $({\bm u}^n)_{n\geq 0}$ which is positive at each time step, namely $u_K^n>0$ for all $K\in\mathcal T$ and $n\geq 0$. Then the discrete relative $\Phi$-entropies defined by \eqref{def:entropy} are dissipated along time. Namely, for all $n\geq 0$ one has
\begin{equation}\label{ineq:entropydissipation}
\displaystyle\frac{{\mathbb E}_{\Phi}^{n+1}-{\mathbb E}_{\Phi}^n}{\Delta t} \,+\,{\mathbb I}_\Phi^{n+1} \ \leq\ 0\,,
\end{equation}
with
\begin{equation}\label{def:dissipation}
{\mathbb I}_{\Phi}^{n+1}\,=\,\sum_{\sigma\in\mathcal E_{\text{int}}\cup \mathcal E_{\text{ext}}^D} \tau_{\sigma} {\overline u}_{\sigma}^{n+1}\left(D_{K,\sigma} \log \frac{{\bm u}^{n+1}}{{\bm u}^{\infty}}\right)\left(D_{K,\sigma} \Phi'\left(\frac{{\bm u}^{n+1}}{{\bm u}^{\infty}}\right)\right)\ \geq\ 0.
\end{equation}
\end{prop}
\begin{proof}
Regardless of the hypothesis on $\mathcal E_{\text{ext}}^D$, the steady-state can be written as $u_K^{\infty}=\rho e^{-V_K}$ with $\rho\in (0,+\infty)$, as shown in \eqref{Neumann:steadystate} and \eqref{theq:steadystate}. Therefore, the numerical fluxes defined by \eqref{scheme_flux} can be rewritten as
\[
{\mathcal F}_{K,\sigma}^{n+1}\ =\ -\tau_{\sigma} {\overline u}_{\sigma}^{n+1} D_{K,\sigma}\left(\log ({\bm u}^{n+1}/{\bm u}^\infty)\right)\,,
\]
for all $K\in\mathcal T$, $\sigma \in \mathcal E_K$ and $n\geq 0$. Besides, due to the convexity of $\Phi$, one has
\[
\displaystyle{\mathbb E}_{\Phi}^{n+1}-{\mathbb E}_{\Phi}^n\ \leq\ \displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} (u_K^{n+1}-u_K^n) \Phi'(u_K^{n+1}/u_K^{\infty})\,.
\]
Then, multiplying the scheme \eqref{scheme_glob} by $\Phi'(u_K^{n+1}/u_K^{\infty})$, summing over $K\in\mathcal T$ and applying a discrete integration by parts yields the expected result. Finally, from the monotonicity of the functions $\log$ and $\Phi'$ we infer that ${\mathbb I}_{\Phi}^{n+1}$ is non-negative.
\end{proof}
The first consequence of Proposition \ref{prop:entropydissipation} is the decay of the relative $\Phi$-entropy, so that
\begin{equation}\label{entropybound}
{\mathbb E}_{\Phi}^n\ \leq\ {\mathbb E}_{\Phi}^0, \quad \text{for all}\ n\geq 0\,.
\end{equation}
Then, we deduce some uniform $L^{\infty}$-bounds on the solution to the scheme \eqref{scheme}.
\begin{prop}\label{prop:Linfbounds}
Under the assumptions of Proposition~\ref{prop:entropydissipation}, one has
\begin{equation}\label{est:Linfbounds}
m^{\infty}\min \left(1,\displaystyle\min_{K\in\mathcal T} \frac{u_K^0}{u_K^{\infty}}\right)\leq u_K^n\leq M^{\infty}\max \left(1,\displaystyle\max_{K\in\mathcal T} \frac{u_K^0}{u_K^{\infty}}\right)\,,
\end{equation}
for all $K\in\mathcal T$ and $n\geq 0$.
\end{prop}
\begin{proof}
The proof is similar to that of Lemma 4.1 in \cite{filbet_herda_2017}. It is a direct consequence of \eqref{entropybound} applied with specific choices of the function $\Phi$: take $\Phi(x)=(x-M)^+$ and $\Phi(x)=(x-m)^-$ with $M=\max(1,\max_{K} u_K^0/u_K^{\infty})$ and $m=\min(1,\min_{K} u_K^0/u_K^{\infty})$, so that in both cases $0\leq{\mathbb E}_{\Phi}^n\leq{\mathbb E}_{\Phi}^0=0$ for all $n\geq 0$, which leads respectively to the upper and the lower bound in \eqref{est:Linfbounds}.
\end{proof}
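
To make the first case explicit: with $\Phi(x)=(x-M)^+$ and $M$ as above, one has $u_K^0\leq M\,u_K^{\infty}$ for every $K\in\mathcal T$, hence ${\mathbb E}_{\Phi}^0=0$; since every term ${\rm m}_{{{K}}}\, u_K^{\infty}\,(u_K^n/u_K^{\infty}-M)^+$ is non-negative and their sum is bounded by ${\mathbb E}_{\Phi}^0=0$, each of them vanishes, that is $u_K^n\leq M\,u_K^{\infty}\leq M^{\infty}\max\left(1,\max_K u_K^0/u_K^{\infty}\right)$, which is the upper bound in \eqref{est:Linfbounds}; the lower bound follows in the same way with $\Phi(x)=(x-m)^-$.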
\subsection{Existence of a solution to the scheme}
The numerical scheme \eqref{scheme}-\eqref{def:ubarsig} amounts, at each time step, to solving a nonlinear system of equations. The existence of a solution to the scheme is stated in Proposition~\ref{theo:existence}. It is a direct consequence of the \emph{a priori} $L^{\infty}$-estimates given in Proposition \ref{prop:Linfbounds}. The proof relies on a topological degree argument (Leray--Schauder fixed point theorem) \cite{leray_schauder_1934, deimling_1985_nonlinear, droniou_2018_gradient} and is omitted here.
\begin{prop}\label{theo:existence}
Let us assume that either $\mathcal E_{\text{ext}}^D=\emptyset$ or $\mathcal E_{\text{ext}}^D\neq\emptyset$ with \eqref{hyp:theq}. We also assume that the initial condition satisfies \eqref{hyp:ci}. Then, the scheme \eqref{scheme}-\eqref{def:ubarsig} has a solution $({\bm u}^n)$ for all $n\geq 0$, which satisfies the uniform $L^{\infty}$-bounds \eqref{est:Linfbounds}.
\end{prop}
\begin{rema}
The lower bound in \eqref{est:Linfbounds} is positive if there is a positive constant $m_0$ such that
\begin{equation}\label{hyp:ci}
u_K^0\,\geq\, m_0\,>\,0\,,\ \mbox{for all}\ K\in\mathcal T,
\end{equation}
which is not necessarily ensured by the assumption (A1) on the initial data $u_0$. If the initial data $u_0$ vanishes on some regions, so that \eqref{hyp:ci} is not satisfied, it is still possible to obtain a positive lower bound at each time step $n\geq 1$, but this bound then depends on the discretization parameters. Instead of the bound on the entropy, the proof uses the control of the entropy dissipation, also provided by Proposition \ref{prop:entropydissipation}. We refer to \cite[Lemma 3.5]{CCHK} for the details of the proof in this case. This weaker estimate is sufficient to show the existence of a solution ${\bm u}^n$ to the scheme.
\end{rema}
\section{Large time behavior of the nonlinear TPFA finite volume schemes}\label{sec:TPFA_LTB}
In this section, we establish the exponential decay of the solution $({\bm u}^n)_{n\geq 0}$ to the scheme \eqref{scheme}-\eqref{def:ubarsig}, discretizing \eqref{FPmodel} in the particular case $\bm{\varLambda} = \bm{I}$, towards the thermal equilibrium ${{\bm u}}^\infty$. To proceed, we first prove the exponential decay of some relative entropies ${\mathbb E}_\Phi^n$ towards $0$. We shall focus on the Boltzmann-Gibbs entropy generated by
\begin{equation}\label{eq:phi1}
\Phi_1(s)\ =\ s\log s -s +1\,,
\end{equation}
and the Tsallis entropies generated by
\begin{equation}\label{eq:phip}
\Phi_p(s)\ =\ \frac{s^p-ps}{p-1}+1\,,
\end{equation}
for $p\in(1,2]$. The methodology consists in establishing a so-called entropy-entropy dissipation inequality. More precisely, one wants to show the existence of some $\nu>0$ such that
\begin{equation}\label{ineq:EI}
{\mathbb I}_\Phi^{n+1}\ \geq\ \nu {\mathbb E}_\Phi^{n+1},\quad \forall n\geq 0.
\end{equation}
This is done thanks to discrete functional inequalities. In the case of complete Neumann (no-flux) boundary conditions we need new inequalities that are proved in Section~\ref{sec:logSob}. Depending on the parameter $p$, we will use a logarithmic Sobolev inequality ($p=1$), a Beckner inequality ($p\in(1,2)$) or a Poincaré-Wirtinger inequality ($p=2$). In the case of mixed Dirichlet-Neumann boundary conditions, we require a more classical discrete Poincaré inequality.
Once one obtains \eqref{ineq:EI}, due to the entropy/entropy dissipation inequality \eqref{ineq:entropydissipation}, we get that
$
{\mathbb E}_\Phi^{n}\leq(1+\nu \Delta t)^{-n} {\mathbb E}_\Phi^{0}$ for all $n\geq0$.
Thus we deduce the following weaker but perhaps more explicit bound: for any $k>0$, if $\Delta t\leq k$, then
$
{\mathbb E}_\Phi^{n}\ \leq\ e^{-{\tilde\nu} t^n}{\mathbb E}_\Phi^{0}
$
where the rate is given by $\tilde\nu=\log (1+\nu k)/k$.
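Indeed, combining \eqref{ineq:entropydissipation} with \eqref{ineq:EI} gives $(1+\nu\Delta t)\,{\mathbb E}_\Phi^{n+1}\leq {\mathbb E}_\Phi^{n}$, and the geometric decay follows by induction. Moreover, since $t^n=n\Delta t$ and $s\mapsto\log(1+\nu s)/s$ is nonincreasing on $(0,+\infty)$, one has for $\Delta t\leq k$
\[
(1+\nu\Delta t)^{-n}\,=\,\exp\left(-t^n\,\frac{\log(1+\nu\Delta t)}{\Delta t}\right)\,\leq\,\exp\left(-t^n\,\frac{\log(1+\nu k)}{k}\right)\,=\,e^{-\tilde\nu\, t^n}\,.
\]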
\subsection{The case of Neumann boundary conditions}
In this section we show the exponential decay towards the thermal equilibrium in the case of Neumann (no-flux) boundary conditions.
\begin{theo}\label{th:expdec_TPFA_N}
Let us assume that $\mathcal E_{\text{ext}}^D=\emptyset$. Then for all $p\in[1,2]$, there exists $\nu_p$ depending only on the domain $\Omega$, the regularity of the mesh $\zeta$, the mass of the initial condition $u_0$ (only in the case $p=1$) and the potential $V$ ({\em via} the steady state ${{\bm u}}^\infty$), such that,
\begin{equation}\label{expdec_Ep}
{\mathbb E}_p^{n}\ \leq\ (1\,+\,\nu_p \Delta t)^{-n}\, {\mathbb E}_p^{0}\,,\quad \forall n\geq 0\,.
\end{equation}
Thus for any $k>0$, if $\Delta t\leq k$, one has for all $n\geq0$ that ${\mathbb E}_p^{n}\leq e^{-{\tilde \nu_p} t^n}{\mathbb E}_p^{0}$ with ${\tilde\nu}_p=\log (1+\nu_p k)/k$.
\end{theo}
\begin{proof}
By definition \eqref{def:dissipation}, the discrete entropy dissipation is given by
\[
{\mathbb I}_p^{n+1}=\displaystyle\sum_{\sigma\in\mathcal E_{\text{int}}}\tau_\sigma {\overline u}_{\sigma}^{n+1} \left(D_\sigma \log({\bm u}^{n+1}/{\bm u}^{\infty})\right)\left(D_\sigma \Phi_p'({\bm u}^{n+1}/{\bm u}^{\infty})\right)\,,
\]
for all $n\geq0$. It can be seen as the discrete counterpart of
\[
\int_\Omega u\nabla \log (u/u^{\infty})\cdot\nabla \Phi_p'(u/u^{\infty}) \mathrm{d}{\bm x} = \frac 4 p \int_\Omega u^{\infty}|\nabla (u/u^{\infty})^{p/2}|^2 \mathrm{d}{\bm x}.
\]
Let us introduce a discrete counterpart of this last quantity. For all $n\geq 0$, let
\[
{\widehat {\mathbb I}}_p^{n+1}\,=\,\frac{4}{p}\displaystyle\sum_{\sigma\in\mathcal E_{\text{int}}}\tau_\sigma {\overline u}_{\sigma}^{\infty} \left(D_\sigma ({\bm u}^{n+1}/{\bm u}^{\infty})^{p/2}\right)^2\,,
\]
with
\[
{\overline u}_{\sigma}^{\infty}\,=\,\min (u_K^{\infty},u_L^{\infty})\ \mbox{for}\ \sigma=K|L\,.
\]
Let us prove now that
\begin{equation}\label{ineg:I_Ihat}
{\widehat {\mathbb I}}_p^{n+1}\leq {{\mathbb I}}_p^{n+1},\quad \forall n\geq 0.
\end{equation}
The proof is based on two elementary inequalities. Let $x,y>0$. The first inequality is
\[
4\vert \sqrt{x}-\sqrt{y}\vert^2\leq (x-y)(\log x-\log y).
\]
The second one is given by
\[
(\alpha+\beta)^2(y^\alpha-x^\alpha)(y^\beta-x^\beta)\geq 4\alpha\beta\left( y^{(\alpha+\beta)/2}-x^{(\alpha+\beta)/2}\right)^2\]
and holds for all $\alpha,\beta >0$. We are interested in the case $\alpha=p-1$ and $\beta =1$. The reader may find a proof in \cite[Lemma 19]{chainais_jungel_schuchnigg_2016}. Altogether, it yields that for all $p\in[1,2]$, one has
\begin{equation}\label{eq:ineg_func}
\frac{4}{p}(x^{p/2}-y^{p/2})^2\ \leq\ (x-y)(\Phi_p'(x)-\Phi_p'(y))\,.
\end{equation}
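Indeed, for $p\in(1,2]$, the second inequality applied with $\alpha=p-1$ and $\beta=1$ gives $p^2\,(x^{p-1}-y^{p-1})(x-y)\,\geq\,4(p-1)\left(x^{p/2}-y^{p/2}\right)^2$, and since $\Phi_p'(x)-\Phi_p'(y)=\frac{p}{p-1}\left(x^{p-1}-y^{p-1}\right)$, this is exactly \eqref{eq:ineg_func}; for $p=1$, $\Phi_1'(x)-\Phi_1'(y)=\log x-\log y$ and \eqref{eq:ineg_func} reduces to the first inequality.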
Since $r$ satisfies \eqref{hyp:g:bounds}, this implies that
\[
\frac{4}{p}(x^{p/2}-y^{p/2})^2\leq r(x,y)(\log x-\log y)(\Phi_p'(x)-\Phi_p'(y))
\]
for all $x,y>0$. Therefore, for every edge $\sigma\in\mathcal E_{\text{int}}$ with $\sigma =K|L$, we have
\begin{equation} \label{I_Ihat_1}
\frac{4}{p}\left(D_{\sigma}\left(\frac{{\bm u}^{n+1}}{{\bm u}^{\infty}}\right)^{p/2}\right)^2\leq r\left(\frac{u_K^{n+1}}{u_K^{\infty}},\frac{u_L^{n+1}}{u_L^{\infty}}\right)\left( D_{\sigma} \log \left(\frac{{\bm u}^{n+1}}{{\bm u}^{\infty}}\right)\right)\left( D_{\sigma} \Phi_p'\left(\frac{{\bm u}^{n+1}}{{\bm u}^{\infty}}\right)\right).
\end{equation}
But thanks to the homogeneity \eqref{hyp:g:homogeneity} of $r$, we have
\begin{equation} \label{I_Ihat_2}
{\overline u}_{\sigma}^{\infty}r\left(\frac{u_K^{n+1}}{u_K^{\infty}},\frac{u_L^{n+1}}{u_L^{\infty}}\right)=
r\left({\overline u}_{\sigma}^{\infty}\frac{u_K^{n+1}}{u_K^{\infty}},{\overline u}_{\sigma}^{\infty}\frac{u_L^{n+1}}{u_L^{\infty}}\right)\leq r(u_K^{n+1},u_L^{n+1}),
\end{equation}
because of the definition of ${\overline u}_{\sigma}^{\infty}$. We then deduce \eqref{ineg:I_Ihat} from \eqref{I_Ihat_1} and \eqref{I_Ihat_2}.
In order to establish \eqref{ineq:EI}, we just need to prove that ${\widehat{\mathbb I}}_p^{n+1}\geq \nu_p\,{\mathbb E}_p^{n+1}$ for all $n\geq 0$. This relation is a consequence of the discrete log-Sobolev and Beckner inequalities stated in Proposition \ref{prop:func_ineq}. Indeed, let us apply \eqref{ineg:logSob_2} to ${\bm v}={\bm u}^{n+1}$ and ${\bm v}^{\infty}={\bm u}^{\infty}$. We get
$
{\mathbb E}_1^{n+1} \leq C_{LS} \left(M^\infty\,M^1\right)^{\frac{1}{2}}
{\widehat {\mathbb I}}_1^{n+1}/(\zeta^2m^\infty).
$
It yields
\[
\nu_1= \frac{\zeta^2}{C_{LS}}\,\frac{m^\infty}{(M^\infty\,M^1)^{\frac{1}{2}}}\,.
\]
Similarly, by applying \eqref{ineg:Beck_2} one gets the desired inequality with
\[
\nu_p = (p-1)\,\frac{\zeta}{C_{B}}\,\frac{m^\infty}{M^\infty}\,.
\]
This concludes the proof.
\end{proof}
\begin{cor}
Under the assumptions of Theorem~\ref{th:expdec_TPFA_N}, one has
\begin{equation}\label{expdec_L2_N}
\sum_{K\in\mathcal T}m_K\,|u_K^n-u_K^\infty|^2\ \leq\ \mathbb{E}_2^0\,M^\infty\,e^{-\tilde{\nu}_2\,t^n}
\end{equation}
and
\begin{equation}\label{expdec_L1_N}
\left(\sum_{K\in\mathcal T}m_K\,|u_K^n-u_K^\infty|\right)^2\ \leq\ 2\,\mathbb{E}_1^0\,M^1\,e^{-\tilde{\nu}_1\,t^n}\,.
\end{equation}
\end{cor}
\begin{proof}
The decay \eqref{expdec_L2_N} in $L^2$-norm (\emph{resp.} \eqref{expdec_L1_N} in $L^1$-norm) is a direct consequence of \eqref{expdec_Ep} and the Cauchy-Schwarz inequality (\emph{resp.} the Csiszár-Kullback inequality, see Lemma~\ref{lem:CK}).
\end{proof}
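
Let us detail, for instance, the $L^2$ estimate: since $\Phi_2(s)=(s-1)^2$, one has $m_K|u_K^n-u_K^\infty|^2=m_K\,(u_K^\infty)^2\,\Phi_2(u_K^n/u_K^\infty)\leq M^\infty\,m_K\,u_K^\infty\,\Phi_2(u_K^n/u_K^\infty)$ for every $K\in\mathcal T$, so that summing over $K$ yields $\sum_{K\in\mathcal T}m_K\,|u_K^n-u_K^\infty|^2\leq M^\infty\,{\mathbb E}_2^n$, and \eqref{expdec_L2_N} follows from \eqref{expdec_Ep} with $p=2$.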
\subsection{The case of Dirichlet-Neumann boundary conditions}
In this section we show the exponential decay towards the thermal equilibrium in the case of mixed Dirichlet-Neumann boundary conditions.
\begin{theo}\label{th:expdec_TPFA_DN}
Let us assume that $\mathcal E_{\text{ext}}^D\neq\emptyset$. Then, for all $p\in (1,2]$, there exists $\kappa_p$ depending only on $p$, $\Omega$, $\Gamma^D$, $\zeta$, the boundary condition $u^D$ and the potential $V$, such that
\begin{equation}\label{expdec_Ep_DN}
{\mathbb E}_p^{n}\ \leq\ (1\,+\,\kappa_p \Delta t)^{-n}\, {\mathbb E}_p^{0}\,,\quad \forall n\geq 0\,.
\end{equation}
Thus for any $k>0$, if $\Delta t\leq k$, one has for all $n\geq0$ that ${\mathbb E}_p^{n}\leq e^{-{\tilde \kappa_p} t^n}{\mathbb E}_p^{0}$ with ${\tilde\kappa}_p=\log (1+\kappa_p k)/k$.
\end{theo}
\begin{proof}
The proof begins in the same fashion as in the case of Neumann boundary conditions. The expressions of the dissipation change slightly, as some boundary terms are now taken into account. However, with the same arguments one still has ${\widehat {\mathbb I}}_p^{n+1}\leq {{\mathbb I}}_p^{n+1}$ with
\[
{\mathbb I}_p^{n+1}=\displaystyle\sum_{\sigma\in\mathcal E_{\text{int}}\cup \mathcal E_{\text{ext}}^D}\tau_\sigma {\overline u}_{\sigma}^{n+1} \left(D_\sigma \log({\bm u}^{n+1}/{\bm u}^{\infty})\right)\left(D_\sigma \Phi_p'({\bm u}^{n+1}/{\bm u}^{\infty})\right)\,,
\]
and
\[
{\widehat {\mathbb I}}_p^{n+1}\,=\,\frac{4}{p}\displaystyle\sum_{\sigma\in\mathcal E_{\text{int}}\cup \mathcal E_{\text{ext}}^D}\tau_\sigma {\overline u}_{\sigma}^{\infty} \left(D_\sigma ({\bm u}^{n+1}/{\bm u}^{\infty})^{p/2}\right)^2\,.
\]
Then the proof differs as we are going to use a different functional inequality in order to establish a relation between ${\mathbb E}_p^{n+1}$ and ${\mathbb I}_p^{n+1}$ of the form \eqref{ineq:EI}. Indeed, we apply a discrete Poincar\'e inequality (see for instance \cite[Theorem 4.3]{bessemoulin_chainais_filbet_2015}). It ensures the existence of a constant $C_P$ depending only on $\Gamma^D$ and $\Omega$, such that
$$
\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} \left(\left(\frac{u_K^{n+1}}{u_K^\infty}\right)^{\frac{p}{2}}-1\right)^2\leq \frac{(C_{P})^2}{\zeta}\displaystyle\sum_{\sigma\in\mathcal E_{\text{int}}\cup \mathcal E_{\text{ext}}^D}\tau_\sigma \left(D_\sigma \left(\frac{{\bm u}^{n+1}}{{\bm u}^{\infty}}\right)^{\frac{p}{2}}\right)^2.
$$
Therefore, using the bounds \eqref{bound:steadystate}, we obtain:
$$
{\mathbb I}_p^{n+1}\geq \frac{4}{p}\frac{\zeta}{(C_P)^2}\frac{m^\infty}{(M^{\infty})^p}\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} \left( (u_K^{n+1})^{\frac{p}{2}}- (u_K^{\infty})^{\frac{p}{2}}\right)^2.
$$
But, for all $p\in (1,2]$ and all $x,y>0$, we have the elementary inequality
$$
(x^{p/2}-y^{p/2})^2\geq x^p-y^p-py^{p-1}(x-y)\,;
$$
indeed, the difference between its two sides vanishes at $x=y$, and its derivative with respect to $x$, namely $p\,y^{p/2}\,(y^{p/2-1}-x^{p/2-1})$, has the sign of $x-y$ since $p\leq 2$. Applying it with $x=u_K^{n+1}$ and $y=u_K^{\infty}$, and noticing that
$$
x^p-y^p-py^{p-1}(x-y)\,=\,(p-1)\,(u_K^{\infty})^{p-1}\,u_K^{\infty}\,\Phi_p\!\left(\frac{u_K^{n+1}}{u_K^{\infty}}\right)\,\geq\,(p-1)\,(m^\infty)^{p-1}\,u_K^{\infty}\,\Phi_p\!\left(\frac{u_K^{n+1}}{u_K^{\infty}}\right),
$$
it yields, after multiplication by ${\rm m}_{{{K}}}$ and summation over $K\in\mathcal T$,
$$
\displaystyle\sum_{K\in\mathcal T} {\rm m}_{{{K}}} \left( (u_K^{n+1})^{\frac{p}{2}}- (u_K^{\infty})^{\frac{p}{2}}\right)^2 \geq (p-1) (m^\infty)^{p-1}{\mathbb E}_p^{n+1}
$$
and finally ${\mathbb I}_p^{n+1}\geq \kappa_p {\mathbb E}_p^{n+1}$ with
\begin{equation}\label{eq:kappap}
\kappa_p=\frac{4(p-1)}{p}\frac{\zeta}{(C_P)^2}\left(\frac{m^\infty}{M^{\infty}}\right)^p
\end{equation}
which concludes the proof of Theorem \ref{th:expdec_TPFA_DN}.
\end{proof}
\begin{cor}
Under the assumptions of Theorem~\ref{th:expdec_TPFA_DN}, one has
\begin{equation}\label{expdec_L2_DN}
\sum_{K\in\mathcal T}m_K\,|u_K^n-u_K^\infty|^2\ \leq\ \mathbb{E}_2^0\,M^\infty\,e^{-\tilde{\kappa}_2\,t^n}\,.
\end{equation}
\end{cor}
\begin{rema}
The restriction $p>1$ in Theorem~\ref{th:expdec_TPFA_DN} does not prevent the entropy $\mathbb{E}_1^n$ from decaying exponentially fast in time. Indeed it trivially does since $\Phi_1\leq\Phi_2$ and thus $\mathbb{E}_1^n\ \leq\ \mathbb{E}_2^n$, so that
\[
\mathbb{E}_1^n\ \leq\ \mathbb{E}_2^0 (1\,+\,\kappa_2 \Delta t)^{-n}\,.
\]
However this estimate is not as sharp as \eqref{expdec_Ep_DN}: unlike \eqref{expdec_Ep_DN}, it is not saturated at $n=0$. In the same way one could show that any sub-quadratic $\Phi$-entropy decays at least as fast as $\mathbb{E}_2^n$. The same observation suggests that the degeneracy of $\kappa_p$ (and of $\nu_p$) as $p\to1$ is only technical.
\end{rema}
\begin{rema}
It is unclear which functional inequality should be used in the case $p=1$ with Dirichlet-Neumann boundary conditions. This was already noticed in \cite[Remark 3.1]{bodineau_2014_lyapunov}.
\end{rema}
\section{Large time behavior of discrete duality finite volume (DDFV) schemes}\label{sec:DDFV}
\subsection{Meshes and set of unknowns}
In order to introduce the DDFV scheme from \cite{CCHK}, we need to introduce three different meshes -- the primal mesh,
the dual mesh and the diamond mesh -- and some associated notations.
The primal mesh denoted $\overline{\mathcal M}$ is composed of the interior primal mesh $\mathcal M$ (a partition of $\Omega$ with polygonal control volumes) and the set $\partial\mathcal M$ of boundary edges seen as degenerate control volumes.
For all $K\in \overline{\mathcal M}$, we define ${\bm x}_K$ the center of $K$.
To any vertex ${\bm x}_{\ke}$ of the primal mesh satisfying ${\bm x}_{\ke}\in \Omega$, we associate a polygonal control volume ${ {K}^*}$ defined by connecting all the centers of the primal cells sharing ${\bm x}_{\ke}$ as vertex. The set of such control volumes is the interior dual mesh denoted $\mathcal M^*$. To any vertex ${\bm x}_{\ke}\in \partial \Omega$, we define a polygonal control volume ${ {K}^*}$ by connecting the centers of gravity of the interior primal cells and the midpoints of the boundary edges sharing ${\bm x}_{\ke}$ as vertex. The set of such control volumes is the boundary dual mesh, denoted $\partial\mathcal M^*$. Finally, the dual mesh is $\Mie\cup\dr\Mie$, denoted by $\overline{\mathfrak{M}^*}$.
{An illustration in the case of a triangular primal mesh is provided in Figure~\ref{fig_mesh}.}
\begin{figure}[htb]
\begin{tabular}{cc}
\begin{tikzpicture}[scale=0.6]
\draw[line width=1pt, color=purple] (1,1)--(1,7)--(7,7)--(7,1)--cycle;
\draw[line width=1pt,densely dashed] (1,1)--(2.5,3)--(3.5,1)--(5.5,3.5)--(7,1);
\draw[line width=1pt,densely dashed] (1,7)--(3.,5)--(4.5,7)--(5.5,3.5)--(7,7);
\draw[line width=1pt,densely dashed] (7,4)--(5.5,3.5)--(2.5,3)--(1,4.5)--(3.,5)--(2.5,3);
\draw[line width=1pt,densely dashed] (3.,5)--(5.5,3.5);
\draw[line width=1pt,densely dashed] (-1,5.5)--(0,5.5)--(0,4.5)--(-1,4.5)--cycle;
\draw[line width=1pt, color=purple] (-1.,3.5)--(0,3.5);
\node[left] at (-1.2,5){\color{black} $\mathcal M$};
\node[left] at (-1.2,3.5){$\partial\mathcal M$};
\end{tikzpicture}&
\begin{tikzpicture}[scale=0.6]
\draw[line width=1pt, color=purple] (1,1)--(1,7)--(7,7)--(7,1)--cycle;
\draw[line width=.5pt,densely dashed] (1,1)--(2.5,3)--(3.5,1)--(5.5,3.5)--(7,1);
\draw[line width=.5pt,densely dashed] (1,7)--(3.,5)--(4.5,7)--(5.5,3.5)--(7,7);
\draw[line width=.5pt,densely dashed] (7,4)--(5.5,3.5)--(2.5,3)--(1,4.5)--(3.,5)--(2.5,3);
\draw[line width=.5pt,densely dashed] (3.,5)--(5.5,3.5);
\draw[line width=1.pt,color=blue](1.5,2.83)--(2.17,4.17)--(3.67,3.83)--(3.83,2.5)--(2.33,1.67)--cycle;
\draw[line width=1pt,color=blue](2.25,1)--(2.33,1.67);
\draw[line width=1pt,color=blue](3.83,2.5)--(5.33,1.83);
\draw[line width=1pt,color=blue](5.25,1)--(5.33,1.83)--(6.5,2.83)--(7,2.5);
\draw[line width=1pt,color=blue](1,2.75)--(1.5,2.83);
\draw[line width=1pt,color=blue](3.67,3.83)--(4.33,5.17)--(5.66,5.83)--(6.5,4.83)--(6.5,2.83);
\draw[line width=1pt,color=blue](7,5.5)--(6.5,4.83);
\draw[line width=1pt,color=blue](5.66,5.83)--(5.75,7);
\draw[line width=1pt,color=blue](1,5.75)--(1.67,5.5)--(2.83,6.33)--(2.75,7);
\draw[line width=1pt,color=blue](1.67,5.5)--(2.17,4.17);
\draw[line width=1pt,color=blue](2.83,6.33)--(4.33,5.17);
\filldraw[color=blue,pattern=dots,pattern color=blue](1.5,2.83)--(2.17,4.17)--(3.67,3.83)--(3.83,2.5)--(2.33,1.67)--cycle;
\filldraw[color=blue,pattern=dots,pattern color=blue](3.83,2.5)--(5.33,1.83)--(6.5,2.83)--(6.5,4.83)--(5.66,5.83)--(4.33,5.17)--(3.67,3.83)--cycle;
\filldraw[color=blue,pattern=dots,pattern color=blue](2.17,4.17)--(3.67,3.83)--(4.33,5.17)--(2.83,6.33)--(1.67,5.5)--(2.17,4.17)--cycle;
\filldraw[color=blue,pattern=dots,pattern color=blue,line width=1pt](8,5.5)--(9,5.5)--(9,4.5)--(8,4.5)--cycle;
\draw[color=blue,line width=1pt](8,4)--(9,4)--(9,3)--(8,3);
\draw[color=purple,line width=1pt](8,4)--(8,3);
\node[right] at (9.2,5){\color{black} $\mathcal M^*$};
\node[right] at (9.2,3.5){$\partial\mathcal M^*$};
\end{tikzpicture}
\end{tabular}
\caption{{An example of primal and dual meshes}}\label{fig_mesh}
\end{figure}
%
For all neighboring primal cells ${{K}}$ and ${{L}}$, we assume that $\partial{{K}}\cap\partial{{L}}$ is a segment, corresponding to an edge of the mesh $\mathcal M$,
denoted by $\sigma={{K}}\vert{{L}}$. Let $\mathcal{E}$ be the set of such edges. We similarly define the set $\mathcal{E}^*$ of the edges of the dual mesh.
For each couple $(\sigma,\sigma^*)\in\mathcal{E}\times\mathcal{E}^*$ such that $\sigma={{K}}\vert{{L}}=({\bm x}_{\ke},{\bm x}_{{{L}^*}})$ and $\sigma^*={ {K}^*}\vert{{L}^*}=({\bm x}_{{{K}}},{\bm x}_{{{L}}})$,
we define the quadrilateral diamond cell ${\mathcal{D}_{\sigma,\sigma^*}}$ whose diagonals are $\sigma$ and $\sigma^*$.
If $\sigma\in \mathcal{E}\cap\partial\Omega$,
we note that the diamond degenerates into a triangle. The set of the diamond cells defines the diamond mesh $\mathfrak{D}$.
It is a partition of $\Omega$ as shown on Figure~\ref{fig_mesh_diamond}.
We can rewrite $\mathfrak{D}=\mathfrak{D}^{\text{ext}}\cup \mathfrak{D}^{\text{int}}$ where $\mathfrak{D}^{\text{ext}}$ is the set of all the boundary diamonds and $\mathfrak{D}^{\text{int}}$ the set of all the interior diamonds.
Finally, the DDFV mesh is made of $\mathcal T=(\overline{\mathcal M},\overline{\mathfrak{M}^*})$ and $\mathfrak{D}$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=0.65]
\draw[line width=1pt, color=purple] (1,1)--(1,7)--(7,7)--(7,1)--cycle;
\draw[line width=.5pt,densely dashed] (1,1)--(2.5,3)--(3.5,1)--(5.5,3.5)--(7,1);
\draw[line width=.5pt,densely dashed](1,7)--(3.,5)--(4.5,7)--(5.5,3.5)--(7,7);
\draw[line width=.5pt,densely dashed] (7,4)--(5.5,3.5)--(2.5,3)--(1,4.5)--(3.,5)--(2.5,3);
\draw[line width=.5pt,densely dashed] (3.,5)--(5.5,3.5);
%
\draw[densely dotted,line width=2pt,color=red](3.5,1)--(2.33,1.67)--(2.5,3)--(3.83,2.5)--cycle;
\draw[densely dotted,line width=2pt,color=red](2.5,3)--(1.5,2.83)--(1,4.5)--(2.17,4.17)--cycle;
\draw[densely dotted,line width=2pt,color=red](1,7)--(1.67,5.5)--(3,5)--(2.83,6.33)--cycle;
\draw[densely dotted,line width=2pt,color=red](4.5,7)--(5.66,5.83)--(5.5,3.5)--(4.33,5.17)--cycle;
\draw[densely dotted,line width=2pt,color=red](6.5,4.83)--(7,4)--(6.5,2.83)--(5.5,3.5)--cycle;
\draw[densely dotted,line width=2pt,color=red](2.17,4.17)--(3,5)--(3.67,3.83)--(5.5,3.5)--(5.33,1.83)--(7,1);
\draw[densely dotted,line width=2pt,color=red](3.5,1)--(5.33,1.83);
\draw[densely dotted,line width=2pt,color=red](5.5,3.5)--(3.83,2.5);
\draw[densely dotted,line width=2pt,color=red](3.67,3.83)--(2.5,3);
\draw[densely dotted,line width=2pt,color=red](2.33,1.67)--(1,1)--(1.5,2.83);
\draw[densely dotted,line width=2pt,color=red](5.66,5.83)--(7,7)--(6.5,4.83);
\draw[densely dotted,line width=2pt,color=red](4.5,7)--(2.83,6.33);
\draw[densely dotted,line width=2pt,color=red](1,4.5)--(1.67,5.5);
\draw[densely dotted,line width=2pt,color=red](3,5)--(4.33,5.17);
\draw[densely dotted,line width=2pt,color=red](7,1)--(6.5,2.83);
\draw[densely dotted,line width=2pt,color=red](8.5,5.5)--(9,5.)--(8.5,4.5)--(8.,5.)--cycle;
\node[right] at (9.2,5){\color{black} $\mathfrak{D}$};
\end{tikzpicture}
\end{center}
\caption{{An example of diamond mesh $\mathfrak{D}$}}\label{fig_mesh_diamond}
\end{figure}
For a diamond ${\mathcal{D}_{\sigma,\sigma^*}}$, whose vertices are $({\bm x}_{{{K}}},{\bm x}_{\ke},{\bm x}_{{{L}}},{\bm x}_{{{L}^*}})$,
we define:
${\bm x}_{\mathcal{D}}$ the center of the diamond cell ${\mathcal{D}}$, ${\rm m}(\sigma)$ the length of the primal edge $ \sigma$,
${\rm m}_{\sige}$ the length of the dual edge $\sigma^*$, {$d_{\mathcal{D}} = \sup_{x,y\in{\mathcal{D}}}d(x,y)$} the diameter of ${\mathcal{D}}$, $\alpha_{\mathcal{D}}$ the angle between $({\bm x}_{{{K}}},{\bm x}_{{{L}}})$ and $({\bm x}_{\ke},{\bm x}_{{{L}^*}})$.
We will also use two direct bases $(\boldsymbol{{\boldsymbol{\tau}}}_{{\ke\!\!\!,{{L}^*}}},{\bm{n}}_{\sig K})$ and $(\boldsymbol{{\bm{n}}}_{\sige\ke},\boldsymbol{{\boldsymbol{\tau}}}_{{K\!,L}})$, where ${\bm{n}}_{\sig K}$ is the unit normal to $\sigma$ outward of ${{K}}$, $\boldsymbol{{\bm{n}}}_{\sige\ke}$ is the unit normal to $\sigma^*$ outward of ${ {K}^*}$, $\boldsymbol{{\boldsymbol{\tau}}}_{{\ke\!\!\!,{{L}^*}}}$ is the unit tangent vector to $\sigma$, oriented from ${ {K}^*}$ to ${{L}^*}$, and $\boldsymbol{{\boldsymbol{\tau}}}_{{K\!,L}}$ is the unit tangent vector to $\sigma^*$, oriented from ${{K}}$ to ${{L}}$. All these notations are presented in Figure~\ref{fig_diamonds}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=1.]
\node[rectangle,scale=0.8,fill=black!50] (xle) at (0,0) {};
\node[circle,draw,scale=0.5,fill=black!5] (xl) at (2,1.3) {};
\node[rectangle,scale=0.8,fill=black!50] (xke) at (0,4) {};
\node[circle,draw,scale=0.5,fill=black!5] (xk) at (-2,2.3) {};
\draw[line width=1pt,densely dashed] (xle)--(xke);
\draw[line width=1pt,color=blue] (xk)--(xl);
\draw[densely dotted,line width=2pt,color=red] (xk)--(xke)--(xl)--(xle)--(xk);
\node[yshift=-8pt] at (xle){${\bm x}_{{{L}^*}}$};
\node[yshift=8pt] at (xke){${\bm x}_{\ke}$};
\node[xshift=10pt] at (xl){${\bm x}_{{{L}}}$};
\node[xshift=-10pt] at (xk){${\bm x}_{{{K}}}$};
\draw[->,line width=1pt] (-2+3.*0.4,2.3-3.*0.1)--(-2+4.8*0.4,2.3-4.8*0.1);
\draw[->,line width=1pt] (-2+3.*0.4,2.3-3.*0.1)--(-2+3.*0.4-1.8*0.1,2.3-3.*0.1-1.8*0.4);
\draw[->,line width=1pt] (0,2.7)--(0,2.);
\draw[->,line width=1pt] (0,2.7)--(.7,2.7);
\node[right] at (0,2.3){$\boldsymbol{{\boldsymbol{\tau}}}_{{\ke\!\!\!,{{L}^*}}}$};
\node[right] at (0.,2.95){${\bm{n}}_{\sig K}$};
\node at (-0.5,2.2){$\boldsymbol{{\boldsymbol{\tau}}}_{{K\!,L}}$};
\node at (-0.45,1.25){$\boldsymbol{{\bm{n}}}_{\sige\ke}$};
\draw[line width=1pt,densely dashed] (2.2,4)--(2.8,4);
\node[right,xshift=8] at (2.8,4){$\sigma={{K}}\vert{{L}}$, edge of the primal mesh};
\draw[line width=1pt, color=blue] (2.2,3.5)--(2.8,3.5);
\node[right,xshift=8] at (2.8,3.5){$\sigma^*={ {K}^*}\vert{{L}^*}$, edge of the dual mesh};
\draw[densely dotted,line width=2pt,color=red] (2.2,3)--(2.8,3);
\node[right,xshift=8] at (2.8,3.){Diamond ${\mathcal{D}_{\sigma,\sigma^*}}$};
\node[rectangle,scale=0.8,fill=black!50] at (2.8,2.5) {};
\node[circle,draw,scale=0.5,fill=black!5] at (2.8,2.) {};
\node[right,xshift=8] at (2.8,2.5){Vertices of the primal mesh};
\node[right,xshift=8] at (2.8,2.){Centers of the primal mesh};
\node [right] at (0,1.5) {${\bm x}_{\mathcal{D}}$};
\node at (0,1.8) {$\bullet$};
\node[rectangle,scale=0.8,fill=black!50] (xle2) at (11,0) {};
\node[circle,draw,scale=0.5,fill=black!5] (xl2) at (11,2) {};
\node[rectangle,scale=0.8,fill=black!50] (xke2) at (11,4) {};
\node[circle,draw,scale=0.5,fill=black!5] (xk2) at (9.5,2.3) {};
\draw[line width=1pt,color=blue] (xk2)--(xl2);
\draw[densely dotted,line width=2pt, color=red] (xle2)--(xk2)--(xke2)--(xle2);
\draw[line width=1pt,densely dashed] (xle2)--(xke2);
\node[yshift=-8] at (xle2){${\bm x}_{{{L}^*}}$};
\node[yshift=8] at (xke2){${\bm x}_{\ke}$};
\node[xshift=10] at (xl2){${\bm x}_{{{L}}}$};
\node[xshift=-10] at (xk2){${\bm x}_{{{K}}}$};
\end{tikzpicture}
\end{center}
\caption{Definition of the diamonds ${\mathcal{D}_{\sigma,\sigma^*}}$ and related notations.}\label{fig_diamonds}
\end{figure}
For each primal cell $K\in {\overline \mathcal M}$ ({\em resp.} dual cell
${ {K}^*}\in {\overline \mathfrak{M}^*}$), we define ${\rm m}_K$ the measure of $K$, $\mathcal{E}_K$ the set of the edges of $K$
(it coincides with the edge $\sigma=K$ if $K\in\partial\mathcal M$), $\mathfrak{D}_K$ the set of diamonds ${\mathcal{D}_{\sigma,\sigma^*}}\in\mathfrak{D}$ such that ${m}({\mathcal{D}_{\sigma,\sigma^*}}\cap K)>0$,
and {$d_K = \sup_{x,y\in K}d(x,y)$} the diameter of $K$ ({\em resp.} ${\rm m}_{ {K}^*}$, $\mathcal{E}_{ {K}^*}$, $\mathfrak{D}_{ {K}^*}$, $d_{ {K}^*}$).
Denoting by ${\rm m}_{\Ds}$ the $2$-dimensional Lebesgue measure of ${\mathcal{D}}$, one has
\begin{equation}\label{eq:m_Dd}
{\rm m}_{\Ds} = \frac12 {\rm m}(\sigma){\rm m}_{\sige}\sin(\alpha_{\mathcal{D}}), \qquad \forall {\mathcal{D}} = {\mathcal{D}}_{ \sigma, \sigma^\ast}\in \mathfrak{D}.
\end{equation}
We assume some regularity of the mesh, as in \cite{CCHK}. To this end, we define two local regularity factors $\theta_{\mathcal{D}}, {\tilde \theta}_{\mathcal{D}}$ of the diamond cell ${\mathcal{D}}={\mathcal{D}}_{ \sigma,\sigma^*}\in\mathfrak{D}$ by
$$
\theta_{\mathcal{D}} = \frac1{2\sin(\alpha_{\mathcal{D}})}\left(\frac{{\rm m}(\sigma)}{{\rm m}_{\sige}} + \frac{{\rm m}_{\sige}}{{\rm m}(\sigma)}\right),
\
\tilde \theta_{\mathcal{D}} = \max \left( \max_{\stackrel{K \in \mathcal M,}{ {\rm m}_{{\mathcal{D}} \cap K}>0}} \frac{{\rm m}_{\Ds}}{{\rm m}_{{\mathcal{D}} \cap K}}\, ; \,\max_{\stackrel{K^* \in \mathcal M^*,}{ {\rm m}_{{\mathcal{D}} \cap K^*}>0}} \frac{{\rm m}_{\Ds}}{{\rm m}_{{\mathcal{D}} \cap K^*}} \right)
$$
and we assume that there exists ${ \Theta} \ge 1$ such that
\begin{equation}\label{eq:Theta}
1 \leq \theta_{\mathcal{D}}, \tilde \theta_{\mathcal{D}} \leq {\Theta}, \qquad \forall {\mathcal{D}} \in \mathfrak{D}.
\end{equation}
In particular, this implies that
\begin{equation}\label{hypsinalpha}
\sin(\alpha_{\mathcal{D}}) \ge \Theta^{-1}, \qquad \forall {\mathcal{D}} \in \mathfrak{D}.
\end{equation}
Let us introduce the sets of discrete unknowns. $\mathbb R^\mathcal T$ is the linear space of scalar fields that are constant on the primal and dual cells, and $(\mathbb R^2)^\mathfrak{D}$ is the linear space of vector fields that are constant on the diamonds. We have
\begin{eqnarray*}
u_{\mathcal T}\in \mathbb R^\mathcal T& \Longleftrightarrow& u_{\mathcal T}=\left(\left(u_K\right)_{K\in{\overline \mathcal M}},\left(u_{ {K}^*}\right)_{{ {K}^*}\in{\overline \mathfrak{M}^*}}\right)\\
{{\bm \xi}}_\mathfrak{D}\in (\mathbb R^2)^\mathfrak{D} & \Longleftrightarrow&{{\bm \xi}}_\mathfrak{D}=\left({{\bm \xi}}_{\mathcal{D}}\right)_{{\mathcal{D}}\in\mathfrak{D}}\,.
\end{eqnarray*}
Then, we define the positive semi-definite bilinear form $\llbracket\cdot,\cdot\rrbracket_\mathcal T $
on $\mathbb R^\mathcal T $ and the scalar product $(\cdot,\cdot)_{{\bm \varLambda}, \mathfrak{D}} $ on $(\mathbb R^2)^\mathfrak{D}$ by
$$
\begin{aligned}
&\displaystyle\left\llbracket v_{\mathcal T},u_{\mathcal T}\right\rrbracket_\mathcal T&&=&&\frac{1}{2}\left(\sum_{{{K}}\in\M}{\rm m}_{{{K}}}u_{{{K}}} v_K+\sum_{\ke\in{\overline\Mie}} {\rm m}_{\ke}u_{\ke} v_{ {K}^*}
\right), \ \ \ \forall u_{\mathcal T},v_{\mathcal T}\in \mathbb R^\mathcal T,\\
&\displaystyle\left({\bm \xi}_\mathfrak{D},{\bm \phi}_\mathfrak{D}\right)_{{\bm \varLambda}, \mathfrak{D}}&&=&&\sum_{\Ds\in\DD}{\rm m}_{\Ds}\
{\bm \xi}_{\mathcal{D}}\cdot {\bm \varLambda}^{\mathcal{D}} {\bm \phi}_{\mathcal{D}},\ \ \ \forall {\bm \xi}_\mathfrak{D},{\bm \phi}_\mathfrak{D}\in(\mathbb R^2)^\mathfrak{D},
\end{aligned}
$$
where
$${\bm \varLambda}^{\mathcal{D}} = \frac1{{\rm m}_{\Ds}} \int_{{\mathcal{D}}} {\bm \varLambda}({\bm x}) \,\mathrm{d}{\bm x}, \qquad \forall {\mathcal{D}} \in \mathfrak{D}.$$
We denote by $\|\cdot\|_{{\bm \varLambda},\mathfrak{D}}$ the Euclidean norm associated to the scalar product $\left(\cdot, \cdot\right)_{{\bm \varLambda},\mathfrak{D}}$, i.e.,
$$
\left\| {\bm \xi}_\mathfrak{D} \right\|_{{\bm \varLambda},\mathfrak{D}}^2 = \left({\bm \xi}_\mathfrak{D}, {\bm \xi}_\mathfrak{D} \right)_{{\bm \varLambda},\mathfrak{D}}, \qquad \forall {\bm \xi}_\mathfrak{D} \in (\mathbb R^2)^\mathfrak{D}.
$$
Let us remark that, due to the ellipticity (A4) of ${\bm\varLambda}$, we have
\begin{equation}\label{ineg:normesD}
\left\| {\bm \xi}_\mathfrak{D} \right\|_{{\bm \varLambda},\mathfrak{D}}^2\geq \lambda_m\left\| {\bm \xi}_\mathfrak{D} \right\|_{{\bm I},\mathfrak{D}}^2 \mbox{ with } \left\Vert {\bm \xi}_\mathfrak{D} \right\Vert_{{\bm I},\mathfrak{D}}^2=\displaystyle\sum_{\Ds\in\DD} {\rm m}_{\Ds} \vert {\bm \xi}_{\mathcal{D}}\vert^2.
\end{equation}
\subsection{The nonlinear DDFV scheme: presentation and \emph{a priori} estimates}
\subsubsection*{Discrete operators}
The DDFV method is based on a discrete duality formula which links a discrete gradient operator to a discrete divergence operator, as shown in \cite{DO_05}.
In this paper we do not need to introduce the discrete divergence; we only define the discrete gradient. It has been introduced in \cite{CVV_99} and developed in \cite{DO_05}; it is a mapping from $\mathbb R^\mathcal T$ to $(\mathbb R^2)^\mathfrak{D}$ defined
by $\nabla^{\DD } u_{\mathcal T} =\displaystyle\left(\nabla^{\Ds} u_{\mathcal T}\right)_{{\mathcal{D}}\in\mathfrak{D}}$ for all $u_{\mathcal T}\in \mathbb R^\mathcal T$,
where
\[
\nabla^{\Ds} u_{\mathcal T} =\frac{1}{\sin(\alpha_{\mathcal{D}})}\left(\frac{u_{{{L}}}-u_{{{K}}}}{{\rm m}_{\sige}}{\bm{n}}_{\sig K}+\frac{u_{{{L}^*}}-u_{\ke}}{{\rm m}(\sigma)}\boldsymbol{{\bm{n}}}_{\sige\ke}\right), \quad \forall {\mathcal{D}}\in\mathfrak{D}.
\]
Using \eqref{eq:m_Dd}, the discrete gradient can be equivalently written:
\[
\nabla^{\Ds} u_{\mathcal T} =\frac{1}{2{\rm m}_{\Ds}}\left({\rm m}(\sigma)(u_{{{L}}}-u_{{{K}}}){\bm{n}}_{\sig K}+{\rm m}_{\sige}(u_{{{L}^*}}-u_{\ke})\boldsymbol{{\bm{n}}}_{\sige\ke}\right), \quad \forall {\mathcal{D}}\in\mathfrak{D}.
\]
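Both expressions coincide since, by \eqref{eq:m_Dd}, $\frac{{\rm m}(\sigma)}{2{\rm m}_{\Ds}}=\frac{1}{{\rm m}_{\sige}\sin(\alpha_{\mathcal{D}})}$ and $\frac{{\rm m}_{\sige}}{2{\rm m}_{\Ds}}=\frac{1}{{\rm m}(\sigma)\sin(\alpha_{\mathcal{D}})}$.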
For $u_{\mathcal T}\in\mathbb R^\mathcal T$ and ${\mathcal{D}}\in\mathfrak{D}$, we can define $\delta^{\mathcal{D}} u_{\mathcal T}$ by
$$
\delta^{\mathcal{D}} u_{\mathcal T}=\left(\begin{array}{c} u_K-u_L\\ u_{\ke}-u_{{{L}^*}}\end{array}\right).
$$
Then, we can write
$$
(\nabla^{\DD } u_{\mathcal T},\nabla^{\DD } v_\mathcal T)_{{\bm \varLambda}, \mathfrak{D}}=\sum_{\Ds\in\DD} \delta^{\mathcal{D}} u_{\mathcal T}\cdot {\mathbb A}^{{\mathcal{D}}}\delta^{\mathcal{D}} v_\mathcal T,
$$
where the local matrices ${\mathbb A}^{{\mathcal{D}}}$ are defined by
$
{\mathbb A}^{\mathcal{D}}=\begin{pmatrix}
A^{\mathcal{D}}_{ \sigma, \sigma} & A^{\mathcal{D}}_{ \sigma,\sigma^*} \\
A^{\mathcal{D}}_{ \sigma,\sigma^*} & A^{\mathcal{D}}_{\sigma^*,\sigma^*}
\end{pmatrix},
$
with
$$
\begin{aligned}
A^{\mathcal{D}}_{ \sigma, \sigma}&= \frac{1}{4 {\rm m}_{\Ds}}{\rm m}(\sigma)^2 ({\bm \varLambda}^{\mathcal{D}} \boldsymbol{{\bm{n}}}_{{{K}}, \sigma}\cdot \boldsymbol{{\bm{n}}}_{{{K}}, \sigma}),\\
A^{\mathcal{D}}_{ \sigma,\sigma^*}& =\frac{1}{4 {\rm m}_{\Ds}}{\rm m}(\sigma) {{\rm m}_{\sige}} ({\bm \varLambda}^{\mathcal{D}} \boldsymbol{{\bm{n}}}_{{{K}}, \sigma}\cdot \boldsymbol{{\bm{n}}}_{{ {K}^*},\sigma^*}),\\
A^{\mathcal{D}}_{\sigma^*,\sigma^*}& =\frac{1}{4 {\rm m}_{\Ds}}{\rm m}_{\sige}^2 ({\bm \varLambda}^{\mathcal{D}} \boldsymbol{{\bm{n}}}_{{ {K}^*},\sigma^*}\cdot \boldsymbol{{\bm{n}}}_{{ {K}^*},\sigma^*}).
\end{aligned}
$$
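As a consistency check, plugging the second expression of $\nabla^{\Ds}$ into ${\rm m}_{\Ds}\,\nabla^{\Ds} u_{\mathcal T}\cdot{\bm \varLambda}^{\mathcal{D}}\nabla^{\Ds} v_\mathcal T$, the coefficient of $(u_K-u_L)(v_K-v_L)$ is $\frac{{\rm m}(\sigma)^2}{4{\rm m}_{\Ds}}\,({\bm \varLambda}^{\mathcal{D}}\boldsymbol{{\bm{n}}}_{{{K}},\sigma}\cdot\boldsymbol{{\bm{n}}}_{{{K}},\sigma})=A^{\mathcal{D}}_{\sigma,\sigma}$, and the other entries of ${\mathbb A}^{\mathcal{D}}$ are recovered in the same way.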
We also introduce a reconstruction operator on diamonds $r^\mathfrak{D}$. It is a mapping from $\mathbb R^\mathcal T$ to $\mathbb R^\mathfrak{D}$ defined for all $u_{\mathcal T}\in\mathbb R^\mathcal T$ by $r^\mathfrak{D}[u_{\mathcal T}] =\left(r^{\mathcal{D}}(u_{\mathcal T})\right)_{{\mathcal{D}}\in\mathfrak{D}}$.
For ${\mathcal{D}}\in\mathfrak{D}$, whose vertices are $x_K$, $x_L$, $x_{ {K}^*}$, $x_{{L}^*}$, we define
\begin{equation}\label{eq:rD}
r^{\mathcal{D}}(u_{\mathcal T})=f(r(u_{{{K}}},u_{{{L}}}),r(u_{\ke},u_{{{L}^*}})),
\end{equation}
where $r$ satisfies the properties \eqref{hyp:g} and $f$ is either defined by $f(x,y)=\max (x,y)$ or by $f(x,y)=(x+y)/2$.
\subsubsection*{Definition of the scheme}
Let us first define the discrete initial condition $u_{\mathcal T}^0$ by taking the mean values of $u_0$ on the primal and the dual meshes. For all $K \in \mathcal M$ and $K^\ast \in \overline{\mathcal M^\ast}$, we set
\[
u_K^0 \,=\, \frac1{{\rm m}_{{{K}}}} \int_K u_0\,, \quad u_{K^\ast}^0 \,=\, \frac1{{\rm m}_{\ke}} \int_{K^\ast} u_0\,, \quad u_{\partial\mathcal M}^0\,=\,0\,.
\]
The exterior potential $V$ is discretized by taking its nodal values on the primal and dual cells, namely
$V_K \,=\, V({\bm x}_K)$ and $V_{K^\ast} \,=\, V({\bm x}_{K^\ast})$
for all $K \in \overline\mathcal M$ and $K^\ast \in \overline{\mathcal M^\ast}$.
We can now define the nonlinear DDFV scheme, as introduced in \cite{CCHK} but without the stabilization term. Indeed, while the stabilization term seems crucial for the proof of convergence of the scheme, the numerical experiments show that it has no significant influence on its behavior. The scheme is the following: for all $n\geq 0$, we look for $u_{\mathcal T}^{n+1}\in (\mathbb R_+^\ast)^\mathcal T$ solution to the variational formulation:
\begin{subequations}\label{schemeDDFV}
\begin{align}
&\Bigl\llbracket\displaystyle\frac{u_{\mathcal T}^{n+1}-u_{\mathcal T}^n}{\Delta t}, \psi_\mathcal T\Bigl\rrbracket_\mathcal T+T_{\mathfrak{D}}(u_{\mathcal T}^{n+1}; g_\mathcal T^{n+1},\psi_\mathcal T) =0,\quad \forall \psi_\mathcal T\in\mathbb R^\mathcal T,\label{sch_formcompacte}\\
& T_{\mathfrak{D}}(u_{\mathcal T}^{n+1}; g_\mathcal T^{n+1},\psi_\mathcal T)=\sum_{\Ds\in\DD} r^{\mathcal{D}}(u_{\mathcal T}^{n+1}) \, \delta^{\mathcal{D}} g_\mathcal T^{n+1}\cdot {\mathbb A}^{{\mathcal{D}}}\delta^{\mathcal{D}} \psi_\mathcal T,\label{sch_defTd}\\
& g_\mathcal T^{n+1}=\log (u_{\mathcal T}^{n+1})+V_\mathcal T.\label{sch_defgt}
\end{align}
\end{subequations}
\subsubsection*{Conservation of mass and steady-state} By choosing
successively $\psi_\mathcal T=((1)_{K\in{\overline\mathcal M}},(0)_{{ {K}^*}\in\overline \mathfrak{M}^*})$ and $\psi_\mathcal T=((0)_{K\in{\overline\mathcal M}},(1)_{{ {K}^*}\in\overline\mathfrak{M}^*})$ as test functions in \eqref{schemeDDFV}, we obtain that the mass is conserved on the primal mesh and on the dual mesh: that is for all $n\geq0$ one has
\[
\displaystyle\sum_{{{K}}\in\M}{\rm m}_{{{K}}} u_{{{K}}}^n =\sum_{{{K}}\in\M}{\rm m}_{{{K}}} u_{{{K}}}^0=M^1 =
\displaystyle\sum_{\ke\in{\overline\Mie}}{\rm m}_{\ke} u_{\ke}^n =\sum_{\ke\in{\overline\Mie}}{\rm m}_{\ke} u_{\ke}^0,
\qquad \forall n \geq 0.
\]
The steady-state $u_{\mathcal T}^{\infty}$ associated to the scheme \eqref{schemeDDFV} is defined for all $ K\in\mathcal M$ and ${ {K}^*} \in{\overline \mathfrak{M}^*}$ by
$$
\begin{aligned}
u_K^{\infty}&=\rho e^{-V_K}\,,\quad\mbox{with}\ \rho = M^1\left(\sum_{{{K}}\in\M} {\rm m}_{{{K}}} e^{-V_K}\right)^{-1}\,,\\
u_{ {K}^*}^{\infty}&=\rho^\ast e^{-V_{ {K}^*}}\,,\quad\mbox{with}\ \rho^\ast =M^1\left(\sum_{\ke\in{\overline\Mie}} {\rm m}_{\ke} e^{-V_{ {K}^*}}\right)^{-1}\,.
\end{aligned}
$$
Let us remark that, as for the TPFA scheme, there exist $m^\infty>0$ and $M^{\infty}>0$ (and we keep the same notations) such that for all $K\in\mathcal M$ and ${ {K}^*}\in{\overline\mathfrak{M}^*}$
\begin{equation}\label{bound:steadystate_DDFV}
m^\infty\leq u_K^\infty\,,\ u_{ {K}^*}^\infty \leq M^{\infty}\,.
\end{equation}
\subsubsection*{Entropy-dissipation estimate and existence result}
As for the TPFA scheme, we introduce the discrete relative entropy $({\mathbb E}_{1,\mathcal T}^n)_{n\geq 0}$ obtained with the Gibbs entropy $\Phi_1$. It is defined by
$$
{\mathbb E}_{1,\mathcal T}^n=\llbracket u_{\mathcal T}^{\infty}\Phi_1(u_{\mathcal T}^n/u_{\mathcal T}^\infty),{1}_\mathcal T\rrbracket_\mathcal T=\llbracket u_{\mathcal T}^{n}\log(u_{\mathcal T}^n/u_{\mathcal T}^\infty),{1}_\mathcal T\rrbracket_\mathcal T,\quad \forall n\geq 0,
$$
where the second equality comes from the conservation of mass on each mesh. The discrete entropy dissipation is defined by
$$
{\mathbb I}_{1,\mathcal T}^{n+1} =T_{\mathfrak{D}}(u_{\mathcal T}^{n+1}; g_\mathcal T^{n+1},g_\mathcal T^{n+1}), \quad \forall n\geq 0.
$$
We notice that the definition of the steady-state implies that $\delta^{\mathcal{D}} g_\mathcal T^{n+1}=\delta^{\mathcal{D}} \log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^{\infty})$ for all ${\mathcal{D}}\in\mathfrak{D}$. Therefore
${\mathbb I}_{1,\mathcal T}^{n+1}$ rewrites for all $n\geq0$ as
$$
{\mathbb I}_{1,\mathcal T}^{n+1}=\sum_{\Ds\in\DD} r^{\mathcal{D}}(u_{\mathcal T}^{n+1}) \, \delta^{\mathcal{D}} \log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^{\infty})\cdot {\mathbb A}^{{\mathcal{D}}}\delta^{\mathcal{D}} \log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^{\infty})\,.
$$
We can now state the existence of a positive discrete solution to the scheme \eqref{schemeDDFV}, together with the discrete entropy/entropy dissipation estimate.
\begin{prop}\label{theo:ex_DDFV}
For all $n\geq 0$, there exists a solution $u_{\mathcal T}^{n+1}\in (\mathbb R_+^\ast)^\mathcal T$ to the nonlinear system \eqref{schemeDDFV} that satisfies the discrete entropy/entropy dissipation estimate:
\begin{equation}\label{entropy_dissip_DDFV}
\displaystyle\frac{{\mathbb E}_{1,\mathcal T}^{n+1}-{\mathbb E}_{1,\mathcal T}^{n}}{\Delta t}+ {\mathbb I}_{1,\mathcal T}^{n+1}\leq 0,\mbox{ for all } n\geq 0.
\end{equation}
\end{prop}
The proof is a direct adaptation of the proof of \cite[Theorem 2.2]{CCHK} to the case without stabilization and with a discrete relative entropy (the entropy defined in \cite{CCHK} is a general entropy, not relative to the steady state, and differs from ${\mathbb E}_{1,\mathcal T}^n$ by a constant).
Let us just mention that \eqref{entropy_dissip_DDFV} is obtained by using $\psi_\mathcal T=\log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty)$ as a test function in \eqref{sch_formcompacte}.
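More precisely, the convexity of $\Phi_1$ gives ${\mathbb E}_{1,\mathcal T}^{n+1}-{\mathbb E}_{1,\mathcal T}^{n}\leq \llbracket u_{\mathcal T}^{n+1}-u_{\mathcal T}^{n},\log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty)\rrbracket_\mathcal T$, while \eqref{sch_formcompacte} with this test function yields $\llbracket u_{\mathcal T}^{n+1}-u_{\mathcal T}^{n},\log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty)\rrbracket_\mathcal T=-\Delta t\, T_{\mathfrak{D}}(u_{\mathcal T}^{n+1};g_\mathcal T^{n+1},\log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty))=-\Delta t\,{\mathbb I}_{1,\mathcal T}^{n+1}$.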
{\begin{rema}
Let us point out that, while an entropy inequality still holds for the DDFV scheme with the stabilization term of \cite{CCHK}, the long time analysis performed in the next section does not seem to extend to the stabilized scheme. Indeed, with the penalization term, the conservation of mass on the primal and dual meshes no longer holds, which prevents the use of the discrete log-Sobolev inequality of Section~\ref{sec:logSob}.
Nevertheless, let us recall that the stabilization term is mainly introduced to overcome a technical issue related to the identification of the limit in the convergence proof. At the practical level of numerical experiments, no noticeable difference has been observed between the behavior of the scheme with stabilization and its present version, both at the level of long time behavior and convergence. We refer to \cite{CCHK} for more details.
\end{rema}
}
\subsection{Analysis of the long time behavior}
It remains to establish the exponential decay towards $0$ of the discrete relative entropy $({\mathbb E}_{1,\mathcal T}^{n})_{n\geq 0}$. As for the TPFA finite volume scheme, it relies on a relation between the discrete entropy and the discrete entropy dissipation of the form \eqref{ineq:EI}, which is itself a consequence of the discrete log-Sobolev inequality established in Proposition~\ref{prop:logSobDDFV}.
\begin{theo}\label{theo:expdec_DDFV}
Let us assume that $\mathcal E_{\text{ext}}^D=\emptyset$. Then, there exists $\nu$ depending only on the domain $\Omega$, the regularity of the mesh $\Theta$, the initial condition $u_0$, the exterior potential $V$ and the anisotropy tensor $\varLambda$ {\em via} $\lambda_m$ and $\lambda^M$, such that
\begin{equation}\label{expdec_DDFV}
{\mathbb E}_{1,\mathcal T}^n\ \leq\ (1\,+\,\nu \Delta t)^{-n}\,{\mathbb E}_{1,\mathcal T}^0\,,\quad\forall n\geq 0\,.
\end{equation}
Thus for any $k>0$, if $\Delta t\leq k$, one has for all $n\geq0$ that ${\mathbb E}_{1,\mathcal T}^n\leq e^{-{\tilde \nu} t^n}{\mathbb E}_{1,\mathcal T}^0$ with ${\tilde\nu}=\log (1+\nu k)/k$.
\end{theo}
\begin{proof}
In the following, $C$ will denote any positive constant depending only on $\Omega$, $\Theta$, $\lambda_m$ and $\lambda^M$. As for the TPFA finite volume scheme, we start by introducing a discrete counterpart of $4 u^\infty|\nabla ( u/u^\infty)^{1/2}|^2$ in the DDFV framework. We define for all $n\geq0$
\[
{\widehat{\mathbb I}}_{1,\mathcal T}^{n+1}=4\sum_{\Ds\in\DD} {\bar u}_{\mathcal{D}}^\infty \delta^{\mathcal{D}} \sqrt{ u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty}\cdot {\mathbb A}^{\mathcal{D}} \delta^{\mathcal{D}}\sqrt{ u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty}
\]
with
\[
{\bar u}_{\mathcal{D}}^\infty=\min(u_K^\infty, u_L^\infty,u_{ {K}^*}^\infty,u_{{L}^*}^\infty)\,.
\]
In a first step, we compare ${{\mathbb I}}_{1,\mathcal T}^{n+1}$ to ${\widehat{\mathbb I}}_{1,\mathcal T}^{n+1}$. For all ${\mathcal{D}}\in\mathfrak{D}$, we introduce the diagonal matrix ${\mathbb B}^{\mathcal{D}}$, whose diagonal coefficients are $B_{ \sigma, \sigma}^{\mathcal{D}}= |A_{ \sigma, \sigma}^{\mathcal{D}}|+|A_{ \sigma,\sigma^*}^{\mathcal{D}}|$ and $B_{\sigma^*,\sigma^*}^{\mathcal{D}}= |A_{\sigma^*,\sigma^*}^{\mathcal{D}}|+|A_{ \sigma,\sigma^*}^{\mathcal{D}}|$. Then it is shown in \cite{CCHK} that for all ${\mathcal{D}}\in\mathfrak{D}$, there holds
\[
{\bm w}\cdot {\mathbb A}^{\mathcal{D}}{\bm w} \,\leq\, {\bm w}\cdot {\mathbb B}^{\mathcal{D}}{\bm w}\,\leq\, C\,{\bm w}\cdot {\mathbb A}^{\mathcal{D}}{\bm w},\quad \forall {\bm w}\in\mathbb R^2\,.
\]
Therefore, on the one hand,
\[
{{\mathbb I}}_{1,\mathcal T}^{n+1}\geq C \sum_{\Ds\in\DD} r^{\mathcal{D}}(u_{\mathcal T}^{n+1}) \, \delta^{\mathcal{D}} \log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^{\infty})\cdot {\mathbb B}^{{\mathcal{D}}}\delta^{\mathcal{D}} \log(u_{\mathcal T}^{n+1}/u_{\mathcal T}^{\infty})
\]
and on the other hand
\[
{\widehat{\mathbb I}}_{1,\mathcal T}^{n+1}\leq 4\sum_{\Ds\in\DD} {\bar u}_{\mathcal{D}}^\infty \delta^{\mathcal{D}} \sqrt{u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty}\cdot {\mathbb B}^{\mathcal{D}} \delta^{\mathcal{D}}\sqrt{u_{\mathcal T}^{n+1}/u_{\mathcal T}^\infty}\,.
\]
Besides, as ${\mathbb B}^{\mathcal{D}}$ is a diagonal matrix, for all ${\mathcal{D}}\in\mathfrak{D}$ we have
\[
\delta^{\mathcal{D}}\sqrt{ \displaystyle\frac{u_{\mathcal T}^{n+1}}{u_{\mathcal T}^\infty}}\cdot {\mathbb B}^{\mathcal{D}} \delta^{\mathcal{D}}\sqrt{ \displaystyle\frac{u_{\mathcal T}^{n+1}}{u_{\mathcal T}^\infty}}\ =\ B_{ \sigma, \sigma}^{\mathcal{D}} \left(\sqrt{\displaystyle \frac{u_K^{n+1}}{u_K^\infty}}-
\sqrt{\displaystyle \frac{u_L^{n+1}}{u_L^\infty}}\right)^2+B_{\sigma^*,\sigma^*}^{\mathcal{D}} \left(\sqrt{\displaystyle \frac{u_{ {K}^*}^{n+1}}{u_{ {K}^*}^\infty}}-
\sqrt{\displaystyle \frac{u_{{L}^*}^{n+1}}{u_{{L}^*}^\infty}}\right)^2.
\]
Adapting the inequalities \eqref{I_Ihat_1} and \eqref{I_Ihat_2} on the primal and the dual mesh, we obtain, thanks to the definition of ${\bar u}_{\mathcal{D}}^\infty$,
\begin{multline*}
4{\bar u}_{\mathcal{D}}^\infty \delta^{\mathcal{D}} \sqrt{ \displaystyle\frac{u_{\mathcal T}^{n+1}}{u_{\mathcal T}^\infty}}\
|
the screen. This is distinct from Hawking's area theorem, which applies to causal horizons. Holographic screens exist generically, including in our universe, and regardless of the presence of black hole or de Sitter horizons.
Our proof, like Hawking's for his theorem, relied on the Null Energy Condition, that the stress tensor satisfies $T_{ab} k^a k^b\geq 0$ for any null vector $k^a$. As noted earlier, this condition can be violated in quantum field theory. Thus, it is natural to propose that the quantum-corrected area would satisfy a more robust monotonicity property. We called this conjecture the generalized second law for cosmology~\cite{BouEng15c}:
\begin{equation}
d S_\text{gen} >0.
\end{equation}
(The strict inequality arises from a genericity assumption that simplifies the analysis.)
This is the first rigorous and covariant statement of a Second Law in cosmology. It does not, for example, rely on symmetries like homogeneity to subdivide a portion of the universe and consider its comoving entropy. Such a definition would at best be approximate and would not apply in general spacetimes.
Yet, our Second Law remains somewhat mysterious, in that we know of no thermodynamic argument for why it should hold. This is in marked contrast to Bekenstein's GSL, which was arrived at not by quantum-correcting Hawking's area theorem, but by demanding that the laws of thermodynamics hold in the exterior of black holes. (See Sections~\ref{sec-bhe}, \ref{sec-gsl} above.) It would be nice to fill this gap.
\subsection{Quantum Focussing Conjecture}
A consequence of Einstein's equation, Raychaudhuri's equation
\begin{equation}
\theta' = -\frac{1}{2}\theta^2 - \varsigma_{ab}\varsigma^{ab} - 8\pi G\, T_{ab}k^a k^b
\label{eq-raych}
\end{equation}
describes the evolution of a family (congruence) of null geodesics that form a null hypersurface~\cite{HawEll,Wald}. Here $\theta$ is the expansion, i.e., the logarithmic derivative of a small area element transported along the congruence; it is the trace of the null extrinsic curvature, whose trace-free symmetric part gives the shear, $\varsigma_{ab}$. (We consider surface-orthogonal null congruences so there is no antisymmetric part or ``twist.'') The prime refers to a derivative with respect to the affine parameter along the geodesics.
Assuming the null energy condition, $T_{ab}k^a k^b\geq 0$, Eq.~(\ref{eq-raych}) implies
\begin{equation}
\theta'\leq 0~.
\label{eq-classicalfocus}
\end{equation}
This is the classical focussing property of light: matter focusses but never anti-focusses light-rays. For example, initially parallel light-rays grazing the Sun are bent inwards, not outwards by the Sun's gravity.
If the null energy condition is violated, for example by Casimir energy or near a black hole horizon, then light-rays can antifocus, i.e., one can have $\theta'>0$. One can ask, however, whether a quantum-corrected version of Eq.~(\ref{eq-classicalfocus}) remains valid.
To find an appropriate formulation~\cite{BouFis15a}, we appealed to Bekenstein's generalized entropy, interpreting it as a quantum-corrected area $A_Q = 4G \hbar S_\text{gen}$. For any two-dimensional surface $\Sigma$ that divides the world into two sides, we defined the ``quantum expansion'' $\Theta$ analogous to $\theta$, as the response of the generalized entropy of $\Sigma$ to local deformations of the $\Sigma$ along an orthogonal light-ray.\footnote{Strictly, $\Theta$ is a functional derivative of the generalized entropy and so depends on $\Sigma$ globally. It reduces to the ordinary derivative in Eq.~(\ref{eq-classicalfocus}) as $G\hbar\to 0$, which has no such dependence.} It was then natural to conjecture that
\begin{equation}
\Theta'\leq 0~.
\label{eq-qfc}
\end{equation}
This is the {\em Quantum Focussing Conjecture} (QFC). See~\cite{BouFis15a} for precise definitions.
Some evidence for the validity of this conjecture comes from the fact that it reproduces other well-tested or proven statements as limiting cases. For example, if $\Sigma$ has $\Theta\leq 0$ initially, then Eq.~(\ref{eq-qfc}) implies the generalized covariant entropy bound~\cite{CEB1,FMW}, Eq.~(\ref{eq-gceb}).
In particular, the QFC thus implies the Bekenstein bound~\cite{Bek81} in the strong form of Eq.~(\ref{eq-tight})~\cite{Bou02}.
The formulation of the QFC in terms of $S_\text{gen}$ provides a more rigorous statement of both bounds: the generalized entropy of the initial surface is greater than that of the final surface. (There exists another quantum version of the covariant bound which is defined only in the weakly gravitating limit~\cite{BouCas14a,BouCas14b}. In this regime, both versions can be applied, and there appears to be no logical relation between them. It would be nice to understand this better. Perhaps there exists a stronger statement that implies both?)
A further consequence of the QFC will be discussed in the next subsection.
\subsection{Quantum Null Energy Condition}
\label{sec-qnec}
The QFC led to a new result in quantum field theory, the {\em Quantum Null Energy Condition}~\cite{BouFis15a,BouFis15b}. The QNEC states that the local energy density of matter is bounded from below by an information-theoretic quantity:
\begin{equation}
T_{ab} k^a k^b \geq \frac{\hbar}{2\pi} S_\text{out}'' ~.
\label{eq-qnec}
\end{equation}
Here $T_{ab}$ is the stress tensor at a point $p$. We have implicitly chosen a surface $\Sigma$ that contains $p$ and divides the world into two parts, such that the classical expansion and shear at $\Sigma$ vanish.\footnote{An example of this is any infinite spatial plane in Minkowski space containing $p$. One can also consider more general settings~\cite{BouFis15b,Lei17,AkeCha17}.} The null vector $k^a$ is orthogonal to $\Sigma$ at $p$, and $S_\text{out}$ is the von Neumann entropy of the quantum fields on one side of $\Sigma$. The double-prime refers to a second functional derivative (with a Dirac delta-function stripped off), with respect to deformations of $\Sigma$ in the direction of $k^a$ at $p$.
The QNEC, Eq.~(\ref{eq-qnec}), follows immediately from the QFC by combining Eqs.~(\ref{eq-sg}), (\ref{eq-raych}), and~(\ref{eq-qfc}), and taking the limit as $G\to 0$. By construction, the initial classical expansion and shear vanish at $\Sigma$. By Eq.~(\ref{eq-raych}), the expansion will be $O(G)$ and proportional to the stress tensor on the null hypersurface orthogonal to $\Sigma$. Thus the classical piece, in this special setting, is of the same order as the quantum correction arising from the entropy. After cancelling out $G$, this results in Eq.~(\ref{eq-qnec}).
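Schematically, writing the generalized entropy as $S_\text{gen}=A/4G\hbar+S_\text{out}$ (cf.\ Eq.~(\ref{eq-sg})), the quantum expansion is $\Theta=\theta+4G\hbar\,S_\text{out}'$ (with $S_\text{out}'$ understood per unit area). At a surface where $\theta$ and $\varsigma_{ab}$ vanish, Eq.~(\ref{eq-raych}) gives $\theta'=-8\pi G\, T_{ab}k^ak^b$, so Eq.~(\ref{eq-qfc}) becomes $-8\pi G\,T_{ab}k^ak^b+4G\hbar\,S_\text{out}''\leq 0$; dividing by $8\pi G$ and letting $G\to 0$ yields Eq.~(\ref{eq-qnec}).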
The QNEC is the first lower bound on the local stress tensor in quantum field theory. It implies the Averaged Null Energy Condition~\cite{WalYur91} on the integrated stress tensor, so the QNEC is stronger than the ANEC. Operationally, the QNEC can be thought of in terms of an observer who examines a signal or region of space up to a boundary. We can ask how the information gained by the observer would increase if they deformed the boundary to include a greater region. The QNEC tells us that the energy density of the signal limits the second derivative of the information content under this deformation.
We can integrate the QNEC twice, to obtain a precise version of Bekenstein's bound~\cite{Bek81}. Thus the QNEC is a stronger, differential version of the bound. It captures in the most local way possible the relation between energy, spacetime, and information that underlies both the Bekenstein bound and the holographic entropy bounds.\footnote{For interacting theories there is a precise sense in which this relation is actually an equality, $ T_{ab} k^a k^b = (\hbar/2\pi) S_\text{out}''$, namely when $\Sigma$ is deformed twice at the same point~\cite{LeiLev18}. The inequality in Eq.~(\ref{eq-qnec}) thus arises entirely from the off-diagonal contributions to the second functional derivative, when $\Sigma$ is deformed at two different points.}
The QNEC is purely a statement about quantum field theory, and it can be proven in that setting. This was done for free theories in Ref.~\cite{BouFis15b}, for interacting theories with a holographic dual in Ref.~\cite{KoeLei15}, and for general interacting theories in Ref.~\cite{BalFau17}. The field theory proofs are rather elaborate, totaling around 100 pages. By contrast, as noted earlier, it is trivial to obtain the QNEC from a conjecture about semiclassical gravity, the QFC.
This takes us back to the controversy surrounding the Bekenstein bound (Sec.~\ref{sec-controversy}). Unruh and Wald~\cite{UnrWal83} were right when they argued that quantum field theory already has properties sufficient to protect the GSL; in this sense, the Bekenstein bound was not ``needed.'' But the properties of QFT that guarantee this are somewhat obscure and conspiratorial from the QFT viewpoint. They can be much more directly understood as being required by the compatibility of field theory with gravity. This contrast is far more apparent today, as quantum gravity considerations have yielded more precise, provable new results in QFT, such as Eq.~(\ref{eq-qnec}) and~\cite{Cas08,Wal11,BouCas14a,BouCas14b}.
This exposes a major new facet of the unity of Nature, one that Bekenstein first glimpsed. Quantum gravity not only governs extreme regimes such as the beginning of the universe or the singularity inside a black hole. It also governs the information content of relativistic quantum fields at low energies. It will be important to probe this regime directly, and we will not have to wait for a Planck collider to do it.
\noindent {\bf Acknowledgments}~~~ This is a contribution to the {\em Jacob Bekenstein Memorial Volume}, to be published by World Scientific. I thank Slava Mukhanov and Eliezer Rabinovici for their encouragement and their patience. I am grateful to Stefan Leichenauer, Bob Wald and Aron Wall for detailed comments. My research is supported in part by the Berkeley Center for Theoretical Physics, by the National Science Foundation (award number PHY-1521446), and by the U.S.\ Department of Energy under contract DE-AC02-05CH11231 and award {DE-SC0019380}.
|
\section{Introduction}
The melting-like transition in finite clusters consisting of a small number of
atoms
is of fundamental interest as clusters are
often produced in a disordered ``liquid'' state,\cite{Ell95} and it is also
relevant to applications of clusters. For example, the catalytic activity
of small platinum clusters depends critically on their melting temperatures.
\cite{Wan98} Recent experimental advances reveal some
details of the melting-like transition but, at the same time, show new and
interesting features. Martin \cite{Mar96} determined the
cluster size dependence of the melting temperature T$_m$ of large
sodium clusters, composed of thousands of atoms, by observing the vanishing of
the atomic shell structure in the mass spectra upon heating. It was
concluded that T$_m$ grows with cluster size, but the results did not
extrapolate yet to the T$_m$ of the bulk. Peters {\em et al.} \cite{Pet98}
performed X-ray diffraction experiments on large (50 nm)
Pb clusters and observed the
occurrence of surface melting before homogeneous melting. Electron diffraction
\cite{Mai99} may also help in detecting a surface melting stage.
\cite{Wan98} Haberland and coworkers \cite{Sch97} have studied the
variation with temperature of the photofragmentation spectra of Na$_N$
(N=50--200) clusters, and have deduced the melting temperatures of
the clusters. They find that for some cluster sizes the
melting temperature is a local maximum not in exact correspondence with
either electronic or atomic shell closing numbers, but
bracketed by the two, suggesting that both effects are relevant to
the melting process.
A number of computer simulations of melting in small metallic and nonmetallic
clusters have been reported, the majority of which employed phenomenological
interatomic potentials.\cite{Jel86,Cle98,Cal99} The use of such parameterized
potentials allows the consideration of long dynamical trajectories for large
clusters.\cite{Cle98,Cal99} {\em Ab initio} methods,\cite{Car85}
which have also been
used, accurately treat the electronic structure of the cluster,
but are much more expensive computationally and are usually
restricted to the study of small clusters for short dynamical trajectories.
\cite{Rot91} Recently, Rytk\"onen {\em et al.} \cite{Ryt98} have performed
{\em ab initio} molecular dynamics (aiMD) simulations of the melting of a
sodium cluster with 40 atoms, but such a large cluster required the
use of a fast heating rate. These aiMD treatments use the Kohn-Sham
(KS) form\cite{Koh65} of density functional theory (DFT), and
orthogonalization of the one-electron KS orbitals is the limiting step in
their performance. However, DFT shows that the total energy of
the electronic system can be expressed in terms of just the
electronic density,\cite{Hoh64} and orbital-free (OF) versions of the aiMD
technique based
on the electron density have been developed and employed, both in solid
state \cite{Pea93,Gov99} and cluster \cite{Sha94,Gov95,Bla97,Agu99}
applications. These OF methods scale linearly with the system size allowing
the study of larger clusters for longer simulation times
than typical aiMD simulations. However, quantum shell effects
are neglected, so that features associated with electronic shell
closings are not reproduced.
Previously,\cite{Agu99} we have used the orbital-free molecular
dynamics (OFMD) method to study the melting process in small sodium clusters,
Na$_8$ and Na$_{20}$, clusters outside the range covered by Haberland's
photofragmentation
experiments.\cite{Sch97} Here, we report constant energy OFMD
simulations in a study of the melting-like transition in larger clusters,
Na$_{55}$, Na$_{92}$ and Na$_{142}$, which are within the size range
covered in those experiments, and for which a full {\em ab initio} treatment
of their thermal properties would be impractical. Even for the OFMD method
those large clusters represent a very substantial computational effort.
The aim of our work is
to study the mechanisms by which the melting-like transition proceeds in
these large clusters.
In the next section we briefly present some technical details of the method.
The results are presented and discussed in section III and, finally,
section IV summarizes our main conclusions.
\section{Theory}
The orbital-free molecular dynamics method is a Car-Parrinello total
energy scheme\cite{Car85}
which uses an explicit kinetic-energy functional of the
electron density, and has the electron
density as the dynamical variable, as opposed to the KS single particle
wavefunctions. In contrast to simulations which use empirical interatomic
potentials, the detailed electronic structure and the electronic contribution
to the energy and the forces on the ions are recalculated efficiently every
atomic time-step. The main features of the energy functional and
the calculational scheme have been described at length in previous work,
\cite{Pea93,Sha94,Bla97,Agu99} and details of our method are as
described by Aguado et al.\cite{Agu99} In brief, the electronic kinetic
energy functional of the electron density, $n(\vec r)$, corresponds to
the gradient expansion around the homogeneous limit through second order
\cite{Hoh64,Mar83,Yan86,Per92}
\begin{equation}
T_s[n] =
T^{TF}[n] + {\frac{1}{9}} T^W[n],
\end{equation}
where the first term is the Thomas-Fermi functional
\begin{equation}
T^{TF}[n] = \frac{3}{10}(3\pi^2)^{2/3}\int n(\vec r)^{5/3}d\vec r,
\end{equation}
and the second is the lowest order gradient correction, where T$^W$,
the von Weizs\"acker term, is given by
\begin{equation}
T^{W}[n] = \frac{1}{8}\int \frac{\mid \nabla n(\vec r) \mid^2}{n(\vec r)}d\vec r.
\end{equation}
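As an illustration only (not the production code used in this work; the density array, grid spacing, and units are assumptions made for the sketch), the functional of Eqs.~(1)--(3) can be evaluated for a density sampled on a uniform cubic grid as follows:
\begin{verbatim}
# Sketch of Eqs. (1)-(3): T_s[n] = T_TF[n] + (1/9) T_W[n]
# for a density n(r) on a uniform cubic grid of spacing h (atomic units).
import numpy as np

def kinetic_energy_ofdft(n, h):
    # Thomas-Fermi term, Eq. (2)
    c_tf = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)
    t_tf = c_tf * np.sum(n ** (5.0 / 3.0)) * h**3
    # von Weizsacker term, Eq. (3)
    gx, gy, gz = np.gradient(n, h)
    t_w = 0.125 * np.sum((gx**2 + gy**2 + gz**2) / np.maximum(n, 1e-12)) * h**3
    # Second-order gradient expansion, Eq. (1)
    return t_tf + t_w / 9.0
\end{verbatim}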
The local density approximation is used for exchange and correlation.\cite
{Per81,Cep80} In the external field acting on the electrons,
$V_{ext}(\vec r) = \sum_n v(\vec r -\vec R_n)$, we take $v$ to be
the local pseudopotential of Fiolhais {\em et al.}, \cite{Fio95} which
reproduces well the properties of bulk sodium and has been shown to have good
transferability to sodium clusters.\cite{Nog96}
The cluster is placed in a unit cell of a cubic superlattice,
and the set of plane waves periodic in the superlattice
is used as a basis set to expand the valence density.
Following Car and Parrinello,\cite{Car85} the coefficients
of that expansion are regarded as generalized coordinates of a set of
fictitious classical particles, and the corresponding Lagrange equations of
motion for the ions and the electron density distribution are solved as
described in Ref. \onlinecite{Agu99}.
The calculations for Na$_{92}$ and Na$_{142}$ used a supercell of edge
71 a.u. and the energy cut-off in the plane wave expansion of the
density was 8 Ryd. For Na$_{55}$, the cell edge was 64 a.u. and the
energy cut-off 10 Ryd. In all cases, a 64$\times$64$\times$64 grid was used.
Previous tests \cite{Agu99} indicate that the cut-offs used give good
convergence of bond lengths and binding energies. The fictitious
mass associated with the electron density coefficients
ranged between 1.0$\times$10$^8$ and 3.3$\times$10$^8$ a.u.,
and the equations of motion were integrated using the Verlet
algorithm \cite{Ver65} for both electrons and ions with a time step
ranging from $\Delta$t = 0.73 $\times$ 10$^{-15}$ sec. for the simulations
performed at the lowest temperatures, to $\Delta$t = 0.34 $\times$
10$^{-15}$ sec. for those at the highest ones. These choices resulted in
a conservation of the total energy better than 0.1 \%.
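For concreteness, a single Verlet step for the ionic positions can be sketched as follows (a schematic illustration only; the force array is assumed to be supplied by the orbital-free electronic part of the calculation, and the previous-step positions are stored by the caller):
\begin{verbatim}
# Schematic position-Verlet step:
#   r(t+dt) = 2 r(t) - r(t-dt) + F(t)/m * dt^2
import numpy as np

def verlet_step(r, r_prev, forces, masses, dt):
    a = forces / masses[:, None]          # accelerations, shape (N_atoms, 3)
    r_next = 2.0 * r - r_prev + a * dt**2
    v = (r_next - r_prev) / (2.0 * dt)    # central-difference velocities
    return r_next, v
\end{verbatim}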
The first step of the simulations
was the determination of low temperature isomers for each of the three cluster
sizes. For such large clusters it is very difficult to find
the global minimum because the number of different local
minima increases exponentially with the number of atoms in the
cluster. Instead, one has to adopt structures that are likely to have the
main characteristics of the ground state.
We have assumed icosahedral growth. Thus, for Na$_{142}$ we removed
five atoms from the surface of a 147 atom three-shell perfect icosahedron.
For Na$_{92}$, we constructed an icosahedral isomer by following the
growing sequence described by Montejano-Carrizales {\em et al.} \cite{Mon96}
and for Na$_{55}$ we took a perfect two-shell icosahedron.
We have also used dynamical simulated annealing,\cite{Car85} to generate
low temperature isomers, but this procedure always led to amorphous structures
for Na$_{92}$ and Na$_{142}$, and to a nearly icosahedral structure for
Na$_{55}$.
Several molecular dynamics simulation runs at different constant energies
were performed in order to obtain the caloric curve for each icosahedral
isomer.
The initial positions of the atoms for the first run were taken by slightly
deforming the equilibrium low temperature geometry of the isomer.
The final configuration of each run served as the starting geometry for the
next run at a different energy. The initial velocities for every new run were
obtained by scaling the final velocities of the preceding run. The total
simulation times varied between 8 and 18 ps for each run at constant energy.
A number of indicators to locate the melting-like transition were employed.
Namely, the specific heat defined by \cite{Sug91}
\begin{equation}
C_v = [N - N(1-\frac{2}{3N-6})<E_{kin}>_t ~ <E_{kin}^{-1}>_t]^{-1},
\end{equation}
where N is the number of atoms and $<>_t$ indicates the average along a
trajectory; the diffusion coefficient,
\begin{equation}
D = \frac{1}{6}\frac{d}{dt}<r^2(t)>,
\end{equation}
which is obtained from the long time
behaviour of the mean square displacement
$<r^2(t)> = \frac{1}{Nn_t}\sum_{j=1}^{n_t}\sum_{i=1}^N [\vec R_i(t_{0_j} + t) -
\vec R_i(t_{0_j})]^2$, where $n_t$ is the number of time origins, $t_{0_j}$,
considered along a trajectory;
the time evolution of the distance between each atom
and the center of mass of the cluster
\begin{equation}
r_i(t) = \mid \vec R_i(t) - \vec R_{cm}(t)\mid,
\end{equation}
and finally, the radial atomic density,
averaged over a whole dynamical trajectory,
\begin{equation}
\rho (r) = \frac{dN_{at}(r)/dr}{4\pi r^2},
\end{equation}
where $dN_{at}(r)$ is the number of
atoms at distances from the center of mass between r and r + dr.
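For reference, the following sketch (array shapes and the sampling of time origins are simplifying assumptions, not the analysis code used here) shows how the indicators of Eqs.~(4), (5) and (7) can be evaluated from a stored trajectory:
\begin{verbatim}
import numpy as np

def specific_heat(e_kin, n_atoms):
    # Eq. (4): specific heat from the ionic kinetic-energy time series
    n = n_atoms
    return 1.0 / (n - n * (1.0 - 2.0 / (3.0 * n - 6.0))
                  * e_kin.mean() * (1.0 / e_kin).mean())

def diffusion_coefficient(positions, dt):
    # Eq. (5): D from the long-time slope of <r^2(t)>; one time origin only
    disp = positions - positions[0]              # (n_steps, n_atoms, 3)
    msd = (disp**2).sum(axis=2).mean(axis=1)
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t[len(t)//2:], msd[len(t)//2:], 1)[0]
    return slope / 6.0

def radial_density(positions, com, dr, r_max):
    # Eq. (7): time-averaged atomic density vs. distance to the center of mass
    r = np.linalg.norm(positions - com[:, None, :], axis=2).ravel()
    counts, edges = np.histogram(r, bins=np.arange(0.0, r_max, dr))
    shell_vol = 4.0 * np.pi * np.maximum(edges[:-1], 0.5 * dr)**2 * dr
    return counts / (shell_vol * positions.shape[0])
\end{verbatim}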
\section{Results}
The lowest energy structure of sodium clusters of medium size is not known.
DFT calculations for Na$_{55}$ performed by K\"ummel {\em et al.}\cite{Kum99}
using an approximate structural model (CAPS model, where the total
pseudopotential of the ionic skeleton is cylindrically averaged)\cite{Mon95}
give a structure close to icosahedral. Near-threshold photoionization mass
spectrometry experiments suggest icosahedral structures for large sodium
clusters with more than 1400 atoms,\cite{Mar91} so incomplete icosahedral
structures are plausible candidates for Na$_{92}$ and Na$_{142}$. For
this reason we have adopted for
Na$_{142}$ an isomer obtained by removing five atoms from a
perfect three-shell icosahedron. The icosahedral growing sequence in nickel
clusters has been studied by Montejano-Carrizales {\em et al.},\cite{Mon96}
who have shown that the 12 vertices of the outermost shell are the
last sites to be occupied. Assuming the same growth sequence for sodium
clusters, we have removed five atoms from the vertex positions of Na$_{147}$,
testing all possibilities, and have then relaxed the resulting
structures. In the most stable structure thus formed, the five vacancies
form a pentagon. For Na$_{92}$, we have adopted the umbrella growing model
of Montejano-Carrizales {\em et al.} \cite{Mon96} The resulting structure
corresponds to three complete umbrellas capping a
Na$_{55}$ icosahedron. Low-temperature dynamical
trajectories verify that these structures are indeed stable isomers of
Na$_{92}$ and Na$_{142}$. The icosahedral isomers are more stable than the
lowest energy amorphous isomers which were found by simulated annealing
(0.017 eV/atom and 0.020 eV/atom for Na$_{92}$ and Na$_{142}$ respectively).
Calvo and Spiegelmann have studied sodium
clusters in the same size range, using pair potential and tight-binding (TB)
calculations, \cite{Cal99} and have also predicted
icosahedral structures for Na$_{55}$,
Na$_{93}$, Na$_{139}$ and Na$_{147}$.
For each icosahedral cluster we have calculated the so-called caloric curve
which is the internal temperature as a function of the total energy, where
the internal temperature is defined as the average of the ionic kinetic
energy \cite{Sug91}.
A thermal phase transition
is indicated in the caloric curve by a change of slope, the slope being the
specific heat; the height of the step gives an estimate of the latent heat
of fusion.
However, melting processes are more easily recognised as peaks in the specific
heat as a function of temperature. These have been
calculated directly from eq. (4)
and are shown together with the caloric curves
in figures 1--3. The specific heat peaks occur at the same
temperatures as the slope changes of the caloric curve, giving
us confidence in the convergence of our results as the two quantities have
been obtained in different ways.
The specific heat curve for Na$_{142}$ (fig. 1) displays two main
peaks at T$_s \approx$ 240 K and T$_m \approx$ 270 K, suggesting a two
step melting process, but these are so close
that only one slope change in the caloric curve can be distinguished.
Our analysis below shows that homogeneous melting occurs at T$_m
\approx$ 270 K
in excellent agreement with the experiments of
Haberland and coworkers,\cite{Sch97} who give an approximate value of 280 K.
The latent heat of fusion estimated from the step at T$_m$ in the caloric curve
is q$_m \approx$ 15 meV/atom, again in good agreement
with the experimental value of $\sim$ 14 meV/atom. The
premelting stage at T=T$_s$ is not detected in the experiments, but our results
are not inconsistent with this because the two calculated
peaks in the specific heat are close together and the height of the first
peak is much smaller than that of the second; consequently they could be difficult to
distinguish experimentally. Calvo and Spiegelmann \cite{Cal99}
have performed Monte Carlo (MC) simulations using a semiempirical
many-atom potential, and the lowest-energy isomer they found for Na$_{139}$ was
also an incomplete three-shell icosahedron, in this case with 8 surface vacancies.
They also report two close peaks in the specific heat curve indicating a
two-step melting process, with T$_s \approx$ 210 K and T$_m \approx$ 230 K.
They concluded that these two temperatures become closer as the cluster size
increases, so that for clusters with more than about 100 atoms
there is effectively just one peak in the specific heat and a single-step melting.
Tight binding (TB) molecular dynamics calculations
were performed by the same authors.\cite{Cal99} Although the
melting temperatures were found to be different from
those obtained with the semiempirical potentials (TB tends to overestimate the
experimental values, while empirical potentials tend to underestimate them), the
qualitative picture of melting in two close steps was the same.
The results for Na$_{92}$ are shown in fig. 2.
Two-step melting is again observed, with a small
prepeak in the specific heat at
T$_s \approx$ 130 K and a large peak, corresponding to
homogeneous melting, at T$_m \approx$ 240 K. In this
case T$_s$ and T$_m$ are well separated, but the first peak is much smaller
than the second, which could again account for the absence of the prepeak in
the experiments. The calculated temperature and latent heat for the homogeneous
melting stage, T$_m \approx$ 240 K and q$_m \approx$ 8 meV/atom, are again
in excellent agreement with the experimental values,\cite{Sch97} 250 K and
7 meV/atom respectively. Calvo and Spiegelmann\cite{Cal99} arrive at similar
conclusions based on MC
simulations for Na$_{93}$ using phenomenological potentials:
a small bump near
100 K and a main peak near 180 K. Their TB simulations give values for those two
temperatures roughly 100 K higher.
The experiments \cite{Sch97} indicate a substantial enhancement of the
melting temperature in the size region around N=55 atoms. The reported
melting temperature of Na$^+_{55}$ is 325 K, surprisingly higher than that
of Na$_{142}^+$, which is a local maximum in the size region of the
third icosahedral shell closing. Our simulations do not reproduce this
enhancement of T$_m$ for Na$_{55}$
and predict that this cluster melts in a single stage at
T$_m \approx$ 190 K (fig. 3), a result found also by Calvo and Spiegelmann.
\cite{Cal99} The OFMD method does not account for electronic quantum-shell
effects, and full KS calculations may be needed to clarify this
discrepancy, although it is not clear {\em a priori} how
electronic shell effects could
shift the value of T$_m$ by such a large amount. Of course, another possibility
is that the icosahedron is not the ground state isomer.
However, K\"ummel {\em et
al}\cite{Kum99} have recently found that the experimental photoabsorption
spectrum of Na$_{55}^+$ is best reproduced with a slightly oblate isomer which
is close to icosahedral. We have also investigated a bcc-like growing
sequence finding that bcc structures are less stable than icosahedral ones
for all cluster sizes studied. Also, we studied the melting behavior
of a Na$_{55}$ isomer with bcc structure and did not find an enhanced melting
temperature for it either. In summary, the discrepancy between experiment and
theory for Na$_{55}$ deserves further attention.
Various quantities have been analyzed in order to investigate the nature
of the transitions at T$_s$ and T$_m$.
The short-time averages (sta) of the distances between each atom and the
center of mass of the cluster, $<r_i(t)>_{sta}$, have been
calculated, and the cluster evolution during the trajectories has been
followed visually using computer graphics. The $<r_i(t)>_{sta}$ curves
for Na$_{142}$ are
presented in Figs. 4-6 for three representative temperatures. At low
temperatures (Fig. 4) the values of $<r_i(t)>_{sta}$ are almost independent
of time.
The movies show that the clusters are solid, the atoms
merely vibrating around their equilibrium positions. Curve crossings are due to
oscillatory motion and slight structural relaxations rather than
diffusive motion. At this low temperature quasidegenerate groups which are
characteristic of the symmetry can be distinguished: one line near the centre of mass
of the cluster identifies the central atom (its position does not exactly
coincide with the center of mass because of the location of the five surface
vacancies); 12 lines correspond to the first icosahedral shell; another 42
complete the second shell, within which we can distinguish the 12 vertex atoms
from the rest because their distances to the centre of mass are slightly
larger; finally, 87 lines describe the external shell, where again we
can distinguish the 7 vertex atoms from the rest.
The radial atomic density distributions with
respect to the cluster center, $\rho (r)$, are shown for Na$_{142}$
in Fig. 7.
At the lowest temperature, T=30 K, the atoms in the icosahedral isomer
are distributed in three well separated shells, a surface shell and two inner
shells; as discussed above, subshells form in the second and third shells.
The shell structure is still present at T=130 K.
Figures 5 and 7 show that at T=160K the
atomic shells of the Na$_{142}$ cluster are still well defined, but the
movies reveal isomerization transitions, similar to those found
at the beginning of the melting-like transition of Na$_8$ and Na$_{20}$,
\cite{Agu99} with no true diffusion.
These isomerizations involve the motion of vacancies
in the outer shell, in such a way that different
isomers are visited which preserve the icosahedral structure.
The onset of this motion is gradual and does not lead to features in the
specific heat, although it is detected in the temperature evolution of the
diffusion coefficient (see Fig. 8 and discussion below).
The true surface melting stage does not develop in the icosahedral
isomer until a temperature of T$_s\approx$ 240 K is reached.
Fig. 6 shows the time evolution of $<r_i>_{sta}$ for Na$_{142}$ at a temperature
T= 361 K at which the cluster is liquid with all the atoms diffusing throughout
the cluster. Some specific cases of atoms that
at the beginning of the simulation are near (far from) the center of mass of
the cluster and end in a position far from (near) the center of mass of the
cluster are shown in boldface. The atomic
density distribution at 280 K,
a temperature just above the melting point,
is nearly uniform across the cluster, a radial expansion
of the cluster by about 5 bohr units
is evident,
and the surface is more diffuse.
The $<r_i(t)>_{sta}$ curves for Na$_{92}$ at low temperature are
qualitatively similar to those of Na$_{142}$.
Na$_{92}$ shows surface melting at T$_s\approx$ 130 K.
This temperature is in the range where the isomerization processes in Na$_{142}$
set in, but the larger number of vacancies in the surface shell of Na$_{92}$
allows for more rapid surface diffusion and these processes give rise to a distinct
peak in the specific heat.
Na$_{55}$ is a perfect two-shell icosahedron, so
surface atoms have no empty sites available to move to, and diffusion within
an atomic shell is almost
as difficult as diffusion across different shells. When the
surface atoms have enough energy to exchange positions with one another
they can as easily migrate throughout the whole cluster, and melting
proceeds in a single stage at 190 K. Calvo and Spiegelmann\cite{Cal99} have
suggested that this one-step melting is
associated with
a large energy gap between the ground state icosahedral structure
and the closest low-lying isomers, but this cannot be a general result
for perfect icosahedral metallic clusters, as the details of melting
are material dependent. For example, a surface melting stage has been observed
in simulations of icosahedral Ni$_{55}$.\cite{Guv93}
The variation of the diffusion coefficient with
temperature is shown in Fig. 8 for Na$_{142}$.
At temperatures less than about 140K, D is close to zero,
indicating only an oscillatory motion of the atoms.
For temperatures between 140K and T$_s$,
the diffusion coefficient increases indicating that
the atoms in the cluster are not undergoing simple vibrational motion;
the atomic motions are, nevertheless, of the special kind discussed above
that preserve the icosahedral structure. The slope of D(T) increases
appreciably at T$_s$ when surface melting occurs, but there is no noticeable
feature when the cluster finally melts at T$_m$.
The features of D(T) for Na$_{92}$, which are not shown here, are
very similar: D(T) is very sensitive to the surface melting
stage, where appreciable diffusive motion begins, and the homogeneous melting
transition is masked by that effect. We conclude that
the D(T) curve is a good indicator of homogeneous melting only in those cases
where the surface melting stage is absent, as for example in Na$_{55}$.
Our results suggest that the melting transition in
large icosahedral sodium clusters occurs in a smaller temperature range
than for small clusters
such as Na$_8$ or Na$_{20}$,\cite{Agu99} at least near an icosahedral shell
closing. Furthermore, the size of any prepeak
diminishes with respect to the main homogeneous melting peak as the cluster size
increases, that is as the fraction of atoms that
can take part in premelting decreases. Consequently, a homogeneous melting
temperature can be defined with less ambiguity for large clusters.
These comments apply to the caloric and the specific heat curves, which are the
quantities amenable to experimental measurement. In contrast, microscopic
quantities such as the diffusion coefficient D or the $<r_i(t)>_{sta}$ curves
are very sensitive to any small reorganization in the atomic arrangement, and
it is difficult to determine the melting transition from the variation of
these quantities with temperature.
A helpful structural, as opposed to thermal, indicator of
the melting transition in medium-sized or large clusters is the shape of the
radial atomic density distribution.
The atomic density displays pronounced shell structure
at low temperatures which is smoothed out at intermediate temperatures
where the vacancy diffusion and/or surface melting mechanisms are present.
Above T$_m$ the density is flat.
In figure 9 we compare the calculated values of the melting temperature
with the experimental values. Our earlier results for
Na$_8$ and Na$_{20}$ \cite{Agu99} are also included, although for such small
sizes there is some ambiguity in defining a melting temperature.
There is excellent agreement with the experimental
results for Na$_{92}$ and Na$_{142}$.\cite{Sch97} Measurements of
the temperature dependence of the photoabsorption cross sections for Na$_n^+$
(n=4--16) have recently been reported.\cite{Sch99,Hab99}
Although the spectra do not show evidence of a sharp melting
transition, some encouraging comparison
between theory and experiment can be made. The
experimental spectra do not change appreciably
upon increasing the cluster temperature, until at T=105 K (the value given as
the experimental melting temperature of Na$_8$ in Fig. 9) the spectra
begin to evolve in a continuous way. In our study of the melting behaviour of
Na$_8$ \cite{Agu99} we found a broad transition starting at T=110 K and
continuing until T=220 K,
at which point the ``liquid'' state was fully developed.
This may explain the absence of abrupt changes in the experimental
photoabsorption spectrum
with temperature. In any case, we feel that the good agreement between theory
and experiment may extend to the small sizes.
However, our method is not expected
to give accurate results if oscillations in the melting temperature with
cluster size arise as a consequence of electronic shell effects, which is not
yet known. The discrepancy
for Na$_{55}$ remains intriguing. In this regard it is noteworthy
that our calculated melting temperatures for the three large clusters fit
precisely the expected large N behaviour,
T$_m$(Na$_N$)=T$_m$(bulk) + C/N$^{2/3}$, where $C$ is a constant,
and yield as a bulk melting
temperature T$_m$(bulk)=350 K, which is close to the observed value of 371 K.
A similar extrapolation to the bulk melting temperature
is not evident in the experimental data.
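As a simple check, the extrapolation can be reproduced with a linear least-squares fit (a sketch using the rounded melting temperatures quoted above, so the intercept differs slightly from the 350 K obtained from the unrounded values):
\begin{verbatim}
# Fit T_m(N) = T_m(bulk) + C * N^(-2/3) to the calculated melting points
import numpy as np

N  = np.array([55.0, 92.0, 142.0])
Tm = np.array([190.0, 240.0, 270.0])      # rounded values from the text
A  = np.column_stack([np.ones_like(N), N ** (-2.0 / 3.0)])
(Tm_bulk, C), *_ = np.linalg.lstsq(A, Tm, rcond=None)
print(Tm_bulk, C)   # the intercept estimates the bulk melting temperature
\end{verbatim}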
As it is well known that the specific isomer used to start the heating dynamics
can affect the details of the melting transition, \cite{Bon97} we have also
studied the melting of amorphous Na$_{92}$ and Na$_{142}$ clusters that we
obtained by a simulated annealing. For the amorphous Na$_{142}$ cluster
bulk melting occurred at the same temperature, T=270 K, as for the
icosahedral cluster, while surface melting took place at a much lower
temperature, T=130 K. However, melting of the amorphous Na$_{92}$ cluster
took place over a wide temperature range, and no sharp transitions were detected.
We attach little significance to these results as the initial structures and
the melting behaviour must depend on the details of the annealing,
and the subsequent heating.
\section{Discussion and conclusions}
A few comments regarding the quality of the simulations
are perhaps in order here.
The orbital-free representation of the atomic interactions, although much more
efficient than the more accurate KS treatments, is still
substantially more expensive
computationally than a simulation using
phenomenological many-body potentials. Such potentials contain
a number of parameters that are usually chosen by fitting some bulk and/or
molecular properties. In contrast
our model is free of external parameters, although there are
approximations in the kinetic and
exchange-correlation functionals.
The orbital-free scheme accounts, albeit approximately, for the effects of the
detailed electronic distribution on the total energy and the forces on the ions.
We feel that this is particularly important in metallic clusters for which a
large proportion of atoms are on the surface and experience a very different
electronic environment than an atom in the interior. Furthermore, the
adjustment of the electronic structure and consequently the energy and forces
to rearrangements of the ions is also taken into account.
But the price to be paid
for the more accurate description of the interactions is a less complete
statistical sampling of the phase space. The simulation times are substantially
shorter than those that can be achieved in phenomenological simulations.
Longer simulations would be needed in order to fully converge the heights
of the specific heat peaks, or in order to observe a van der Waals loop in the
caloric curves, to mention two examples. But we expect that the
locations of the various transitions are reliable. All the indicators we have
used, both thermal and structural ones, are in essential agreement on
the temperature at which the transitions start.
As we discussed in a previous paper,
\cite{Agu99} longer trajectories may induce just a slight lowering in the
transition temperatures.
The melting-like transitions of Na$_{142}$, Na$_{92}$, and Na$_{55}$ have been
investigated by applying an orbital-free, density-functional molecular
dynamics method. The computational effort which is required is modest in
comparison with the traditional Car-Parrinello Molecular Dynamics technique
based on Kohn-Sham orbitals, which would be very costly for clusters of this
size. Specifically, the computational effort to update
the electronic system scales linearly with the system size N, in contrast to
the N$^3$ scaling of orbital-based methods. This saving allows the study of
large clusters. However, the price to pay
is an approximate electron kinetic energy.
An icosahedral isomer of Na$_{142}$ melts in two steps as evidenced
by the thermal indicators.
Nevertheless, there are isomerization
transitions involving surface defects at a temperature as low as 130 K,
that preserve the icosahedral structure and do not give rise to any
pronounced feature in the caloric curve.
The transition
at T$_s \approx$ 240 K from that isomerization regime
to a phase in which the surface atoms acquire a substantial diffusive motion
is best described as surface melting. This is followed at T$_m
\approx$ 270 K by homogeneous melting.
For Na$_{92}$, there is a minor peak in C$_v$ at T$_s \approx$130K which
we associate with surface melting. The smaller value of T$_s$, for this
cluster compared with Na$_{142}$, is due to the less ordered surface.
Na$_{55}$, being a perfect two-shell icosahedron
with no surface defects, melts in a single stage at 190 K. In all cases, for
T$>$T$_m$ the atoms are able to diffuse throughout the cluster volume.
Both the calculated T$_m$ at which homogeneous melting occurs and the estimated
latent heat of fusion q$_m$ are in excellent agreement with the experimental
results of Haberland and coworkers for Na$_{142}$ and Na$_{92}$; our earlier
results on the melting of Na$_8$ \cite{Agu99} are also consistent with the
variation of the measured optical spectrum with temperature.
A serious discrepancy between theory and experiment remains for Na$_{55}$.
We have found that structural quantities obtained from the simulations which
are very useful in the study of melting in small clusters,\cite{Agu99} such
as the diffusion coefficient, are not, in the case of the larger clusters
studied here, efficient indicators of homogeneous melting, which is better
located with thermal indicators. A better structural indicator is the
evolution with temperature of the average radial ion density.
This quantity flattens when homogeneous melting occurs.
$\;$
$\;$
$\;$
{\bf ACKNOWLEDGEMENTS:} We would like to thank H. Haberland, S. K\"ummel and F. Calvo
for sending us preprints of their respective works prior to publication.
This work has been supported by DGES
(Grants PB95-0720-C02-01 and PB98-D345), NATO(Grant CRG.961128)
and Junta de Castilla y Le\'on (VA28/99).
A. Aguado acknowledges a graduate fellowship from Junta de Castilla y Le\'on.
M. J. Stott acknowledges the support of the NSERC of Canada and an Iberdrola
visiting professorship at the University of Valladolid.
{\bf Captions of Figures.}
{\bf Figure 1} Caloric and specific heat curves
of Na$_{142}$, taking the
internal cluster temperature as the independent variable.
The deviation around the mean
temperature is smaller than the size of the circles.
{\bf Figure 2} Caloric and specific heat curves
of Na$_{92}$, taking the
internal cluster temperature as the independent variable.
The deviation around the mean
temperature is smaller than the size of the circles.
{\bf Figure 3} Caloric and specific heat curves
of Na$_{55}$, taking the
internal cluster temperature as the independent variable.
The deviation around the mean
temperature is smaller than the size of the circles.
{\bf Figure 4} Short-time averaged distances $<r_i(t)>_{sta}$ between each atom
and the center of mass in Na$_{142}$, as functions of time for
the icosahedral isomer at T= 30 K.
{\bf Figure 5} Short-time averaged distances $<r_i(t)>_{sta}$ between each atom
and the center of mass in Na$_{142}$,
as functions of time for the icosahedral
isomer at T= 160 K. The bold lines follow the
evolution of a particular atom
in the surface shell and another in the outermost core
shell.
{\bf Figure 6} Short-time averaged distances $<r_i(t)>_{sta}$ between each atom
and the center of mass in Na$_{142}$, as functions of time at T= 361 K.
The bold lines are to guide the eye in following the diffusive behavior of
specific atoms.
{\bf Figure 7} Time
averaged radial atomic densities of the icosahedral isomer of
Na$_{142}$, at some representative
temperatures.
{\bf Figure 8} Diffusion coefficient as a function of temperature for the
icosahedral isomer of Na$_{142}$.
{\bf Figure 9} Calculated melting temperatures, compared with the
experimental values. The experimental values for the larger cluster
sizes are taken from ref. \onlinecite{Sch97}, while that for the smallest
Na$_8$ cluster is taken from ref. \onlinecite{Sch99} (see text for details).
|
\section{INTRODUCTION}\label{introduction}
\subsection{Reinforcement Learning for Industrial Robotics}
In the near future, we are likely to see robots programming themselves to carry out new industrial tasks. From manufacturing to assembly, a wide range of tasks can be performed faster and with higher accuracy by robot manipulators. Over the past two decades, reinforcement learning in robotics \cite{survey_RL_robotics, survey_PS_robotics} has made rapid progress and enabled robots to learn a wide variety of tasks \cite{interestRobotRL1, interestRobotRL2, interestRobotRL3}. The emergence of self-programming robots might speed up the development of industrial robotic platforms insofar as robots can learn to execute tasks with very high accuracy and precision.
One major step in a robot learning algorithm is the exploration phase, in which random commands are sent to the robot so that it discovers both its environment and the way it responds to the commands it receives. Such commands can result in any possible movement within the reachable workspace. In an industrial context, this unpredictable behavior is dangerous, for instance when a robot has to learn a task jointly with a human worker (e.g. an assembly task). For this reason, it is attractive to work with KUKA LBR iiwa robot manipulators, which are well suited to collaborative tasks: their compliance can be adjusted easily and they can be programmed to stop on contact.
\subsection{Literature Overview}
Reinforcement Learning (RL) and Optimal Feedback Control (OFC) are very similar in their formulation: find a policy that minimizes a certain cost function under a certain dynamics (see section \ref{section2} for more details). Both frameworks can express many challenging robotic tasks. Indeed, a solution to such a problem is both an optimal open-loop trajectory and a feedback controller. If the dynamics is linear and the cost function quadratic, an optimal solution can be computed analytically using Linear-Quadratic-Regulator theory \cite{optimal_control_book}.
When the dynamics is more complex (non-linear), the problem becomes more difficult but can still be solved with the iterative Linear-Quadratic-Regulator algorithm (iLQR or iLQG) \cite{iLQG2}. As its name suggests, this algorithm iteratively fits local linear approximations to the dynamics and computes a locally optimal solution under this linear model. In the context of RL, the dynamics is considered unknown. To deal with this issue, \cite{mitrovic, GPS_unknown_dynamics_simulation} proposed to build the linear model by exploring the environment and performing a linear regression.
In \cite{CDC16}, we recently proposed another method that estimates the cost function in the same way, using exploration and quadratic regression. This way, the model is more precise and converges faster on high-precision tasks, which is the main purpose of our research. Indeed, for some tasks, for example Cartesian positioning, a typical approach \cite{GPS_unknown_dynamics_robot} consists in including the Cartesian position in the state, building a linear model, and then building a quadratic cost from this linear approximation. Such an approach does not really make sense, as this quantity has already been approximated to first order and thus cannot produce a good second-order model for the update.
\subsection{Main contribution and paper organization}
In this paper, we extend the concepts of \cite{CDC16}. Second-order methods have been implemented to compute the trajectory update and thus increase the speed of convergence by reducing the number of iLQG passes required. We also study the influence of different parameters on the speed of convergence. These parameters are compared and chosen using the V-REP software \cite{VREP}. Finally, we propose an experimental validation on the physical device, using the parameters found by simulation. The KUKA LBR iiwa learns a positioning task in Cartesian space using angular position control, without any geometric model provided. This rather simple task makes it easy to measure the accuracy of the policy found.
The paper is organized as follows. The derivation of iLQG with learnt dynamics and cost function is presented in section \ref{section2}. In section \ref{section3}, we search for the best learning parameters by simulating the same learning situation with different parameters using the V-REP simulation software. Experimental validation on the KUKA LBR iiwa is presented in section \ref{section4}, and section \ref{discussion} proposes a discussion of the results and future work.
\section{LEARNING A LOCAL TRAJECTORY WITH HIGH PRECISION}\label{section2}
This section summarizes the method used. First, the derivation of iLQG in the context of unknown dynamics and a learnt cost function is presented. The second-order method used to compute the improved controller is then explained.
\subsection{A few definitions}\label{subsection21}
\begin{figure}[t]
\centering
\includegraphics[width=3.5cm, height=4.62cm]{trajectory_optimization.pdf}
\caption{Definition of a trajectory}
\label{trajectory_definition}
\end{figure}
This section begins with some useful definitions:
\begin{itemize}
\item A \textbf{trajectory} $\tau$ of length $T$ is defined by the repetition $T$ times of the pattern shown in Fig. \ref{trajectory_definition}. Mathematically, it can be denoted by
\begin{displaymath}
\{\mathbf{x}_0, \mathbf{u}_0, \mathbf{x}_1, \mathbf{u}_1, ..., \mathbf{u}_{T-1}, \mathbf{x}_{T}\},
\end{displaymath}
where $\mathbf{x}_t$ and $\mathbf{u}_t$ represent respectively the state and control vectors. In our problem (see sections \ref{section3} and \ref{section4}), the state is the vector of joint positions and the actions are the target joint positions.
\vspace{5pt}
\item The \textbf{cost} and \textbf{dynamics} functions are defined as follows:
\begin{equation}
\label{general_cost}
l_t = L_t(\mathbf{x_t}, \mathbf{u}_t),
\end{equation}
\begin{equation}
\label{general_dynamics}
\mathbf{x}_{t+1} = F_t(\mathbf{x}_t, \mathbf{u}_t).
\end{equation}
$L_t$ outputs the cost and $F_t$ the next state, both with respect to previous state and action.
\vspace{5pt}
\item The \textbf{controller} is the function we want to optimize. For a given state, it needs to output the action with the smallest cost under the dynamics. In our case, it is denoted by $\Pi$ and has the special form of a time-varying linear controller:
\begin{equation}
\label{global_controller}
\mathbf{u}_t = \Pi(\mathbf{x}_t) = K_t \mathbf{x}_t + k_t.
\end{equation}
\end{itemize}
The guiding principle of iLQG is to alternate between the following two steps. From a nominal trajectory, denoted by $\mathbf{\bar{x}}_t$ and $\mathbf{\bar{u}}_t$, $ t \in \{0, ..., T\}$, compute a new locally optimal controller. From a given controller, draw a new nominal trajectory.
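A minimal sketch of the second step is given below (the \texttt{step} function standing for the unknown dynamics, and the array-based controller representation, are assumptions made for illustration):
\begin{verbatim}
import numpy as np

def rollout(x0, K, k, step, T):
    # Draw a nominal trajectory {x_t, u_t} under u_t = K_t x_t + k_t,
    # where step(x, u) queries the (unknown) dynamics and returns x_{t+1}.
    x_bar, u_bar, x = [x0], [], x0
    for t in range(T):
        u = K[t] @ x + k[t]
        x = step(x, u)
        u_bar.append(u)
        x_bar.append(x)
    return np.array(x_bar), np.array(u_bar)
\end{verbatim}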
\subsection{Local approximations of cost and dynamics}\label{subsection22}
As explained earlier, from a given nominal trajectory $\bar{\tau}$, the goal is to update the controller such that the cost is minimized in its neighborhood. In this process, the first step is to compute local approximations of the cost function and dynamics around the nominal trajectory:
\begin{equation}
\label{local_dynamics}
F_t(\mathbf{\bar{x}}_t + \mathbf{\delta x}_t, \mathbf{\bar{u}}_t + \mathbf{\delta u}_t) = \mathbf{\bar{x}}_{t+1} + F_{xu_t} \mathbf{\delta xu}_t,
\end{equation}
\begin{IEEEeqnarray}{l}
\label{local_cost}
L_t(\mathbf{\bar{x}}_t + \mathbf{\delta x}_t, \mathbf{\bar{u}}_t + \mathbf{\delta u}_t) = \bar{l}_t + L_{xu_t} \mathbf{\delta xu}_t + \nonumber\\
\frac{1}{2} \mathbf{\delta xu}_t^T L_{xu,xu_t} \mathbf{\delta xu}_t,
\end{IEEEeqnarray}
where $\mathbf{\delta x}_t$ and $\mathbf{\delta u}_t$ represent variations from the nominal trajectory and $xu_t$ is the vector $[x_t,u_t]^T$. The notation $A_z$ (resp. $A_{z_1,z_2}$) is the Jacobian (resp. Hessian) matrix of $A$ w.r.t. $z$ (resp. $z_1$ and $z_2$).
We propose to compute both approximations following an exploration and regression scheme. The first stage generates a certain number $N$ of random trajectories around the nominal. These trajectories are normally distributed around $\bar{\tau}$ with a certain time-varying covariance $\Sigma_t$. Hence, during the sample generation phase, the controller is stochastic and follows:
\begin{equation}
\label{stochastic_controller}
P(\mathbf{u}_t|\mathbf{x}_t) = \mathcal{N}(K_t \mathbf{x}_t + k_t, \Sigma_t), \hspace{5pt} \forall t \in \{0, ..., T\},
\end{equation}
where $\mathcal{N}$ stands for the normal distribution. From these samples, we perform two regressions: a linear one to get the dynamics and a second-order polynomial one \cite{polyreg} to approximate the cost function.
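As an illustration (a sketch under assumed array shapes, not our exact implementation), the two regressions at a given time step can be written as:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def fit_local_models(xu, x_next, cost):
    # xu: (N, dx+du) sampled state-action pairs at one time step
    dyn = LinearRegression().fit(xu, x_next)     # linear dynamics model
    F_xu = dyn.coef_                             # local Jacobian of F_t
    # full second-order polynomial regression for the cost
    feats = PolynomialFeatures(degree=2, include_bias=True)
    cost_model = LinearRegression(fit_intercept=False)
    cost_model.fit(feats.fit_transform(xu), cost)
    # the linear and quadratic coefficients give L_xu and L_xu,xu
    return F_xu, cost_model
\end{verbatim}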
\subsection{Update the controller}
This section is about updating the controller once we have good Taylor expansions of the dynamics and cost function. In order to get a controller with low cost over the whole trajectory, we need to use the two value functions $Q$ and $V$. $Q^{\Pi}_t$ represents the expected cost until the end of the trajectory when following $\Pi$ after being in state $x_t$ and selecting action $u_t$. $V^{\Pi}_t$ is the same but conditioned only on $x_t$; if $\Pi$ is deterministic, these two functions are exactly the same. The reader can refer to \cite{RL_textbook} for more detailed definitions of these value functions.
First, we need to compute quadratic Taylor expansions of both value functions:
\begin{IEEEeqnarray}{l}
\label{local_Q}
Q^{\Pi}_t(\mathbf{\bar{x}}_t + \mathbf{\delta x}_t, \mathbf{\bar{u}}_t + \mathbf{\delta u}_t) = Q_{0_t} + Q_{xu_t} \mathbf{\delta xu}_t + \nonumber\\
\frac{1}{2} \mathbf{\delta xu}_t^T Q_{xu,xu_t} \mathbf{\delta xu}_t,
\end{IEEEeqnarray}
\begin{equation}
\label{local_V}
V^{\Pi}_t(\mathbf{\bar{x}}_t + \mathbf{\delta x}_t) = V_{0_t} + V_{x_t} \mathbf{\delta x}_t + \frac{1}{2} \mathbf{\delta x}_t^T V_{x,x_t} \mathbf{\delta x}_t.
\end{equation}
In the context of trajectory optimization defined above, \cite{GPS_unknown_dynamics_robot} shows that these two functions can be approximated quadratically by
\begin{IEEEeqnarray}{l}
\label{new_Q}
Q_{xu,xu_t} = L_{xu,xu_t} + F_{xu_t}^T V_{x,x_{t+1}} F_{xu_t}, \nonumber\\
Q_{xu_t} = L_{xu_t} + F_{xu_t}^T V_{x_{t+1}}, \nonumber\\
V_{x,x_t} = Q_{x,x_t} - Q_{u,x_t}^T Q_{u,u_t}^{-1} Q_{u,x_t},\nonumber\\
V_{x_t} = Q_{x_t} - Q_{u,x_t}^T Q_{u,u_t}^{-1} Q_{u_t}.
\end{IEEEeqnarray}
These functions are computed backward for all the time steps, starting with $V_T = l_T(x_T)$, the final cost.
Under these quadratic value functions, and following the derivation in \cite{iLQG}, we can show that the optimal controller for the local dynamics and cost is defined by
\begin{IEEEeqnarray}{l}
\label{new_controller}
K_t=-Q_{u,u_t}^{-1} Q_{u,x_t}, \nonumber\\
k_t=\bar{u_t} -Q_{u,u_t}^{-1} Q_{u_t} - K_t \bar{x_t}.
\end{IEEEeqnarray}
A criterion to compute the new covariance is also needed. The goal being to explore the environment, we follow \cite{GPS} and choose the covariance with the highest entropy in order to maximize the information gained during exploration. Such a covariance matrix is:
\begin{equation}
\Sigma_t = Q_{u,u_t}^{-1}.
\end{equation}
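A compact sketch of the resulting backward pass is given below (names are illustrative; the local expansions are assumed to come from the regression step above, and the value function is initialized from the final cost $l_T$):
\begin{verbatim}
import numpy as np

def backward_pass(L_xu, L_xuxu, F_xu, x_bar, u_bar, Vx_T, Vxx_T, dx, du):
    T = len(u_bar)
    V_x, V_xx = Vx_T, Vxx_T            # derivatives of the final cost l_T
    K, k, Sigma = [None]*T, [None]*T, [None]*T
    for t in reversed(range(T)):
        Q_xu   = L_xu[t]   + F_xu[t].T @ V_x
        Q_xuxu = L_xuxu[t] + F_xu[t].T @ V_xx @ F_xu[t]
        Q_x, Q_u = Q_xu[:dx], Q_xu[dx:]
        Q_xx     = Q_xuxu[:dx, :dx]
        Q_ux     = Q_xuxu[dx:, :dx]
        Q_uu_inv = np.linalg.inv(Q_xuxu[dx:, dx:])
        K[t] = -Q_uu_inv @ Q_ux
        k[t] = u_bar[t] - Q_uu_inv @ Q_u - K[t] @ x_bar[t]
        Sigma[t] = Q_uu_inv            # maximum-entropy exploration noise
        V_x  = Q_x  - Q_ux.T @ Q_uu_inv @ Q_u
        V_xx = Q_xx - Q_ux.T @ Q_uu_inv @ Q_ux
    return K, k, Sigma
\end{verbatim}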
\subsection{Limit the deviation from nominal trajectory}
The controller derived above is optimal only if the dynamics and cost are respectively linear and quadratic everywhere. Since the approximations are only valid locally, the controller needs to be kept close to the nominal trajectory for the update to remain acceptable. This problem can be solved by adding a constraint to the cost minimization problem:
\begin{equation}
\label{KL_constraint}
D_{KL}(p_{new}(\tau)||p_{old}(\tau)) \leq \epsilon,
\end{equation}
where $D_{KL}$ is the Kullback-Leibler divergence. $p_{old}(\tau)$ and $p_{new}(\tau)$ are the trajectory probability distributions under the current controller and the updated one.
\cite{GPS_unknown_dynamics_robot} shows that such a constrained optimization problem can be solved rather easily by introducing the modified cost function:
\begin{equation}
l_{mod}(x_t, u_t) = \frac{1}{\eta} l(x_t, u_t) - \log(p_{old}(x_t, u_t)).
\end{equation}
Indeed, using dual gradient descent, we can find a solution to the constrained problem by alternating between the two following steps: \begin{itemize}
\item Compute the optimal unconstrained controller under $l_{mod}$ for a given $\eta$
\item If the controller does not satisfy (\ref{KL_constraint}), increase $\eta$.
\end{itemize}
A large $\eta$ has the effect of increasing the weight given to constraint satisfaction, so the larger $\eta$ is, the closer the new trajectory distribution will be to the previous one.
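The resulting dual-gradient-descent outer loop can be sketched as follows (the helper functions are placeholders for the steps described above):
\begin{verbatim}
def constrained_update(eta, epsilon, solve_unconstrained, kl_divergence,
                       max_iter=20):
    # Alternate: (i) iLQG pass under l_mod = l/eta - log p_old,
    #            (ii) increase eta until the KL constraint is satisfied.
    for _ in range(max_iter):
        controller = solve_unconstrained(eta)
        if kl_divergence(controller) <= epsilon:
            return controller, eta
        eta *= 10.0                    # stay closer to p_old
    return controller, eta
\end{verbatim}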
\subsection{Initialize $\eta$ and choose $\epsilon$}
The way $Q$ is defined from the approximations does not guarantee positive definiteness of $Q_{u,u_t}$, which means that its inverse might not be a valid covariance matrix. This issue is addressed by increasing $\eta$ so that the new distribution is close enough to the previous one. As the previous trajectory has a positive definite covariance, there must be an $\eta$ that enforces positive definiteness. This gives a good way to initialize $\eta$ for a given pass.
Finally, the choice of $\epsilon$ is very important. If it is too small, the controller sequence will not progress towards optimality. On the other hand, if it is too large, the updates might be unstable. The idea is to start with a certain $\epsilon_{ini}$ and decrease it if the newly accepted controller is worse than the previous one.
\section{KUKA LBR IIWA POSITIONING: TUNE THE LEARNING PARAMETERS}\label{section3}
A validation of the method is proposed on learning a simple inverse kinematics task. We consider a KUKA LBR iiwa robot (Fig. \ref{iiwa_VREP}) whose geometric parameters are treated as unknown. The state variables are the joint angular positions and the control vector gathers the target joint positions for the next state. The idea is to reach a Cartesian position of the end effector with high accuracy ($<0.1mm$) without any geometric model.
\subsection{Cost function}
For this problem, the cost function needs to be expressed in terms of the Cartesian distance between the end-effector and the target point. We chose the cost function proposed in \cite{GPS_unknown_dynamics_robot}:
\begin{equation}
l(d)=d^2 + v \log(d^2 + \alpha),
\end{equation}
where $v$ and $\alpha$ are both real user-defined parameters. As we do not use any geometric parameter of the robot, the distance cannot be obtained from direct model considerations and needs to be measured from sensors.
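For reference, this cost reads as follows in code (a one-line sketch; $d$ is the measured Cartesian distance):
\begin{verbatim}
import numpy as np

def cost(d, v, alpha):
    # l(d) = d^2 + v * log(d^2 + alpha); for small alpha the log term
    # has a steep slope near d = 0, which encourages high precision.
    return d**2 + v * np.log(d**2 + alpha)
\end{verbatim}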
\subsection{Tune the algorithm parameters}
Previous work \cite{CDC16} showed that a number of samples around $40$ is a good balance between accurate quadratic regression and exploration time for $7$ d.o.f. robots. So we carry out our experiments with $N=40$. Among all the parameters defined in the previous sections, we identified $4$ critical ones: $cov_{ini}$ (the initial covariance, defined below), $v$ and $\alpha$ from the cost function, and $\epsilon_{ini}$. In this section, we learned optimal angular positions for the situation below with different sets of values for these parameters, using the V-REP simulation software. The situation is the following:
\begin{itemize}
\item Initial position : All $7$ angles at $0$ (straight position on Fig. \ref{iiwa_VREP})
\item Target position : Cartesian vector $[500, 500, 500]^T$ in $mm$, in the robot frame (red sphere on Fig. \ref{iiwa_VREP})
\item Initial mean command : target angular positions = initial positions (no move command).
\end{itemize}
Fig. \ref{iiwa_VREP} shows a trajectory found by the algorithm.
The initial covariance matrix is also an important parameter, as it defines the exploration range for all future steps. Indeed, if it has large values, the next iteration also needs a large covariance because of (\ref{KL_constraint}). In our implementation, we start with a diagonal covariance matrix where all the diagonal entries are the same. We denote by $cov_{ini}$ the initial value of these diagonal entries; it is one of the parameters to be studied.
\begin{figure}[t]
\centering
\includegraphics[width=4cm, height = 5.6cm]{iiwa_fl.PNG}
\caption{Trajectory learnt on V-REP software with a KUKA LBR iiwa}
\label{iiwa_VREP}
\end{figure}
\subsection{Results and analysis}
Based on preliminary runs of our algorithm, we selected three values for each parameter and tried all $81$ possible combinations to choose a good set of parameters for the positioning task. The results obtained are summarized in table \ref{table_result}. In our simulations, the robot was only allowed $16$ iterations to reach $0.1mm$ precision. Thus, we stress that in table \ref{table_result}, an underlined number represents the number of iLQG iterations before convergence, whereas other numbers are the remaining distance to the objective after $16$ iterations.
\begin{table}[ht]
\caption{Algorithm parameters influence}
\begin{center}
\begin{tabular}{c|c|c|c|c}
\multicolumn{5}{c}{$cov_{ini}=1$}\tabularnewline
\hline
\multirow{2}{*}{$v$} & \multirow{2}{*}{$\epsilon_{ini}$} & \multicolumn{3}{c}{$\alpha$}\tabularnewline
& & $10^{-3}$ & $10^{-5}$ & $10^{-7}$\tabularnewline
\hline
\multirow{3}{*}{$0.1$} & \textcolor{Black}{100} & \underline{\textcolor{Black}{11}} & \underline{\textcolor{Black}{16}} & \underline{\textcolor{Black}{13}}\tabularnewline
& \textcolor{gray}{1000} & \textcolor{gray}{0.25} & \underline{\textcolor{gray}{12}} & \underline{\textcolor{gray}{10}}\tabularnewline
& \textcolor{Blue}{10000} & \underline{\textcolor{Blue}{13}} & \textcolor{Blue}{0.27} & \underline{\textcolor{Blue}{8}}\tabularnewline
\cline{1-5}
\multirow{3}{*}{$1$} & \textcolor{Black}{100} & \textcolor{Black}{0.11} & \underline{\textcolor{Black}{14}} & \underline{\textcolor{Black}{16}}\tabularnewline
& \textcolor{gray}{1000} & \underline{\textcolor{gray}{10}} & \underline{\textcolor{gray}{12}} & \underline{\textcolor{gray}{10}}\tabularnewline
& \textcolor{Blue}{10000} & \textcolor{Blue}{0.10} & \textcolor{Blue}{1.69} & \textcolor{Blue}{0.24}\tabularnewline
\cline{1-5}
\multirow{3}{*}{$10$} & \textcolor{Black}{100} & \textcolor{Black}{0.11} & \textcolor{Black}{0.22} & \textcolor{Black}{0.84}\tabularnewline
& \textcolor{gray}{1000} & \textcolor{gray}{0.13} & \underline{\textcolor{gray}{12}} & \textcolor{gray}{0.20}\tabularnewline
& \textcolor{Blue}{10000} & \underline{\textcolor{Blue}{13}} & \textcolor{Blue}{0.23} & \underline{\textcolor{Blue}{15}}\tabularnewline
\hline
\end{tabular}
\vspace*{2.5mm}
\begin{tabular}{c|c|c|c|c}
\multicolumn{5}{c}{$cov_{ini}=10$}\tabularnewline
\hline
\multirow{2}{*}{$v$} & \multirow{2}{*}{$\epsilon_{ini}$} & \multicolumn{3}{c}{$\alpha$}\tabularnewline
& & $10^{-3}$ & $10^{-5}$ & $10^{-7}$\tabularnewline
\hline
\multirow{3}{*}{$0.1$} & \textcolor{Black}{100} & \textcolor{Black}{0.32} & \textcolor{Black}{0.15} & \textcolor{Black}{0.39}\tabularnewline
& \textcolor{gray}{1000} & \textcolor{gray}{0.45} & \textcolor{gray}{0.28} & \textcolor{gray}{0.22}\tabularnewline
& \textcolor{Blue}{10000} & \textcolor{Blue}{0.30} & \textcolor{Blue}{0.29} & \textcolor{Blue}{0.31}\tabularnewline
\cline{1-5}
\multirow{3}{*}{$1$} & \textcolor{Black}{100} & \textcolor{Black}{0.14} & \textcolor{Black}{0.32} & \textcolor{Black}{0.32}\tabularnewline
& \textcolor{gray}{1000} & \underline{\textcolor{gray}{14}} & \textcolor{gray}{1.93} & \textcolor{gray}{1.70}\tabularnewline
& \textcolor{Blue}{10000} & \textcolor{Blue}{1.82} & \textcolor{Blue}{0.99} & \textcolor{Blue}{0.11}\tabularnewline
\cline{1-5}
\multirow{3}{*}{$10$} & \textcolor{Black}{100} &\textcolor{Black}{0.34} & \textcolor{Black}{0.38} & \textcolor{Black}{0.39}\tabularnewline
& \textcolor{gray}{1000} & \textcolor{gray}{0.71} & \textcolor{gray}{0.29} & \textcolor{gray}{0.53}\tabularnewline
& \textcolor{Blue}{10000} & \textcolor{Blue}{0.70} & \textcolor{Blue}{0.14} & \textcolor{Blue}{2.31}\tabularnewline
\hline
\end{tabular}
\vspace*{2.5mm}
\begin{tabular}{c|c|c|c|c}
\multicolumn{5}{c}{$cov_{ini}=100$}\tabularnewline
\hline
\multirow{2}{*}{$v$} & \multirow{2}{*}{$\epsilon_{ini}$} & \multicolumn{3}{c}{$\alpha$}\tabularnewline
& & $10^{-3}$ & $10^{-5}$ & $10^{-7}$\tabularnewline
\hline
\multirow{3}{*}{$0.1$} & \textcolor{Black}{100} & \textcolor{Black}{12.79} & \textcolor{Black}{12.42} & \textcolor{Black}{17.83}\tabularnewline
& \textcolor{gray}{1000} & \textcolor{gray}{4.42} & \textcolor{gray}{0.30} & \textcolor{gray}{3.50}\tabularnewline
& \textcolor{Blue}{10000} & \textcolor{Blue}{2.88} & \textcolor{Blue}{10.93} & \textcolor{Blue}{2.60}\tabularnewline
\cline{1-5}
\multirow{3}{*}{$1$} & \textcolor{Black}{100} & \textcolor{Black}{24.37} & \textcolor{Black}{15.75} & \textcolor{Black}{10.13}\tabularnewline
& \textcolor{gray}{1000} & \textcolor{gray}{7.66} & \textcolor{gray}{6.32} & \textcolor{gray}{1.87}\tabularnewline
& \textcolor{Blue}{10000} & \textcolor{Blue}{2.67} & \textcolor{Blue}{8.37} & \textcolor{Blue}{6.44}\tabularnewline
\cline{1-5}
\multirow{3}{*}{$10$} & \textcolor{Black}{100} & \textcolor{Black}{1.93} & \textcolor{Black}{8.93} & \textcolor{Black}{$10^{11}$}\tabularnewline
& \textcolor{gray}{1000} & \textcolor{gray}{8.03} & \textcolor{gray}{2.23} & \textcolor{gray}{3.50}\tabularnewline
& \textcolor{Blue}{10000} & \textcolor{Blue}{2.70} & \textcolor{Blue}{4.83} & \textcolor{Blue}{2.60}\tabularnewline
\hline
\end{tabular}
\\[5pt]
\label{table_result}
\caption*{
\textit{
An underlined figure represents the number of iLQG iterations to reach $0.1mm$ precision, Other numbers represent the distance remaining after $16$ iterations.}}
\end{center}
\end{table}
Together with the raw data in table \ref{table_result}, we plot the evolution of the distance over the iterations of a simulation for several sets of parameters. Looking at table \ref{table_result}, it seems that the most critical parameter is $cov_{ini}$. Fig. \ref{cov_curve} shows three learning curves where only $cov_{ini}$ varies. From these it appears that the initial covariance is not crucial in the early stages of the learning process. However, looking at the bottom plot, which is a zoom on the final steps, we observe that if the covariance is too large, the algorithm does not converge to the desired accuracy. Hence, we recommend keeping $cov_{ini}$ around $1$ to obtain the desired accurate behavior.
\begin{figure}[tb]
\centering
\includegraphics[width=2.5in]{covariance_influence.pdf}
\caption{Covariance variation for $v=1$, $\alpha=10^{-5}$, $\epsilon_{ini}=1000$}
\label{cov_curve}
\end{figure}
After setting $cov_{ini}$ to $1$, we made the same plots for the other parameters (Fig. \ref{other_curves}). These reveal that $v$ and $\alpha$ do not appear to influence the behavior in this range of values. However, looking at the bottom plot, we can see that $\epsilon_{ini}$ needs to be kept large enough such that an iLQG iteration can make enough progress towards optimality. For small $\epsilon_{ini}$, we waste time stuck near the initial configuration. For the three plots in Fig. \ref{other_curves}, the zooms are not included as they do not reveal anything more.
\begin{figure}[tb]
\centering
\includegraphics[width=2.7in]{other_influence.pdf}
\caption{Variation of other parameters for $cov_{ini}=1$. When they do not vary, other parameters take the following values : $v=1$, $\alpha=10^{-5}$, $\epsilon_{ini}=1000$}
\label{other_curves}
\end{figure}
The results in table \ref{table_result} are consistent with the observations above. Hence, for the experimental validation in section \ref{section4}, we choose the configuration with the smallest number of iterations: $cov_{ini}=1$, $v=0.1$, $\alpha=10^{-7}$ and $\epsilon_{ini}=10000$.
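For clarity, this retained configuration can be written down as a simple parameter set (a plain restatement of the values above; the dictionary keys are illustrative names, not identifiers from our implementation):
\begin{verbatim}
# Hyper-parameters retained for the experimental validation (Sec. IV).
ilqg_params = {
    "cov_ini":     1,      # initial covariance of the exploration noise
    "v":           0.1,
    "alpha":       1e-7,
    "epsilon_ini": 10000,
}
\end{verbatim}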
\section{EXPERIMENTAL VALIDATION ON THE ROBOT}\label{section4}
In this section, we run our algorithm on a real KUKA LBR iiwa for a similar positioning task. The situation is slightly different:
\begin{itemize}
\item Initial position: $[140,0,0,0,0,0,0]^T$, angular positions in $^{\circ}$ (Fig. \ref{iiwa_pictures}, left picture).
\item Target position: $[-600,400,750]^T$, Cartesian position, in $mm$, expressed in the robot frame.
\item Initial mean command: target angular positions equal to the initial positions (no move command).
\end{itemize}
\begin{figure}[!tb]
\centering
\includegraphics[width=4cm, height = 4.8cm]{iiwa_initial_optimized.jpg}
\includegraphics[width=4cm, height = 4.8cm]{iiwa_final_optimized.jpg}
\caption{Initial (left) and final (right) configurations of the KUKA LBR iiwa.}
\label{iiwa_pictures}
\end{figure}
The choice of changing the initial configuration was motivated by two reasons. First, it shows that the parameters found in section \ref{section3} are not case dependent. Second, our working environment imposed constraints, and this configuration interfered less with research being conducted simultaneously on other robots.
Fig. \ref{iiwa_pictures} shows the KUKA LBR iiwa in its initial and final configuration (after reaching the desired end-effector position).
\subsection{Results obtained on the robot}
The learning process defined above resulted in the learning curve of Fig. \ref{LCrobot}. We note that it takes as many steps to go from the initial configuration to $1mm$ accuracy as from $1mm$ to $0.1mm$. The final command provided by the algorithm is $[144.266,25.351,2.328,-56.812,5.385,24.984,4.754]^T$. Regarding the learning time, the overall process took approximately $9$ minutes: $6$ for exploration and $3$ for computation.
\begin{figure}[tb]
\centering
\includegraphics[]{learning_curve_robot.pdf}
\caption{Learning curve iLQG on the robot.}
\label{LCrobot}
\end{figure}
\subsection{Measuring the Cartesian distance}\label{measurements}
For this experimental validation, the distance was computed from the end-effector position read from the robot's internal values. Even though this position is probably obtained through the direct DH model, our algorithm used it as the output of a black box. Thus, similar results would have been obtained using any other distance measurement sensor (e.g. a laser tracker). We simply note that the precision reached is relative to the precision of the measurement tool.
However, in future work, it will be useful to use an external measurement tool in order to compare the precision of our positioning method with other techniques. Indeed, the precision of the robot's inverse kinematics cannot be assessed with internal robot measurements; this is precisely the reason why industrial robots need to be calibrated. Hence, we will need to train the robot with external distance measurement sensors and compare the precision with other methods using the same sensors.
\section{DISCUSSION}\label{discussion}
In previous work \cite{CDC16}, we showed that learning the cost function is more stable and converges faster than including the distance in the state and approximating it to first order. Here, we extend this work with a second-order improvement of the controller, which shows faster convergence under well-chosen parameters. The high precision reached for this simple positioning task lets us hope that such methods will be suitable for more complex industrial tasks.
In many applications, it is also interesting to handle the orientation of the end effector. Such a modification is not an issue: one just needs to make several points on the end effector match several target points ($2$ or $3$ depending on the shape of the tool). This has been done in V-REP and converges just as well, albeit more slowly. We chose not to present these results in this paper as they do not pose any additional challenge, and the learning curves are harder to interpret since distances have to be averaged over the points.
In future work, we plan to address manipulation tasks with contact, which is a major challenge since the functions to approximate are no longer smooth near contact points.
\FloatBarrier
\bibliographystyle{IEEEtran}
\section{Introduction}
This paper is the third in a series of works aimed at investigating ion acceleration in non-relativistic collisionless shocks via large hybrid (kinetic ions--fluid electrons) simulations.
In previous papers we discussed how diffusive shock acceleration \citep[DSA, e.g.,][]{bell78a,blandford-ostriker78} at strong shocks can be very efficient in accelerating particles \citep[][hereafter Paper I]{DSA}, and how energetic ions induce magnetic field amplification via plasma instabilities \citep[][Paper II]{MFA}.
The connection between magnetic field amplification and particle acceleration is prominent in supernova remnants (SNRs), which are regarded as the sources of Galactic cosmic rays (CRs) up to the so-called \emph{knee} (a few $Z$ PeV, with $Z$ the nucleus charge).
Our kinetic simulations support such a paradigm:
in Paper I, we found that shocks propagating almost along the large-scale magnetic field $\bf B_0$ (quasi-parallel shocks) channel 10--20\% of their bulk flow energy into energetic ions.
In Paper II, we showed that, when acceleration is efficient, the initial magnetic field is effectively amplified.
The total magnetic field is found to scale as $B_{tot}/B_0\approx \sqrt{M_A/2}$, where $M_A=v_{sh}/v_A$ is the Alfv\'enic Mach number, i.e., the ratio of the shock velocity $v_{sh}$ and the Alfv\'en speed $v_A=B/\sqrt{4\pi m n}$ (with $m$ and $n$ the proton mass and number density).
These results are consistent with multi-wavelength observations of young remnants, which suggest that magnetic fields at SNR blast waves are several tens to hundred times larger than in the interstellar medium \citep[see, e.g.,][]{P+06,Uchiyama+07,tycho,reynoso+13}.
DSA predicts that strong shocks should accelerate particles with a universal power-law $\propto p^{-4}$ in momentum, and in Paper I we confirmed such scaling for the first time in kinetic simulations.
Although the DSA spectral slope is independent of the details of particle scattering, magnetic field amplification regulates particle diffusion, and thereby the acceleration rate and the maximum energy that can be achieved.
The acceleration time up to energy $E$ is $T_{acc}\approx D(E)/v_{sh}^2$, where $D(E)$ is the spatial diffusion coefficient \citep[see, e.g.,][]{drury83}.
This characteristic time must be compared with the duration of the ejecta-dominated stage, since in the Sedov stage shock velocity and magnetic field amplification drop quite rapidly, and highest-energy particles are expected to escape the accelerator \citep[see, e.g.][]{escape}.
In order for SNRs to accelerate CRs up to the knee, diffusion near a shock has to be dramatically enhanced compared to diffusion in the interstellar medium.
From the B/C ratio in CRs, one infers an average Galactic diffusion coefficient $D_G(E)\approx 2\times 10^{28} (E/10 {\rm GeV})^{1/3}{\rm cm}^2{\rm s}^{-1}$ \citep[see, e.g.,][]{ba11a}.
However, if the diffusion coefficient at SNR shocks were not much smaller than $D_G(E)$, the maximum energy achievable in these sources would be less than $\sim 10$~GeV.
This follows from comparing the acceleration time $T_{acc}\simeq D_G(E_{max})/v_{sh}^2$ with the Sedov time $T_{Sed}\sim 10^3$~yr, when $v_{sh}(T_{Sed})\approx 4000$~km~s$^{-1}$.
Acceleration is significantly faster if the mean free path for pitch-angle scattering is as small as the particle gyroradius, $r_L$ (usually referred to as \emph{Bohm diffusion}), in which case the diffusion coefficient reads $D_B(E)\approx v r_L(E)\propto E/B$, where $v$ is the particle velocity.
Still, for Bohm diffusion in the typical Galactic field of a few $\mu$G, one obtains $E_{max}\approx 10^4-10^5$~GeV, more than an order of magnitude below the CR knee \citep[e.g.,][]{lagage-cesarsky83b}.
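For concreteness, the back-of-the-envelope sketch below (in Python) reproduces the two estimates above, adopting the acceleration time $T_{acc}\approx 6\,D/v_{sh}^2$ used in Section~\ref{sec:Emax} and the illustrative values $T_{Sed}=10^3$~yr, $v_{sh}=4000$~km~s$^{-1}$ and $B_0=3\,\mu$G; the prefactors are assumptions, and the numbers are meant as orders of magnitude only.
\begin{verbatim}
# Order-of-magnitude estimate of E_max, assuming T_acc ~ 6 D(E)/v_sh^2.
import numpy as np

yr, GeV = 3.156e7, 1.602e-3          # s, erg
c, e, B0 = 3e10, 4.803e-10, 3e-6     # cm/s, esu, G
T_sed, v_sh = 1e3 * yr, 4e8          # s, cm/s

D_lim = T_sed * v_sh**2 / 6.0        # largest D(E_max) reachable in T_sed

# (i) Galactic diffusion, D_G(E) = 2e28 (E/10 GeV)^(1/3) cm^2/s
E_gal = 10.0 * (D_lim / 2e28)**3         # ~1e-3 GeV, well below 10 GeV

# (ii) Bohm diffusion in B_0, D_B(E) = c r_L(E)/3 (relativistic protons)
D_B_1GeV = c * (GeV / (e * B0)) / 3.0    # D_B at E = 1 GeV, in cm^2/s
E_bohm = D_lim / D_B_1GeV                # ~1e5 GeV, below the CR knee

print(E_gal, E_bohm)
\end{verbatim}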
A natural explanation is that the magnetic field amplification inferred in young remnants may be responsible for enhancing ion scattering beyond the Bohm limit calculated in the Galactic field: in this case, the actual diffusion coefficient would be Bohm {\em in the amplified magnetic field}, i.e., a factor of $\delta B/B_0\sim 10-100$ smaller.
In this paper we characterize the transport of energetic ions in kinetic simulations of non-relativistic collisionless shocks, in which the magnetic irregularities responsible for particle scattering are generated self-consistently by the different flavors of streaming instability.
The paper is structured as follows.
In Section \ref{sec:hybrid} we summarize the properties of the self-generated magnetic turbulence, as inferred from the large simulations of Paper II.
In Section \ref{sec:diff} we introduce a novel technique, inspired by analytical approaches to DSA, which returns the mean diffusion coefficient relevant for energetic particles in the shock precursor;
our findings are supported by the analysis of individual ion tracks, which provides the spatial dependence of the diffusion coefficient in different shock regions (Section \ref{sec:track}).
Throughout the paper, we interpret our results in terms of the Bohm and self-generated diffusion coefficients.
We discuss how diffusion regulates the time evolution of the maximum energy achieved by accelerated ions in Section \ref{sec:Emax}, and conclude in Section \ref{sec:concl}.
\section{Hybrid simulations}\label{sec:hybrid}
\begin{table}
\caption{Parameters of the relevant hybrid runs (see also Paper II)}\label{tab:box}
\begin{center}
\begin{tabular}{cccccc} \hline \hline
Run & M & $x$ $[c/\omega_p]$ & $y$ $[c/\omega_p]$ & $t_{max}[\omega_c^{-1}]$ & $\Delta t [\omega_c^{-1}]$\\
\hline
A & 20 & $5\times 10^4$ & $1000$ & $1000$ & $5\times 10^{-4}$ \\
B & 20 & $ 10^5$ & $100$ & $2500$ & $5\times 10^{-4}$ \\
D & 80 & $ 4\times 10^5$ & $200$ & $500$ & $ 2.5\times 10^{-4}$ \\
F & 60 & $ 2\times 10^5$ & $20$ & $1600$ & $ 2.5\times 10^{-4}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
In Paper I and II we have discussed simulations of non-relativistic, collisionless shocks performed with the \emph{dHybrid} code \citep{gargate+07}.
The main strength of hybrid simulations, which treat ions as kinetic particles, and electrons as a neutralizing fluid \citep[see, e.g.,][]{lipatov02}, is to allow for larger/longer simulations (in physical units) with respect to full particle-in-cell methods.
We measure lengths in units of ion skin depth $c/\omega_p$, where $\omega_p=\sqrt{4\pi n e^2/m}$ is the ion plasma frequency, and time in units of inverse cyclotron frequency $\omega_c^{-1}=mc/eB_0$, with $c$ the speed of light and $e$ the ion charge.
Velocities are normalized to the Alfv\'en speed $v_A=B_0/\sqrt{4\pi m n}$, and energies to $E_{sh}=\frac m2 v_{sh}^2$, where ${\bf v}_{sh}= -v_{sh}{\bf x}$ is the velocity of the upstream fluid in the simulation frame.
The shock is characterized by its Alfv\'enic Mach number $M_A=v_{sh}/v_A$, and throughout the paper we assume the sonic Mach number to be roughly equal to $M_A$ (thermal to magnetic pressure ratio $\beta=2$), indicating both simply with $M$.
Among the runs in Paper II, we focus on the ones summarized in table \ref{tab:box}.
They correspond to strong shocks ($M\gg 1$), and account for different features: large transverse size (Run A), long-term evolution (Run B), very large $M$ (Run D), and long-term evolution of a quasi-1D shock with large $M$ (Run F).
Our unprecedentedly-large 2D and 3D simulations allow us to assess that ions are accelerated via DSA at quasi-parallel shocks (${\bf v}_{sh}$ almost parallel to ${\bf B}_0$), and that the initial magnetic field is amplified in the precursor up to
\begin{equation}\label{eq:db}
\left\langle\frac{B_{tot}}{B_0}\right\rangle^2\approx \frac{M}{2},
\end{equation}
if the acceleration efficiency is $\zeta_{cr}\approx 15\%$ (Equation~2 in Paper II).
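As a quick numerical check of Equation~\ref{eq:db}, the expected amplification factors for the Mach numbers considered here are (a trivial evaluation, shown only for reference):
\begin{verbatim}
# Evaluate B_tot/B_0 ~ sqrt(M/2) for the Mach numbers of our runs.
import numpy as np
for M in (20, 80):
    print(M, np.sqrt(M / 2.0))   # ~3.2 for M=20, ~6.3 for M=80
\end{verbatim}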
The spectrum of the excited turbulence depends on the shock strength, as illustrated in Section 4 of Paper II.
For $M\lesssim 30$, the turbulence spectrum is consistent with the quasi-linear prediction of resonant streaming instability \citep[e.g.,][]{skilling75a, bell78a}, while for stronger shocks the non-resonant hybrid \citep[NRH,][]{bell04,bell05} instability is the fastest to grow.
In the high-Mach number regime, the turbulence is excited far upstream by the streaming of escaping ions (with energy close to the maximum energy $E_{max}$), and the most unstable mode has wavenumber $k_{max}\gg 1/r_L(E_{max})$.
In the nonlinear stage of the instability (when $b\equiv\delta B/B_0\gg 1$), one has $k_{max}(b)\propto b^{-2}$ \citep[see also][]{rs09}, and the condition $kr_L(E_{max})\approx 1$ may be met at some point in the precursor. This resonance disrupts the coherence of the ion current, and leads to pitch-angle diffusion of energetic ions (see Section 5 in Paper II for more details).
We define the CR precursor as the upstream region where ions with energy up to $\sim E_{max}$ diffuse, and the far upstream as the region where the most energetic CRs escape freely, triggering the NRH instability.
Note that, since typically in the precursor $b\gg1$, the helicity of the self-generated waves is unimportant, and it makes little sense to distinguish between resonant and NRH instability.
We argue that this is the reason why Equation~\ref{eq:db}, which matches the prediction of resonant streaming instability \citep[e.g.,][]{ab06}, provides a good description of the maximum amplification factor achieved in the upstream even for very strong shocks (figure 5 in Paper II).
\section{Diffusion coefficient}\label{sec:diff}
Quantitative approaches to DSA are based on a description of the CR transport, which typically requires an a priori description of how particles diffuse while being advected with the fluid \citep[see][for a comparison of numerical, Monte Carlo and analytical approaches to non-linear DSA]{comparison}.
The simplest description introduces a diffusion coefficient accounting for ion diffusion parallel to the local magnetic field \citep[see, e.g.,][and references therein for generalizations to anisotropic diffusion]{jokipii87,Bill+03}.
Even Monte Carlo approaches \citep[e.g.][]{jones-ellison91}, which do not adopt an explicit diffusion coefficient, need to prescribe the CR mean free path for pitch-angle scattering.
In this Section, we calculate CR diffusion in our kinetic simulations, in order to study the feedback of self-generated turbulence on energetic particles.
\subsection{Bohm and self-generated diffusion}
For non-relativistic shocks, the most popular choice is to assume that particles diffuse via small-angle deflections, with a mean free path of the order of the particles' gyroradius $r_L$ (Bohm diffusion).
The corresponding diffusion coefficient reads\footnote{Note the factor of 2 in the denominator, which is peculiar to the reduced space-dimensionality of the simulation: it is equal to 1 in 1D, to 2 in 2D, and to the canonical value of 3 in 3D.}:
\begin{equation}\label{eq:DB}
D_B(E)\equiv\frac{v}{2}r_L(v,B)=\frac{v}{2}\frac{pc}{eB}=\frac{E}{m\omega_c}\,.
\end{equation}
In this limit, the spectral energy distribution of the magnetic irregularities is neglected, while in reality one must consider that ions with momentum $p$ preferentially scatter against modes with wavenumber $\bar{k}_p=m\omega_c/p$.
Strictly speaking, this resonance condition should involve only the component of ${\bf p}$ along ${\bf k}\parallel \hat{\bf x}$, and also a multiplicative factor of $\mathcal{O}(1)$, which can be ignored for our purposes \citep[see, e.g.,][for details]{skilling75c}.
As in Paper II, we introduce the normalized magnetic energy density per unit logarithmic bandwidth of waves with wavenumber $k$, $\mathcal{F}(k)$, defined as:
\begin{equation}\label{eq:F}
\frac{B_{\perp}^2}{8\pi}=\frac{B_{0}^2}{8\pi}\int_{k_{min}}^{k_{max}}\frac{dk}{k}\mathcal{F}(k),
\end{equation}
where $B_{\perp}$ is the transverse component of the magnetic field, $\mathcal{F}(k)/k=|\tilde{B}_y(k)|^2+|\tilde{B}_z(k)|^2$, and $\tilde{B}_{i}(k)$ is the Fourier transform of $B_{i}(x)$.
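As a minimal numerical sketch, $\mathcal F(k)$ can be estimated from a one-dimensional cut of the transverse field components as follows (in Python; the discrete normalization convention and the variable names are illustrative assumptions, not the actual analysis code):
\begin{verbatim}
# Sketch: magnetic power per unit logarithmic bandwidth, F(k), from a
# 1D cut of the transverse field (By, Bz given in units of B_0).
import numpy as np

def wave_spectrum(By, Bz, dx):
    """Return k > 0 and F(k), with sum(F * dk / k) ~ <B_perp^2>/B_0^2."""
    n = len(By)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
    # one-sided power, Parseval-normalized to the mean square value
    P = 2.0 * (np.abs(np.fft.rfft(By))**2
               + np.abs(np.fft.rfft(Bz))**2) / n**2
    dk = k[1] - k[0]
    F = k * P / dk               # so that F(k)/k is the power per dk
    return k[1:], F[1:]          # drop the k = 0 mode
\end{verbatim}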
In the quasi-linear limit, the diffusion coefficient in the presence of Alfv\'enic modes with spectrum $\mathcal{F}(k)$ reads \citep[see, e.g.,][]{bell78a}:
\begin{equation}
D(p)=\frac{4}{\pi}\frac{v(p)}{3}\frac{r_L(p)}{\mathcal{F}(\bar{k}_p)}.
\end{equation}
By using Equation~\ref{eq:DB}, and considering that the magnetic turbulence is produced by energetic ions themselves, we can cast the \emph{self-generated} diffusion coefficient as:
\begin{equation}\label{eq:Dsg}
D_{sg}(E)=\frac{8}{3\pi}\frac{D_B(E)}{\mathcal{F}(\bar{k})},
\end{equation}
where $\bar{k}$ is the resonant wavenumber.
Equation~\ref{eq:Dsg} emphasizes how $D_B$ corresponds to pitch-angle diffusion in Alfv\'enic magnetic turbulence with $\mathcal{F}(k)\approx 1$ at all wavelengths, and also to $B_\perp/B_0\approx 1$, according to Equation~\ref{eq:F}.
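In code form, the two coefficients can be compared as in the following sketch (Python, in code units where $m=\omega_c=1$ so that $D_B(E)=E$; $\mathcal F$ is passed as a generic callable, and this is not the analysis code used for the figures):
\begin{verbatim}
# Sketch: Bohm vs. self-generated diffusion coefficients (code units).
import numpy as np

def D_bohm(E):
    return E                     # D_B = E/(m omega_c), with m = omega_c = 1

def D_selfgen(E, p, F_of_k):
    k_res = 1.0 / p              # resonant wavenumber, m omega_c / p
    return 8.0 / (3.0 * np.pi) * D_bohm(E) / F_of_k(k_res)

# Example: F(k) ~ 1/k (spectrum excited by a p^-4 beam of non-relativistic
# CRs) yields D_sg/D_B ~ 1/p, i.e. D_sg ~ p.
p = np.logspace(0.0, 2.0, 5)
E = 0.5 * p**2                   # non-relativistic, m = 1
print(D_selfgen(E, p, lambda k: 1.0 / k) / D_bohm(E))
\end{verbatim}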
In Paper II we showed that, for $M=20$, the shock generates a $p^{-4}$ distribution of non-relativistic particles that excites a wave spectrum $\mathcal F(k)\propto k^{-1}\propto p$ (figure 6 of Paper II);
therefore, the self-generated diffusion coefficient scales as $D_{sg}(p)\propto E/p\propto p$ (while $D_B\propto E$, instead).
The peculiar $\mathcal F\propto k^{-1}$ scaling depends on the fact that our CRs are non-relativistic: for a $p^{-4}$ distribution of relativistic CRs, in the presence of resonant streaming instability, one would have a flat $\mathcal F(k)\approx \mathcal F_0$ distribution, and the suppression of the diffusion coefficient would be independent of $p$ and proportional to $\mathcal{F}_0\propto \delta B/B_0$.
Nevertheless, the different scaling of $\mathcal F(k)$ in the relativistic and in the non-relativistic regime compensates the corresponding scaling of $D_B(E)$, in such a way that $D_{sg}(p)\propto p$ is realized \emph{at any momentum}.
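Explicitly, in the non-relativistic regime
\[
D_{sg}\propto \frac{E}{\mathcal F(\bar k_p)}\propto\frac{p^{2}}{p}=p,
\]
while in the relativistic regime $E\propto p$ and $\mathcal F\approx\mathcal F_0$, so that again $D_{sg}\propto p/\mathcal F_0\propto p$.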
\subsection{Extracting the diffusion coefficient from simulations}\label{sec:anal}
Let us consider the stationary, one-dimensional diffusion-convection equation for the isotropic (in momentum space) distribution function of accelerated particles, $f(x,p)$, \citep[e.g.,][]{skilling75a}:
\begin{equation}\label{eq:convdiff}
u\frac{\partial f}{\partial x} = \frac{\partial }{\partial x}\left[ D(x,p)\frac{\partial f}{\partial x}\right]+\frac{p}{3}\frac{{\rm d} u}{{\rm d} x}\frac{\partial f}{\partial p}\,.
\end{equation}
With the boundary condition $f(p)\to 0$ at upstream infinity, an excellent approximate solution of the equation above reads \citep[see, e.g.,][]{boundary}:
\begin{equation}\label{eq:fxp}
f(x,p)=f_{sh}(p)\exp\left[\int_0^x{\rm d}x'\, \frac{u(x')}{D(x',p)}\right],
\end{equation}
where
\begin{equation}
f_{sh}(p)\propto \left(\frac{p}{p_{inj}}\right)^{-q};\quad q=\frac{3r}{r-1}
\end{equation}
is the CR distribution function at the shock, and $r$ is the shock compression ratio.
All the equations are written in the shock reference frame\footnote{Our simulations are in the downstream reference frame, instead.}:
the shock is at $x=0$, the upstream is for $x>0$, and $u(x)<0$ is the fluid velocity (we dropped the notation $\tilde{u}$ we adopted in Paper II).
\begin{figure}\centering
\includegraphics[trim=0px 70px 10px 0px, clip=true, width=.485\textwidth]{fxp20.eps}
\caption{\label{fig:fxp}
\emph{Top panel}: ion spectrum (color code) at different positions around the shock with $M=20$ (Run A), at $t=1000\omega_c^{-1}$.
The upstream cold beam is thermalized at the shock front ($x_{sh}\approx 6000c/\omega_p$), while ions with $E\gtrsim E_{sh}$ diffuse upstream of the shock, over distances proportional to their energy.
\emph{Bottom panel}: differential density profile of CRs corresponding to the dashed lines in the top panel.
Low-energy CRs are confined close to the shock, while more energetic ions diffuse much further.
High-energy ions also show a shallow jump across the shock, their gyroradii being much larger than the shock thickness.
Note that the CR distribution is increasingly dominated by higher-energy particles further into the upstream.
\emph{A color figure is available in the online journal.} }
\end{figure}
In Paper I we have shown that the CR spectrum at the shock is consistent with the DSA prediction;
now we want to check that the expected spatial dependence of the upstream CR distribution is also recovered.
Figure \ref{fig:fxp} shows the ion spectrum as a function of position (top panel), for a parallel shock with $M=20$ (Run A).
The bottom panel in the same figure illustrates the distribution $Ef(x,E)$ for accelerated particles with energy $E=10,40,150 E_{sh}$, corresponding to the dashed lines in the top panel.
Three things are worth noting.
First, the larger the energy, the larger the extent of the distribution upstream of the shock.
At any position in the precursor, the CR spectrum has a low-energy cut-off, which moves to higher energies for larger $x$, in qualitative agreement with Equation~\ref{eq:fxp}.
Second, the CR distribution at higher energies has a smaller jump across the shock (see, e.g., the curve for $150E_{sh}$ in Figure \ref{fig:fxp}), much smaller than the compression factor $r\approx 4$.
This behavior is peculiar of ions with gyroradii larger than the shock thickness, and induces a non-linear modification of shock jump conditions, as discussed in Paper I.
Third, moving from the shock towards the upstream, the CR distribution is first exponentially suppressed, and then flattens at a level $\ll f_{sh}(p)$ \citep[as also observed by][]{giacalone04}.
As discussed in Section 5 of Paper II, the diffusion approximation breaks down far upstream and for $E\gtrsim E_{max}$, where the fraction of escaping ions is close to unity and the CR spectrum is cut off.
In these cases, care should be taken when interpreting results obtained by using Equation~\ref{eq:convdiff}.
The diffusion coefficient describing the self-generated ion scattering in the precursor can be worked out by considering the upstream distribution of non-thermal particles.
Let us assume that $D(x,p)$ and $u$ are constant in $x$, at least where $f(x,p)\simeq f_{sh}(p)$, so that in the upstream $f(x,p)\simeq f_{sh}(p)\exp [ux/D(p)]$.
By integrating Equation~\ref{eq:fxp} from 0 to a given $x_0$, arbitrarily chosen such that $f(x_0,p)\ll f_{sh}(p)$, we get:
\begin{equation}
F_{up}(p)\equiv\int_0^{x_0} f(x,p) dx\simeq f_{sh}(p) \int_0^{x_0} \exp\left[\frac{ux}{D(p)}\right]dx\,,
\end{equation}
from which
\begin{equation}\label{eq:cd}
D(p)\simeq \frac{uF_{up}(p)}{f_{sh}(p)}\,.
\end{equation}
$F_{up}(p)$ is a global quantity that can be calculated easily in simulations, and we checked that the results depend neither strongly on the particular choice of $x_0$, nor on time over intervals shorter than the dynamical timescales.
We comment more on these points in Section \ref{sec:track}.
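Schematically, Equation~\ref{eq:cd} can be evaluated from a single snapshot of the upstream ion population as in the following sketch (Python; the histogramming choices, the slab thickness used to measure $f_{sh}$, and the variable names are illustrative assumptions):
\begin{verbatim}
# Sketch: mean precursor diffusion coefficient D(E) ~ u F_up(E)/f_sh(E).
import numpy as np

def diffusion_from_snapshot(x, E, u, x_sh, x0, dx_sh, nbins=40):
    """x, E: ion positions and energies; u: upstream fluid speed (modulus);
    x0: extent of the precursor region; dx_sh: slab width at the shock."""
    bins = np.logspace(np.log10(E.min()), np.log10(E.max()), nbins)
    up = (x > x_sh) & (x < x_sh + x0)
    F_up, _ = np.histogram(E[up], bins=bins)      # int_0^x0 f dx (counts)
    sh = (x > x_sh) & (x < x_sh + dx_sh)
    f_sh, _ = np.histogram(E[sh], bins=bins)
    f_sh = f_sh / dx_sh                           # counts per unit length
    Ec = np.sqrt(bins[1:] * bins[:-1])            # bin centers
    with np.errstate(divide="ignore", invalid="ignore"):
        D = u * F_up / f_sh
    return Ec, D
\end{verbatim}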
\begin{figure}\centering
\includegraphics[trim=55px 0px 30px 30px, clip=true, width=.5\textwidth]{cd20.eps}
\caption{\label{fig:CoeffDiff20}
Time evolution of the diffusion coefficient in a parallel shock with $M=20$ (Run B), calculated via Equation~\ref{eq:cd} with $x_0=10^4c/\omega_p$, and plotted as divided by the Bohm diffusion coefficient (Equation~\ref{eq:DB}).
At late times, normalization and energy scaling match well the self-generated diffusion coefficient (dashed line), which corresponds to Equation~\ref{eq:Dsg} with $\mathcal F=1$ at $k=1/r_L(E_{max})$ and $E_{max}\approx 300E_{sh}$.
\emph{A color figure is available in the online journal.}}
\end{figure}
\begin{figure}\centering
\includegraphics[trim=55px 0px 30px 30px, clip=true, width=.5\textwidth]{cd80.eps}
\caption{\label{fig:CoeffDiff80} As in Figure \ref{fig:CoeffDiff20}, for a parallel shock with $M=80$ (Run D) and $x_0=2\times 10^4c/\omega_p$.
The inferred diffusion coefficient, at late times, is smaller than Bohm by a factor $\lesssim 5$, consistent with the level of magnetic field amplification in the precursor ($\delta B/B_0\approx 3-5$, see figure 7 in Paper II).
The dashed line corresponds to Equation~\ref{eq:Dsg} with $\mathcal F=3$ at $k=1/r_L(E_{max})$ and $E_{max}\approx 100E_{sh}$.
\emph{A color figure is available in the online journal.}}
\end{figure}
The diffusion coefficient estimated with the procedure above is plotted in Figure \ref{fig:CoeffDiff20} for Run B ($M=20$), as a function of time, and normalized to $D_B(E)$.
The inferred diffusion coefficient is a factor of a few larger than Bohm in the background field, but at later times, when self-generated fields have had sufficient time to grow, it becomes comparable to $D_B$ close to $E_{max}$.
The energy dependence of $D$ agrees well with the self-generated diffusion coefficient, i.e., $D_{sg}(p)\propto p$ (Equation \ref{eq:Dsg} with $\mathcal F(k)\propto k^{-1}$, as generated via streaming instability by a $f(p)\propto p^{-4}$ distribution of non-relativistic particles).
The dashed line corresponds to Equation~\ref{eq:Dsg}, normalized by setting $\mathcal F\approx 1$ at $k=1/r_L(E_{max})$, where $E_{max}\approx 300E_{sh}$;
such a normalization is consistent with the wave spectrum $\mathcal F(k)$ in the precursor (see figure 6 in Paper II), and the resulting $D$ is comparable with $D_B$ close to $E_{max}$ because $\mathcal F\approx 1$ at resonance with $E_{max}$, i.e., the highest-energy ions feel $\delta B/B\approx 1$ on their gyration scales.
Finally, above $E_{max}(t)$, the diffusion coefficient increases quite rapidly because the lack of long-wavelength modes makes scattering very ineffective;
in this regime, however, the use of the diffusion--convection equation (Equation~\ref{eq:convdiff}) becomes questionable.
Equation~\ref{eq:Dsg} suggests that, when magnetic field amplification is effective ($\delta B/B_0\gtrsim 1$ and, in turn, $\mathcal{F}(k)\gtrsim 1$), the self-generated diffusion coefficient should be smaller than $D_B(E)$.
By repeating the exercise above for a very strong shock with $M=80$ (Run D), which shows high levels of magnetic field amplification, we confirm this to be the case.
Figure \ref{fig:CoeffDiff80} illustrates the diffusion coefficient calculated by using Equation~\ref{eq:cd}:
the measured diffusion coefficient is smaller than Bohm in the background field $B_0$ at nearly all energies, consistent with $\mathcal F(k)$ being larger than 1 at wavenumbers resonant with accelerated ions (see bottom panel of figure 7 in Paper II).
It is important to notice that the energy scaling of the diffusion coefficient does not follow the quasi-linear prediction for self-generated diffusion ($D_{sg}\propto p^{-1}$, dashed line in Figure \ref{fig:CoeffDiff80}), being apparently more consistent with Bohm diffusion.
The reason for this difference with respect to the $M=20$ case is that the spectrum of the magnetic turbulence for $M=80$ does not follow the $\mathcal F\propto k^{-1}$ trend, because of the relevance of the NRH instability.
As it follows from figure 7 in Paper II, most of the magnetic energy in the precursor is at scales comparable with the gyroradii of the most energetic ions, and such large-scale turbulence is found to effectively scatter particles of any energy.
Since for $M=80$ we find $\mathcal F\gg 1$, this result suggests how to extend the quasi-linear theory of self-generated diffusion (Equation~\ref{eq:Dsg}) into the regime of nonlinear field amplification.
The suppression of the diffusion coefficient can be quantitatively estimated as of the order of $\delta B/B_0$, which corresponds to \emph{Bohm diffusion in the total (amplified) field, at all the energies}.
\subsection{Comments and caveats}
First of all, we notice that when amplification is strongly nonlinear, the magnetic field becomes very tangled ($B_\perp/B_\parallel\sim 1$);
in this regime the distinction between parallel and perpendicular diffusion \cite[e.g.,][]{jokipii87} is lessened, and isotropic spatial diffusion should provide a reliable approximation.
Another result inferred from our simulations is that diffusion is enhanced even close to $E_{max}$.
This statement is independent of $M$, and relies on most of the wave energy being at wavelengths resonant with high-energy particles (figure 6, 7 in Paper II).
As discussed above, this effect is prominent for non-relativistic CRs, but it is expected even for not-too-steep relativistic distributions, the correction being just logarithmic for $f(p)\propto p^{-4}$.
In general, Bohm diffusion in $B_{tot}\sim b B_0$, with the large amplification factors $b\sim 10-30$ expected at very fast shocks (Paper II), should be fast enough to allow young SNRs to accelerate CRs up to the knee.
Nevertheless, more investigations of high-$M$ shocks are needed to definitively assess the ability of SNRs to act as PeVatrons.
We point out two main effects potentially contributing corrections to our findings:
i) the presence of filamentation \citep{filam}, which might lead to inhomogeneous diffusion;
and ii) the limited duration (in physical time) of our simulations for large $M$.
Hints of Bohm-like diffusion in CR precursors have already been outlined by \cite{rb13}, who used an MHD-kinetic code that exploits a spherical harmonic expansion of the Vlasov--Fokker--Planck equation to calculate a self-consistent magnetic field configuration starting from an initial CR current of mono-energetic ions.
These authors inferred diffusion faster than Bohm in $B_0$, but slower than Bohm in $B_{tot}$, possibly because simulations were not converged (see their section 4.3).
Our hybrid simulations, instead, test CR diffusion for an ion power-law distribution that forms and grows spontaneously because of DSA, without any need to prescribe particle injection or escape; therefore, we can self-consistently study the connection between CR spectrum, wave spectrum, and momentum dependence of the diffusion coefficient.
We also distinguish between different regimes of field amplification, and confirm that for strong shocks Bohm diffusion in $B_{tot}$ provides a reasonable description for the transport of particles of any energy.
Finally, we assess the relative relevance of the NRH and resonant instabilities in amplifying the magnetic field in the far upstream and in the precursor, providing the theoretical framework for calculating both self-generated turbulence and diffusion for a given CR distribution, in different shock regions.
\section{Particle Tracking}\label{sec:track}
\begin{figure}\centering
\includegraphics[trim=0px 17px 0px 280px, clip=true, width=.5\textwidth]{M20_E10.eps}
\includegraphics[trim=0px 17px 0px 280px, clip=true, width=.5\textwidth]{M20_E100.eps}
\includegraphics[trim=0px 17px 0px 280px, clip=true, width=.5\textwidth]{M20_E1000.eps}
\caption{\label{fig:track}
Running diffusion coefficient for the $M=20$ shock (Run B) at $t=2400\omega_c^{-1}$, in a region of width $\lambda(E)$ upstream of the shock.
Gray curves show $D(t)$ for ensembles of 100 particles, with energy $E=10,100,1000E_{sh}$ (panels from top to bottom), and random initial velocity direction.
The thick red line in each panel shows the averaged diffusion coefficient (see Equation~\ref{eq:runD}).
\emph{A color figure is available in the online journal.}}
\end{figure}
A natural question is whether the diffusion coefficient obtained with Equation~\ref{eq:cd} is recovered also when analyzing the trajectories of individual particles.
In order to reconstruct the local diffusion coefficient, we select a region of the simulation box, take a snapshot of its electromagnetic configuration, impose periodic boundary conditions, and propagate many particles in it for a long time using a Boris pusher \citep{boris70}.
Such an ``ergodic'' approach is valid if the shock structure does not vary dramatically on dynamical time-scales, and if fields are almost uniform in the box.
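For reference, a single (non-relativistic) Boris step takes the following form (a minimal Python sketch in code units with $q=m=1$; the interpolation of the fields to the particle position is abstracted into the callables \texttt{E\_fld} and \texttt{B\_fld}):
\begin{verbatim}
# Sketch: one Boris step for a test particle in given E and B fields.
import numpy as np

def boris_step(x, v, dt, E_fld, B_fld):
    E, B = E_fld(x), B_fld(x)
    v_minus = v + 0.5 * dt * E                     # half electric kick
    t = 0.5 * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation
    v_new = v_plus + 0.5 * dt * E                  # second half kick
    return x + dt * v_new, v_new                   # drift, then return
\end{verbatim}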
For any energy $E$, we use a box of width $\lambda(E)$, defined as twice the Bohm diffusion length in $B_0$, i.e.:
\begin{equation}\label{lambda}
\lambda(E)\simeq 2\frac{D_B(E)}{v_{sh}}=M\frac{E}{E_{sh}}\frac{c}{\omega_p}.
\end{equation}
The spatial diffusion coefficient along ${\bf B}_0$ can be calculated by taking the asymptotic time limit of the running diffusion coefficient $D(E,t)$, defined as:
\begin{equation}\label{eq:runD}
D(E)\equiv\lim_{t\to\infty} D(E,t) =\lim_{t\to\infty}\sum_{n=1}^N\frac{|x_n(t)-x_{n}(0)|^2}{2tN}.
\end{equation}
In order to remove statistical fluctuations, we average over a large number of particles, $N$, with fixed energy and random velocity direction.
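In practice, Equation~\ref{eq:runD} amounts to the following (a sketch; we assume the tracked positions along ${\bf B}_0$ are stored as an $(N, n_t)$ array, which is an implementation detail, not a description of the actual analysis code):
\begin{verbatim}
# Sketch: running diffusion coefficient from N stored trajectories.
import numpy as np

def running_diffusion(x_traj, t):
    """x_traj: (N, n_t) positions along B_0; t: (n_t,) times, t[1:] > 0."""
    dx2 = (x_traj - x_traj[:, :1])**2          # squared displacements
    return dx2[:, 1:].mean(axis=0) / (2.0 * t[1:])

# Diffusing particles give a late-time plateau; free-streaming particles
# give a running coefficient that keeps growing linearly with t.
\end{verbatim}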
Figure \ref{fig:track} shows the running diffusion coefficient in the precursor of Run B (thick red line), averaged over $N=100$ random particles (thin gray lines) of different energies ($E=10,100,1000E_{sh}$, top to bottom panels, respectively).
For all the energies below $E_{max}\approx 300E_{sh}$, the running diffusion coefficient tends to an asymptotic value.
This confirms that particles are indeed diffusing, i.e., their mean displacement from the initial position $x(0)$ increases in time as $\langle \Delta x\rangle\propto \sqrt{2Dt}$.
Conversely, tracks of $E=1000E_{sh}$ particles (bottom panel of Figure \ref{fig:track}) show a mixed behavior: besides diffusing ions, we have free-streaming ions, for which $D(t)\propto t$ (see Equation~\ref{eq:runD});
when averaged over all the particles, this bimodal distribution returns a large $D\sim 10D_B$, but this result must be taken with a grain of salt.
Finally, we have some tracks that curve down for a while, which correspond to ions momentarily trapped in the generated turbulence.
\begin{figure}\centering
\includegraphics[trim=0px 30px 0px 260px, clip=true, width=.485\textwidth]{cd2080.eps}
\caption{\label{fig:cd2080}
Diffusion coefficient immediately in front of the shock for $M=20,80$, inferred by tracking individual particles (points with fiducial error bars of 20\%), and by using the procedure outlined in Section \ref{sec:anal} (solid red and blue lines, corresponding to the last time in Figures \ref{fig:CoeffDiff20} and \ref{fig:CoeffDiff80}).
Particles are propagated in periodic boxes with fields extracted from regions of thickness $\lambda(E)$ ahead of the shock.
\emph{A color figure is available in the online journal.}}
\end{figure}
In Figure \ref{fig:cd2080} we compare the upstream diffusion coefficient measured by averaging the CR distribution function over the upstream (as in Section \ref{sec:anal}, solid lines), and by tracking individual particles with different energies (symbols) in a region of thickness $\lambda(E)$ ahead of the shock in Runs B and D, at $t=2400\omega_c^{-1}$ and $t=500\omega_c^{-1}$, respectively.
The agreement between the two methods is very good, even at energies above $E_{max}$, in spite of Equation~\ref{eq:convdiff} becoming progressively less accurate.
The biggest limitation of the analytical method of Section \ref{sec:anal} is that it only applies to the shock precursor;
particle tracking is the only viable choice to study ion transport far upstream and in the downstream.
In Figure \ref{fig:cdx} we show the diffusion coefficient measured by propagating particles with energy $E=20,100E_{sh}$ in different regions of the shock in Run B.
The spatial profile of the diffusion coefficient is rather similar at different energies, and shows a minimum behind the shock, where $D(E)$ is about $r\approx 4$ times smaller than immediately upstream, consistent with field compression at the shock.
Finally, the diffusion coefficient increases when moving toward upstream and downstream infinity, consistent with the profile of the self-generated turbulence.
\begin{figure}\centering
\includegraphics[trim=0px 280px 0px 15px, clip=true, width=.485\textwidth]{cdx.eps}
\caption{\label{fig:cdx}
Spatial dependence of the diffusion coefficient for a Mach 20 shock (Run B), at $t=2400\omega_c^{-1}$, for ions with energy $E=20,100E_{sh}$ (magenta and cyan lines, respectively).
Points correspond to $D(E,x)$ calculated by tracking CRs in periodic boxes centered at $x$, and of width $2\lambda(E)$, indicated as the distance between the vertical colored lines and the dotted line; bars indicate a fiducial 20\% error.
\emph{A color figure is available in the online journal.}}
\end{figure}
\section{Maximum ion energy}\label{sec:Emax}
The diffusion rate is of primary importance for determining the maximum energy achievable via DSA.
Let us consider the evolution of $E_{max}(t)$, determined by fitting the post-shock non-thermal ion spectrum (figure 2 in Paper II) with a power-law $\propto E^{-1.5}$, plus an exponential cut-off at $E_{max}(t)$.
In DSA, the instantaneous maximum ion energy is regulated by the finite acceleration time, the other potential limiting factors being the size of the system, which must encompass particle trajectories, or energy losses (usually relevant for leptons only).
The acceleration rate depends on the time it takes a particle to diffuse back and forth across the shock, and is calculated as \citep[see, e.g.,][]{drury83}:
\begin{equation}\label{eq:Tacc}
T_{acc}(E)=\frac{3}{u_1-u_2}\left[\frac{D_1(E)}{u_1}+\frac{D_2(E)}{u_2} \right],
\end{equation}
where the subscripts 1 and 2 refer to upstream and downstream, respectively.
For simplicity, we assume $u$ (the fluid velocity in the shock reference frame) and $D$ to be piecewise constant upstream and downstream; Equation~\ref{eq:Tacc} can be generalized to the case of efficient CR acceleration, in which these quantities depend on $x$ \citep{BAC07}.
We then set $D_1\simeq r D_2\equiv D$ (as inferred from Figure \ref{fig:cdx}), and rewrite Equation \ref{eq:Tacc} as:
\begin{equation}
T_{acc}(E)\simeq\frac{6r^3}{(r^2-1)(r+1)}\frac{D(E)}{v_{sh}^2}\simeq\frac{6D(E)}{v_{sh}^2},
\end{equation}
with $v_{sh}$ the velocity of the upstream fluid in the downstream frame.
By setting $t\approx T_{acc}(E_{max})$, one obtains:
\begin{equation}\label{eq:Emax_B}
E_{max}(t)\simeq \frac{E_{sh}}{3\kappa}\omega_c t,
\end{equation}
where we introduced $\kappa\equiv D(E_{max})/D_B(E_{max})$.
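As a rough numerical illustration (not a substitute for the measured $E_{max}(t)$ shown in Figure~\ref{fig:Emax}, and accurate only within a factor of a few, see below), Equation~\ref{eq:Emax_B} can be evaluated with the best-fitting values of $\kappa$ quoted below:
\begin{verbatim}
# Sketch: E_max(t) ~ E_sh/(3 kappa) * omega_c t, with omega_c = 1.
for M, kappa, t_end in [(20, 2.1, 2500.0), (60, 1.2, 1600.0)]:
    print(M, t_end / (3.0 * kappa))   # E_max/E_sh at the end of the run
\end{verbatim}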
Figure \ref{fig:Emax} shows the time evolution of the inferred maximum ion energy for Runs B and F in Table \ref{tab:box}, corresponding to $M=20$ and 60.
We compare $E_{max}(t)$ with the scaling provided by Equation~\ref{eq:Emax_B}, obtaining good fits with $\kappa_{20}\approx 2.1$ and $\kappa_{60}\approx 1.2$ for $M=20$ and $M=60$, respectively (dashed lines in Figure \ref{fig:Emax}).
For the low-$M$ case, $\kappa_{20}$ is consistent with the value of $D(E_{max})$ inferred from Figures \ref{fig:CoeffDiff20} and \ref{fig:cd2080}, within a factor of about 2.
As an example of a high-$M$ case, we follow for a long time a quasi-1D shock with $M=60$ (Run F), which is a reasonable choice because the particle spectrum is not very dependent on the transverse size of the simulations, as we showed in the appendix of Paper II.
Very interestingly, the value of $\kappa_{60}$ for the $M=60$ case is smaller than for the lower-$M$ shock, attesting to the relevance of magnetic field amplification (which increases with the shock strength, see Equation \ref{eq:db}) in favoring the rapid energization of accelerated particles.
In particular, since
\begin{equation}
\kappa=\frac{D(E_{max})}{D_B(E_{max})}\propto \frac{B_0}{B_{tot}}\propto \frac1{\sqrt{M}},
\end{equation}
one would expect $\kappa_{60}/\kappa_{20}\approx \sqrt{60/20}=\sqrt{3}$, which is in good agreement with the best-fitting values in Figure \ref{fig:Emax}.
\begin{figure}\centering
\includegraphics[trim=30px 0px 40px 5px, clip=true, width=.485\textwidth]{Emax_evo.eps}
\caption{\label{fig:Emax}
Time evolution of the maximum ion energy for parallel shocks with $M=20$ and 60 (Runs B and F in Table \ref{tab:box}), compared with the DSA prediction according to Equation \ref{eq:Emax_B}, with $\kappa_{20}=2.1$ and $\kappa_{60}=1.2$, respectively (dashed lines).}
\end{figure}
It is worth remembering that Equation~\ref{eq:Emax_B} is expected to be accurate only within a factor of a few, since $\kappa$ is in principle a function of time, and both $D(E)$ and $u_1$ should be functions of the position in the precursor.
Yet, the good agreement between DSA theory and simulations confirms that the acceleration time is dominated by the most recent (longest) cycle, and suggests that diffusion is a good approximation for the transport of non-thermal ions up to the exponential cut-off;
this can also be viewed as an independent estimate of the diffusion coefficient close to $E_{max}$.
Moreover, we showed that the more effective the magnetic field amplification, the more rapid the increase of the CR maximum energy with time;
this is a direct consequence of the scaling of the diffusion coefficient with $B_{tot}/B_0$, and has crucial implications on the maximum energy achievable in given classes of sources, and in particular on the possibility of producing PeV protons in SNRs.
As a final comment, we report that \cite{gs12} found a significantly shallower dependence of $E_{max}$ on $t$ after $t\sim 200 \omega_c^{-1}$ (figure 9 in their paper) because of the use of small computational boxes.
Instead, our large longitudinal dimension in run B ($10^5c/\omega_p$) allows us to properly account for the diffusion lengths of the most energetic ions until $t\approx 2000\omega_c^{-1}$.
\section{Conclusions}\label{sec:concl}
This paper is the third of a series aimed at investigating several aspects of ion acceleration at non-relativistic shocks through hybrid simulations.
In previous papers \citep[][Paper I,II]{DSA,MFA}, we outlined the features of DSA and of magnetic field amplification.
Here, we study the effects of self-generated magnetic turbulence on the accelerated particles, characterizing how particles diffuse in pitch-angle, which corresponds to a random walk in space.
Particle diffusion allows multiple shock crossing, and is crucial in regulating the acceleration rate.
We find that, in the shock precursor of quasi-parallel shocks, accelerated ions are scattered by the self-generated magnetic turbulence, with a mean free path roughly comparable with the particle's gyroradius.
There are several interesting points to notice.
\begin{itemize}
\item At low Mach numbers ($M\lesssim 30$), scattering is due to resonant Alfv\'en waves with amplitude $\delta B/B_0\lesssim 1$ generated by accelerated ions (Figure \ref{fig:CoeffDiff20}), as predicted within the quasi-linear theory of streaming instability (section 3 of Paper II).
In this case, the \emph{self-generated diffusion} coefficient must be calculated in the fraction of the total magnetic field in waves with resonant wavelengths (Equation~\ref{eq:Dsg}).
\item For strong shocks, instead, $\delta B/B_0\gg 1$, and the wave spectrum has a more complicated shape because of the relevance of the NRH instability.
Most of the magnetic energy is in modes resonant in wavelength with high-energy ions (figure 7 in Paper II), and all the accelerated particles feel large-scale, non-linear perturbations.
The result is that energetic particles experience \emph{Bohm-like diffusion} in the total (amplified) magnetic field (Figure \ref{fig:CoeffDiff80}).
\item We calculate the local diffusion coefficient: i) by exploiting the analytic theory of DSA (Section \ref{sec:anal}); and
ii) by tracking individual particles in self-consistent electromagnetic fields (Section \ref{sec:track}).
The two methods return very consistent results.
The latter allows us also to investigate the spatial dependence of the diffusion coefficient (Figure \ref{fig:cdx}).
\item The evolution of $E_{max}(t)$ is governed by the time it takes for energetic ions to diffuse back and forth across the shock.
Bohm-like diffusion in the amplified field accounts reasonably well for such an evolution (Figure \ref{fig:Emax}), providing another independent test of particle diffusion at highest energies.
\item The maximum energy achievable in a given amount of time depends on magnetic field amplification (compare the curves for shocks with $M=20$ and 60 in Figure \ref{fig:Emax}):
stronger shocks accelerate CRs up to larger energies, proportional to the suppression of the diffusion coefficient produced by the larger amplification of the initial magnetic field.
\end{itemize}
In a forthcoming publication we will cover the mechanisms that lead to the injection of ions into DSA, in order to provide closure for the present series of papers.
\subsection*{}
We wish to thank L.\ Gargat\'e for providing a version of \emph{dHybrid}, and the referee for her/his valuable comments.
This research was supported by NSF grant AST-0807381 and NASA grant NNX12AD01G, and facilitated by the Max-Planck/Princeton Center for Plasma Physics.
This work was also partially supported by a grant from the Simons Foundation (grant \#267233 to AS), and by the NSF under Grant No.\ PHYS-1066293 and the hospitality of the Aspen Center for Physics.
Simulations were performed on the computational resources supported by the PICSciE-OIT TIGRESS High Performance Computing Center and Visualization Laboratory. This research also used the resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No.\ DE-AC02-05CH11231, and XSEDE's Stampede under allocation No.\ TG-AST100035.
\bibliographystyle{yahapj}
\section{INTRODUCTION}
The pose of a rigid object can be represented as a 6D transform consisting of translation and rotation. This 6D pose allows a robotic set-up to manipulate and analyze the object. By obtaining this pose through visual pose estimation, the flexibility of the set-up is greatly increased.
Without visual pose estimation, the position of objects needs to be known beforehand by using mechanical fixtures. Introducing new objects thus requires the assembly of new fixtures.
\begin{figure}[tb]
\centering
\begin{subfigure}[t]{.24\textwidth}
\centering
\includegraphics[trim=350 50 350 70, clip, width=0.99\linewidth]{gfx2/synth.png}
\caption{Synthetic training data}
\label{fig:res:synth}
\end{subfigure}%
~
\begin{subfigure}[t]{.24\textwidth}
\centering
\includegraphics[trim=350 50 350 70, clip, width=0.99\linewidth]{gfx2/t.png}
\caption{Real test data}
\label{fig:res:real}
\end{subfigure}%
\begin{subfigure}[t]{.24\textwidth}
\centering
\includegraphics[trim=350 50 350 70, clip, clip, width=0.99\linewidth]{gfx2/nodr.png}
\caption{No DR}
\label{fig:res:synth_pred_no_noise}
\end{subfigure}%
~
\begin{subfigure}[t]{.24\textwidth}
\centering
\includegraphics[trim=350 50 350 70, clip, clip, width=0.99\linewidth]{gfx2/nodrt.png}
\caption{No DR}
\label{fig:res:real_pred_no_noise}
\end{subfigure}%
\begin{subfigure}[t]{.24\textwidth}
\centering
\includegraphics[trim=350 50 350 70, clip, clip, width=0.99\linewidth]{gfx2/dr.png}
\caption{With DR}
\label{fig:res:noise_pred}
\end{subfigure}%
~
\begin{subfigure}[t]{.24\textwidth}
\centering
\includegraphics[trim=350 50 350 70, clip, clip, width=0.99\linewidth]{gfx2/drt.png}
\caption{With DR}
\label{fig:res:real_pred_noise}
\end{subfigure}%
\caption{ Visualization of the domain randomization's (DR) impact on the network's generalization ability. (a) and (b) show synthetic and real data, respectively.
(c) shows a perfect feature prediction on synthetic data without domain randomization. In (d), the network trained without domain randomization shows poor results on the real test data. (e) shows the more difficult feature prediction with domain randomization, but performance on the real test scene in (f) is increased.
}
\label{fig:res}
\end{figure}
However, the complex configuration of the visual pose estimation system can make the set-up very time-consuming.
Different scenarios vary greatly, and it is essential to fit the configuration to the scenario. Variations such as camera type, lighting, background clutter, and levels of occlusion all influence the system.
While controlling these influences can simplify the set-up \cite{hagelskjaer2019using}, the environment cannot always be controlled. For example, attempting to remove shadows in an image can quickly become a time-consuming task.
Properties such as object material and shape will also influence the performance of the pose estimation algorithm.
In academia, good results are often obtained by the method known as Grad Student Descent \cite{gencoglu2019hark}, where hyper-parameters are continuously tuned until adequate results are obtained.
As a result of these difficulties, the set-up time can often exceed a week \cite{hagelskjaer2017does}. This set-up time significantly reduces the usability of robotics, and as most of the time is spent on adjusting the software \cite{hagelskjaer2017does}, it is essential to simplify this part.
One approach has been the automatic tuning of pose estimation parameters using training data \cite{hagelskjaer2019bayesian}. This method uses real data to train the parameters. This approach has even shown increased performance compared with parameters tuned manually.
Deep learning algorithms have used this approach very successfully, using real training data to tune the network structure. On most benchmarking datasets, the top-performing algorithms are based on deep neural networks \cite{hinterstoisser2012model, brachmann2014learning, kaskman2019homebreweddb, xiang2017posecnn}.
However, obtaining real training data is a task that needs to be performed manually. Moreover, for deep learning models, this can easily amount to hundreds of images that must be collected and correctly labeled.
To overcome this obstacle, the use of synthetic training data has been introduced \cite{thalhammer2019towards, denninger2019blenderproc, kehl2017ssd}. By using synthetic training data, the manual set-up is greatly reduced, while the algorithm retains high performance.
In the BOP challenge \cite{hodan2020bop} methods using synthetic training data obtain results in the same range as methods trained on real data.
The method that the optimization in this paper is based on also obtains results on the LINEMOD \cite{hinterstoisser2012model} and OCCLUSION \cite{brachmann2014learning} datasets similar to methods trained on real data.
However, the success of methods trained on synthetic data is highly dependent on domain randomization \cite{hagelskjaer2020bridging}, which is configured manually. The pose estimation parameters are also found heuristically to obtain the best results.
In this paper, we present a method for the automatic optimization of pose estimation parameters. The optimization is performed using the same synthetic data used for training the deep learning model. However, our results show that domain randomization is necessary during parameter optimization to obtain good results. The effect of the domain randomization is shown in Fig.~\ref{fig:res}. To this end, we employ the domain randomization from the deep learning training. Using domain randomization drastically improves the performance and allows our model to outperform models trained with real data and heuristically found parameters. On the challenging OCCLUSION \cite{brachmann2014learning} dataset, our method, trained and optimized exclusively on synthetic data, obtains a recall of 82.0~\%, which is the new state of the art.
To create a usable parameter optimization system, the parameters are split into two groups. This allows the isolation of parameters that influence both performance and run-time. For many applications, the run-time is decisive for the feasibility of the system. Our optimization allows the user to obtain the best performance within a desired maximum run-time.
This paper presents the following main contributions.
\begin{itemize}
\item A method for the automatic set-up of pose estimation systems using only synthetic data.
\item A simple method to obtain the desired run-time of the system.
\item State-of-the-art results on the challenging OCCLUSION dataset.
\end{itemize}
The remainder of the paper is structured as follows: We first review related work in Sec.~\ref{related}. In Sec.~\ref{pose_estimation}, the developed method is explained. In Sec.~\ref{evalution}, the parameter optimization is performed, and the performance is verified. Finally, in Sec.~\ref{conclusion}, we conclude the paper and discuss future work.
\section{RELATED WORK}
\label{related}
Since the introduction of deep neural networks, domain randomization has always been present \cite{shorten2019survey}. Domain randomization is used to minimize the distance between the training and test sets. The less the training set represents the test set, the more critical domain randomization is.
The most common domain randomization strategies are adding Gaussian noise, randomly cropping the image, rotating the image, and changing hue, contrast, and brightness \cite{shorten2019survey}. These strategies are found in many pose estimation methods \cite{kehl2017ssd, peng2019pvnet, he2020pvn3d}.
There also exist other strategies for the generalization of networks. One commonly used strategy is pre-training, where the network is trained on a large and diverse dataset to learn features, which are then re-used when trained on the actual dataset. Another approach is the dropout of features during training to avoid the network relying on a single feature.
\subsubsection{Pose Estimation}
Successful pose estimation methods are generally based on networks operating on 2D images. This is a result of the success of 2D convolutional networks, for which many good architectures, pre-trained weights, and domain randomization strategies are available \cite{he2017mask}. The following pose estimation methods all use 2D networks for pose estimation; 3D information is then often used to refine the poses. In our approach, we use the 3D data directly in the learning. This gives the advantage that all modalities are included in the network, but the disadvantage that no networks pre-trained on large datasets are available.
An example is SSD-6D \cite{kehl2017ssd}, a deep learning based pose estimation method using only synthetic data. Renders of objects are cut and pasted onto images from the COCO \cite{lin2014microsoft} dataset. The network is trained to classify the rotation and presence of objects. Domain randomization is performed by changing the brightness and contrast. The method is tested on the LINEMOD dataset, where it obtains good performance but is outperformed by methods trained with real data.
In PVNet \cite{peng2019pvnet}, 10000 images are rendered and cut and pasted onto images from the SUN397 \cite{xiao2010sun} dataset. Then random cropping, resizing, rotation, and Gaussian noise are applied. This network also uses real training data along with the synthetic data.
PVN3D \cite{he2020pvn3d} uses the training data and domain randomization from PVNet \cite{peng2019pvnet}, but expands them with 3D point-clouds. The network obtains very good performance on the LINEMOD \cite{hinterstoisser2012model} dataset, which it is trained on, but it does not perform well on the more challenging OCCLUSION dataset.
PoseCNN \cite{xiang2017posecnn} is the current state-of-the-art method on the OCCLUSION \cite{brachmann2014learning} dataset. Real data is used along with synthetic renders with objects placed randomly in a scene.
The DPOD \cite{zakharov2019dpod} method uses real training data and object renderings which are cut and pasted onto images from the COCO \cite{lin2014microsoft} dataset. Brightness, saturation, and Gaussian noise are used to increase the generalizability.
Similar to our method, DenseFusion \cite{wang2019densefusion} processes point-cloud data. The point-cloud processing is combined with 2D processing, and the network is only trained on real data. The network is tested on the LINEMOD \cite{hinterstoisser2012model} dataset and obtains good results, but is outperformed by methods using either synthetic \cite{hagelskjaer2020bridging} or real data \cite{he2020pvn3d}.
The method CosyPose \cite{labbe2020cosypose} shows very good results with extensive domain randomization. One million synthetic training images are generated, with objects placed randomly. Gaussian blur, contrast, brightness, color, and sharpness filters are all applied to the images. The cut and paste strategy is then used to increase the background variance. These data, along with real training data, yield state-of-the-art performance on two pose estimation datasets. The method also showed the best recall among methods using only synthetic data in the 2020 BOP challenge \cite{hodan2020bop}.
As mentioned, most of these methods only use the cut and paste strategy to generate the background. The BlenderProc \cite{denninger2019blenderproc} dataset uses photo-realistic renderings to include reflections and shadows. However, methods using this dataset still use cut and paste to increase performance \cite{labbe2020cosypose}.
\subsubsection{Learning Domain Randomization}
In the reviewed pose estimation methods, the domain randomizations are manually tuned. However, a number of methods exist to tune the domain randomization automatically.
One approach is to directly minimize the domain gap between real and synthetic data by introducing unlabelled real data \cite{ganin2016domain}. During training on the synthetic data, the learned features should not be able to discriminate between real and synthetic data.
The domain randomization problem can also be seen as an optimization task \cite{zoph2020learning}. Here the COCO \cite{lin2014microsoft} dataset is used as a validation set where different domain randomization strategies are tested. The best domain randomization policy is then used for testing.
The same strategy is seen in AutoAugment \cite{cubuk2019autoaugment}, which tests different domain randomization strategies on a validation dataset. Here state-of-the-art results are obtained on many datasets.
Domain randomization can also be used to increase robustness against adversarial images \cite{yu2019pda}. Similar to our method, the goal is to increase the amount of domain randomization instead of finding the correct domain randomization. They show that this improves performance compared with fixed domain randomization.
\subsubsection{Domain Randomization in 3D}
Successful deep learning of point-clouds was introduced with PointNet
\cite{qi2017pointnet}. Here domain randomization consists of random Gaussian noise on the overall position, Gaussian noise for each point, and a random rotation. We base our domain randomization on this approach but add random Gaussian noise to the two added channels of normal vector and RGB color. Additionally, as the data is obtained with a 2.5D scanner, we add flattening of the point-cloud to simulate erroneous depth measurements.
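As a concrete illustration, the sketch below applies these randomization types to a point-cloud with XYZ, normal, and RGB channels. It is a minimal NumPy sketch for illustration only; the noise levels shown and the exact flattening procedure are assumptions and do not correspond to our training code.
\begin{verbatim}
# Minimal sketch of the point-cloud domain randomization described above.
# Noise levels and the flattening procedure are illustrative assumptions.
import numpy as np

def augment(xyz, normals, rgb, rng,
            sigma_xyz=1.0,       # mm, per-point Gaussian noise
            sigma_normal=0.02,   # Gaussian noise on normal vectors
            sigma_rgb=0.02,      # per-point Gaussian RGB noise
            sigma_shift=0.04,    # global RGB shift
            max_rot_deg=5.0,     # random rotation about the z-axis
            flatten_frac=0.02):  # fraction of points with flattened depth
    xyz = xyz + rng.normal(0.0, sigma_xyz, (1, 3))       # overall position
    xyz = xyz + rng.normal(0.0, sigma_xyz, xyz.shape)    # per-point noise
    normals = normals + rng.normal(0.0, sigma_normal, normals.shape)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
    rgb = rgb + rng.normal(0.0, sigma_rgb, rgb.shape)    # per-point color
    rgb = rgb + rng.normal(0.0, sigma_shift, (1, 3))     # global color shift
    angle = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                    [np.sin(angle),  np.cos(angle), 0.0],
                    [0.0,            0.0,           1.0]])
    xyz, normals = xyz @ rot.T, normals @ rot.T
    # Flattening: collapse the depth of a random subset of points to a
    # common value to simulate erroneous depth from the 2.5D sensor.
    mask = rng.random(len(xyz)) < flatten_frac
    if mask.any():
        xyz[mask, 2] = xyz[mask, 2].mean()
    return xyz, normals, np.clip(rgb, 0.0, 1.0)
\end{verbatim}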
For 3D point-clouds a number of new domain randomization methods have also been developed.
One approach is cross-modal domain randomization \cite{wang2021pointaugmenting}, where the object is cut and pasted into different scenes. This is similar to the approach seen in 2D, where more samples can be created from small amounts of data.
Another method to create more samples from existing data is point interpolation \cite{chen2020pointmixup}. New data is generated by interpolating between the points from two different point-clouds. These two methods are useful when only small amounts of data are present.
Learning domain randomization for point-clouds has also been performed. In PointAugment \cite{li2020pointaugment} a network is trained to generate domain randomization. The domain randomization is both point-wise and shape-wise. The augmentation is optimized during network training with the requirement that the loss with augmentation should be slightly larger than without augmentation. This approach shows improved performance
compared with classic augmentation.
A method that successfully uses 3D domain randomization is PointVoteNet \cite{hagelskjaer2019pointvotenet}, which is trained on real data. To create more samples, background data is cut and pasted.
\subsubsection{Previous Method}
Our pose estimation method is based on a previous method \cite{hagelskjaer2020bridging}. The method is trained on the BlenderProc \cite{denninger2019blenderproc} dataset created for the BOP challenge \cite{hodan2020bop}. That method uses domain randomization based on the Kinect sensor. In this paper, the domain randomization is instead optimized during the training without knowledge of the scene.
An example of optimizing classic computer vision is seen with Bayesian Optimization \cite{hagelskjaer2019bayesian}. The parameters of an existing pose estimation method are optimized using real data, and the improvement gives state-of-the-art performance. We use a similar approach in our paper, but synthetic images with domain randomization are used instead of real training data. Thus, the set-up of our method is entirely based on synthetic data.
\section{METHOD}
\label{pose_estimation}
The pose estimation in this paper is based on an existing method \cite{hagelskjaer2020bridging}. In the first step, a candidate detector is used, and from its output, the method extracts point-clouds. These point-clouds are fed through a modified DGCNN \cite{dgcnn} network, and three predictions are obtained: classification, background segmentation, and votes. The classification is the probability that the object is in the center of the point-cloud. This is used to filter point-clouds, as performing pose estimation for each point-cloud is very time-consuming. The background segmentation and the votes are combined for the remaining point-clouds to create keypoint matches to the object model. RANSAC \cite{fischler1981random} is then used to find possible pose estimations, which are further refined by Coarse to Fine Iterative Closest Point (C2F-ICP). Finally, the best pose estimation is determined by a depth check using the projected model.
Compared with the previous method \cite{hagelskjaer2020bridging}, the RANSAC, ICP, and depth distances are scaled by the object diagonal. The depth check is also aided by a contour check.
The deep neural networks are trained using only synthetic data from the BlenderProc dataset \cite{hodan2020bop, denninger2019blenderproc}. However, all parameters in the pose estimation algorithm were originally found heuristically. In this paper, these parameters are instead found by optimization, using the same synthetic data.
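For readability, the flow of the method can be outlined as in the sketch below. This is a hypothetical structural outline only: every helper callable in the sketch is a placeholder for the corresponding component, not the actual implementation, and details such as the contour check are omitted.
\begin{verbatim}
# Hypothetical outline of the pipeline described above; all helper
# callables are placeholders, not the actual implementation.
def estimate_pose(scene, model_diag, detect, crop, dgcnn,
                  ransac_pose, c2f_icp, depth_score,
                  PC, PE, RI, II, DC):
    # 1) Candidate detection and point-cloud extraction.
    clouds = [crop(scene, det) for det in detect(scene)[:PC]]
    # 2) Classification, background segmentation and voting per cloud.
    preds = [dgcnn(pc) for pc in clouds]               # (cls, seg, votes)
    keep = sorted(range(len(preds)),
                  key=lambda i: -preds[i][0])[:PE]     # best PE candidates
    hypotheses = []
    for i in keep:
        _, seg, votes = preds[i]
        # 3) Keypoint matches -> RANSAC -> C2F-ICP refinement, with
        #    distances scaled by the object diagonal.
        for pose in ransac_pose(votes, seg, iters=RI, diag=model_diag)[:DC]:
            hypotheses.append(
                c2f_icp(pose, clouds[i], iters=II, diag=model_diag))
    # 4) Final selection by the depth (and contour) check.
    return max(hypotheses, key=lambda p: depth_score(p, scene))
\end{verbatim}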
\subsection{Candidate Detector}
In the original implementation, a MASK R-CNN \cite{he2017mask} trained on the synthetic BlenderProc \cite{denninger2019blenderproc} dataset was used as the candidate detector. However, the network's ability to classify point-clouds makes it agnostic to the candidate detector. As a result, the method is also tested using detections from the CosyPose method \cite{labbe2020cosypose}. CosyPose was selected based on the good results using synthetic data on the BOP datasets \cite{labbe2020cosypose}. The detections have been created by running the CosyPose detector trained on the BlenderProc dataset on the complete LINEMOD \cite{hinterstoisser2012model} and OCCLUSION \cite{brachmann2014learning} datasets.
\subsection{Domain Randomization Optimization}
\begin{figure}[tb]
\vspace{1.5mm}
\centering
\includegraphics[trim=0 0 0 0, clip, width=0.95\linewidth]{gfx/parapose_network.drawio.pdf}
\caption{Overview of the network training and domain randomization optimization. The domain randomization model is updated after each epoch.}
\label{fig:model_training}
\vspace{-6mm}
\end{figure}
The method is based on a set of domain randomization types, each with a noise level. As in other methods \cite{peng2019pvnet}, the level of noise in each sample is a random Gaussian based on a selected max level.
The task of the optimization is to determine, for each domain randomization type, the max noise level that best improves the generalizability.
The six domain randomization types are Gaussian XYZ noise, Gaussian normal vector noise, Gaussian RGB noise, Gaussian RGB shift, rotation, and flattening.
The noise starts at predetermined levels of 1.0, 0.02, 0.02, 0.04, 5.0, and 0.02, respectively, where the point noise is in millimeters, the rotation in degrees, and all other values in percent. For each parameter, the jump size is half the original value.
The first four epochs are run without domain randomization, after which the domain randomization is activated with the noise at the predetermined levels. At eight epochs, the optimization of the domain randomization is started by recording the loss level and increasing the first noise level.
After one epoch, the loss is measured and compared with the recorded loss. If the loss is decreased by more than 2.5 percent, then the noise isn't limiting the training, and the noise level is increased again. Otherwise, the next noise type is increased. If the loss increase is more than five percent, the max level is reduced to the previous level, and this noise type is no longer increased. An overview of the training and domain randomization procedure is shown in Fig.~\ref{fig:model_training}.
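The schedule can be summarized in the sketch below. The start levels, step sizes, and the 2.5\% and 5\% thresholds follow the description above; the training routine is a placeholder and the remaining control flow is a simplifying assumption.
\begin{verbatim}
# Sketch of the loss-driven noise schedule described above.
# train_one_epoch is a placeholder for the real training loop.
def train_one_epoch(noise_levels):
    return 1.0   # placeholder: run one epoch, return the training loss

levels = {"xyz": 1.0, "normal": 0.02, "rgb": 0.02,
          "rgb_shift": 0.04, "rotation": 5.0, "flatten": 0.02}
steps = {k: v / 2.0 for k, v in levels.items()}   # jump size: half of start
frozen, active = set(), "xyz"

def next_active(current):
    keys = list(levels)
    for s in range(1, len(keys) + 1):
        cand = keys[(keys.index(current) + s) % len(keys)]
        if cand not in frozen:
            return cand
    return None

prev_loss = None
for epoch in range(60):
    loss = train_one_epoch(levels if epoch >= 4 else None)
    if epoch < 8 or active is None:
        prev_loss = loss
        continue
    if epoch == 8:
        prev_loss = loss
        levels[active] += steps[active]          # first trial increase
        continue
    if loss < prev_loss * (1.0 - 0.025):         # noise is not limiting
        levels[active] += steps[active]
    elif loss > prev_loss * (1.0 + 0.05):        # too much noise
        levels[active] -= steps[active]          # revert to previous level
        frozen.add(active)
        active = next_active(active)
        if active is not None:
            levels[active] += steps[active]
    else:                                        # move to the next type
        active = next_active(active)
        if active is not None:
            levels[active] += steps[active]
    prev_loss = loss
\end{verbatim}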
\subsection{Parameters}
Several parameters are tuned in this pose estimation method to obtain good performance. To explain the method, an overview is given of the optimized parameters. For a further explanation of the method, the reader is referred to the original paper \cite{hagelskjaer2020bridging}.
We divide the parameters into continuous and discrete parameters. The continuous parameters can have any real value within a known range. An important aspect is that fine-tuning these values will primarily increase the precision without influencing the run-time.
The discrete parameters are integer values that impact both the precision and run-time of the algorithm. It is, therefore, essential to find the optimal trade-off for the given task.
The following discrete parameters are optimized:
the number of point-clouds that are classified, $PC$; the number of remaining point-clouds on which pose estimation is performed, $PE$; the number of RANSAC iterations, $RI$; the number of RANSAC proposals to be sorted with the depth check, $DC$; and the number of ICP iterations, $II$.
The following continuous parameters are optimized:
the cut-off radius used to determine whether point-clouds with fewer than 2048 points are accepted, $sr$;
the threshold for accepting votes at symmetric positions, $vt$; the distance in mm for the RANSAC algorithm, $rd$; the C2F-ICP distance in mm, $id$; the scaling in the C2F-ICP, $is$; the distance in mm used to determine whether a point belongs to the foreground or the object, $bd$; and the distance for accepting object points, $ad$.
Additionally, a number of parameters are seen as essential to the structure of the algorithm and are kept fixed. The C2F-ICP has three resolutions. The network structure has 2048 input points. A minimum of 512 points is needed to process a point-cloud, and a minimum of 100 key-point matches is necessary for the RANSAC algorithm. The point-cloud is pre-processed to a 1mm voxel grid. The object is pre-processed to a 5mm voxel grid for the C2F-ICP. The object model is represented by at most 100 keypoints. The RANSAC iterations are split into parts of 50. Normal vectors are computed with a radius of 10 mm.
\subsection{Pose Estimation Parameter Optimization}
As stated in Sec.~\ref{pose_estimation}, two different types of parameters are used in the pose estimation method. The discrete parameters have a direct influence on the run-time, while the continuous parameters do not. As a result of these differences, two different parameter optimizations are performed.
First, the continuous parameters are found with fixed discrete parameter values, and the optimized continuous parameters are then used for the discrete optimization.
\subsubsection{Continuous Parameters}
For the continuous parameters, the goal is to find the optimal parameter set for performance. As many parameters are present, the parameters are expected to influence each other. The risk of a local optimization finding a local maximum is, therefore, present. To avoid this, a global optimization method is used.
The chosen optimization method is Bayesian Optimization with Upper Confidence Bound as the acquisition function \cite{snoek2012practical}. This method has been used successfully for pose estimation parameter optimization \cite{hagelskjaer2019bayesian}. The acquisition function is controlled by a single parameter, $\kappa$, which sets the trade-off between exploration and exploitation. The method does not require residuals, which is useful as very small changes in parameter values will generally not change the recall. The optimization is bounded; most parameters have natural bounds, and the remaining ones can be bounded by reason, such as metric bounds.
The optimization consists of four stages, starting with random exploration and then moving to an exploitation strategy. This is achieved by decreasing the $\kappa$ value: 50 random iterations are performed, followed by 100 with $\kappa = 0.5$, then 50 with $\kappa = 0.1$, and finally 50 with $\kappa = 0.01$, resulting in 250 iterations in all.
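A minimal sketch of this procedure is given below, using a Gaussian process surrogate with a Mat\'ern kernel and the UCB acquisition $\mu + \kappa\sigma$ evaluated on randomly drawn candidate points. The objective shown is a placeholder and the candidate-sampling strategy is an assumption; only the iteration counts and the $\kappa$ schedule follow the text.
\begin{verbatim}
# Sketch of Bayesian Optimization with a UCB acquisition and the
# decreasing kappa schedule described above. evaluate_recall is a
# placeholder for running the pose estimation on the validation images.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_recall(x):                 # placeholder objective
    return -float(np.sum((x - 0.3) ** 2))

bounds = np.array([[0.0, 1.0]] * 7)     # one row per continuous parameter
rng = np.random.default_rng(0)

def sample(n):
    return rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, len(bounds)))

X = list(sample(50))                    # stage 1: 50 random iterations
y = [evaluate_recall(x) for x in X]

for kappa, iters in [(0.5, 100), (0.1, 50), (0.01, 50)]:
    for _ in range(iters):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                      normalize_y=True)
        gp.fit(np.array(X), np.array(y))
        cand = sample(2048)
        mu, std = gp.predict(cand, return_std=True)
        x_next = cand[int(np.argmax(mu + kappa * std))]   # UCB
        X.append(x_next)
        y.append(evaluate_recall(x_next))

best_params = X[int(np.argmax(y))]
\end{verbatim}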
\subsubsection{Discrete Parameters}
The discrete parameter values both influence the recall and the run-time. As a result, one cannot simply determine the best parameter set.
Instead, the goal of this optimization is to find a list of parameter sets: the sets that give the largest increase in recall as the run-time is increased.
A user would then be able to select the best trade-off for the given situation.
As the parameter values are discrete, the optimization is performed with a grid search. All parameter sets are sorted according to run-time, and starting with the lowest run-time, parameter sets are added if they increase the recall.
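This post-processing of the grid search can be summarized by the short sketch below; the tuple layout of the results is an assumption made for illustration.
\begin{verbatim}
# Keep only the parameter sets that improve the recall as the run-time
# grows, i.e., the recall/run-time trade-off curve of the grid search.
def trade_off_curve(results):
    """results: list of (runtime_seconds, recall, params) tuples."""
    curve, best_recall = [], float("-inf")
    for runtime, recall, params in sorted(results, key=lambda r: r[0]):
        if recall > best_recall:
            curve.append((runtime, recall, params))
            best_recall = recall
    return curve
\end{verbatim}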
\subsubsection{Varying Number of Objects}
The algorithm's run-time depends not only on the parameter values but also on the number of searched objects. While a new optimization could be performed if any objects are removed or added, it is desirable to perform the optimization once and
then adjust the parameters to fit the run-time given the number of objects. The relationship between discrete parameter values, number of objects, and run-time is shown in Eq.~\ref{eqn:runtime}. This equation can be solved using least-squares on the grid search results. Here $t_{image}$ is the run-time of the system, $t_{pre}$ is the pre-processing and initial detection of interest points,
$t_{net}$ is the run-time of the network, $t_{ran}$ is the run-time of a RANSAC iteration, $t_{icp}$ is the run-time of an ICP iteration, and $t_{depth}$ is the run-time of the depth check.
\begin{equation}
\label{eqn:runtime}
\begin{aligned}
t_{image} = t_{pre} + obj * ( t_{net} * PC + PE * \\ ( t_{ran} * RI + DC * ( t_{icp} * II + t_{depth} )))
\end{aligned}
\end{equation}
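Since Eq.~\ref{eqn:runtime} is linear in the per-component times, the fit reduces to an ordinary least-squares problem, as in the sketch below (the layout of the measurement tuples is an assumption).
\begin{verbatim}
# Fit (t_pre, t_net, t_ran, t_depth, t_icp) of the run-time model by
# linear least squares from the grid-search timings.
import numpy as np

def fit_runtime_model(measurements):
    """measurements: list of (obj, PC, PE, RI, DC, II, t_image)."""
    A, b = [], []
    for obj, PC, PE, RI, DC, II, t_image in measurements:
        A.append([1.0,                  # t_pre
                  obj * PC,             # t_net
                  obj * PE * RI,        # t_ran
                  obj * PE * DC,        # t_depth
                  obj * PE * DC * II])  # t_icp
        b.append(t_image)
    coeffs, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return dict(zip(["t_pre", "t_net", "t_ran", "t_depth", "t_icp"], coeffs))
\end{verbatim}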
\subsection{Evaluation Metrics}
The success criteria of a pose estimation must first be defined before the performance can be optimized.
A simple score is the distance between the ground truth pose and the pose estimate. This is used in the Tejani \cite{tejani2014latent} and Doumanoglou \cite{doumanoglou2016recovering} datasets, where poses with less than 50mm error are declared correct. This metric also allows a maximum error of fifteen degrees between the Z-axes to include angular errors while accommodating symmetric objects.
Two central problems with this score are that errors further away from the object center are penalized less severely, and only angular errors in the Z-axis are included.
For the LINEMOD \cite{hinterstoisser2012model} and OCCLUSION \cite{brachmann2014learning} datasets, the average distance (ADD) metric is used. First, the object model is converted to a point-cloud, which is projected into the scene both with the ground truth pose and with the found pose. The ADD metric is then computed as the average distance between corresponding points in the two point-clouds. A pose estimate is successful if the average distance is smaller than ten percent of the object model's diagonal. To accommodate symmetric objects, the ADD/I score can be used, where the distance is instead computed to the nearest point in the comparison cloud.
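The two scores can be computed as in the sketch below, a straightforward NumPy/SciPy rendering of the standard definitions; poses are assumed to be given as $4\times4$ matrices.
\begin{verbatim}
# Sketch of the ADD and ADD/I (symmetric) metrics described above.
import numpy as np
from scipy.spatial import cKDTree

def transform(points, pose):                # points: (N, 3), pose: 4x4
    return points @ pose[:3, :3].T + pose[:3, 3]

def add_metric(points, pose_gt, pose_est):
    return np.linalg.norm(transform(points, pose_gt)
                          - transform(points, pose_est), axis=1).mean()

def add_i_metric(points, pose_gt, pose_est):
    gt, est = transform(points, pose_gt), transform(points, pose_est)
    dists, _ = cKDTree(est).query(gt, k=1)  # nearest neighbour per point
    return dists.mean()

def pose_is_correct(points, diag, pose_gt, pose_est, symmetric=False):
    metric = add_i_metric if symmetric else add_metric
    return metric(points, pose_gt, pose_est) < 0.1 * diag
\end{verbatim}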
As the ground truth labels in the OCCLUSION dataset are not very precise, this metric fits the dataset well, and since the test datasets use this score for evaluation, it would seem obvious to use it for the optimization as well.
However, as the parameter optimization is performed with synthetic data, the ground truth is exact, and a more discriminating metric can be used. Therefore, the metric established from the BOP test is used for the optimization.
The metric consists of three different scores, Visible Surface Discrepancy (VSD) \cite{hodan2018bop,hodavn2016evaluation}, Maximum Symmetry-Aware Surface Distance (MSSD) \cite{drost2017introducing}, and Maximum Symmetry-Aware Projection Distance (MSPD) \cite{hodan2020bop}. To explain the metric, a brief, general description of each score is given.
The object is first projected into the scene according to the ground truth pose and the pose estimate. In VSD, the depth distance is calculated for the visible part of the model. In MSSD, the maximum distance over the model points between the two poses is calculated, taking the object's symmetries into account. The last score, MSPD, is the maximum distance in pixels of the projected object and is therefore well suited for methods without depth data.
Multiple thresholds are used instead of a single cut-off value, and the multiple scores are combined into an average.
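As an example, the MSSD score can be sketched as below. The sketch follows the symmetry-aware maximum-distance idea described above; the official BOP toolkit implementation differs in details such as thresholding and averaging.
\begin{verbatim}
# Rough sketch of MSSD: maximum per-vertex distance between the estimated
# and ground-truth poses, minimized over the object's symmetry transforms.
import numpy as np

def mssd(points, pose_gt, pose_est, symmetries):
    """symmetries: list of 4x4 transforms, the identity included."""
    est = points @ pose_est[:3, :3].T + pose_est[:3, 3]
    errors = []
    for sym in symmetries:
        sym_pts = points @ sym[:3, :3].T + sym[:3, 3]
        gt = sym_pts @ pose_gt[:3, :3].T + pose_gt[:3, 3]
        errors.append(np.linalg.norm(est - gt, axis=1).max())
    return min(errors)
\end{verbatim}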
\section{EXPERIMENTS}
\label{evalution}
To show the performance of the developed method, tests are performed on the LINEMOD dataset \cite{hinterstoisser2012model} and the challenging OCCLUSION \cite{brachmann2014learning} dataset. The parameters are first optimized, and the optimized method is then tested on the real data and compared with the previous approach and current state-of-the-art methods.
The optimization is performed with the synthetic BlenderProc dataset \cite{hodan2020bop,denninger2019blenderproc}. For the network training, 10000 synthetic images are used, and for the parameter optimization, nine previously unseen synthetic images are used. All experiments are performed on a PC environment (Intel i9-9820X 3.30GHz CPU and NVIDIA GeForce RTX 2080 GPU).
\subsection{Network and Domain Randomization Optimization}
For each object, a network is trained and its domain randomization is optimized. The training runs for sixty epochs with parameters according to the previous method \cite{hagelskjaer2020bridging}. The resulting noise levels for three objects are shown in Fig.~\ref{fig:noise_opt_graph}. The objects are Ape, Can, and Eggbox, as they respectively represent small, large, and symmetric objects. It is seen that the network tolerates a much smaller noise increase for XYZ and rotation. The noise levels appear similar for all objects, although the symmetric object Eggbox has a very low XYZ level.
\subsection{Continuous Optimization}
The continuous optimization is performed using Bayesian Optimization. The discrete parameters are fixed to $PC=32$, $PE=6$, $RI=500$, $II=10$, and $DC=2$. By setting $PE=6$ and $DC=2$, twelve pose estimations are checked for each object. This ensures that all the continuous parameters are used for the pose estimation without making the run-time infeasible. The optimization is performed both with and without domain randomization.
Additionally, an optimization is made using the ADD/I metric instead of the BOP.
The algorithm's recall resulting from the tested parameters spans from 40~\% to 93~\%. The correct parameters thus have a strong influence on the performance. The parameters with and without domain randomization, and the heuristically found parameters, are shown in Tab.~\ref{tab:found_con}.
Compared with the heuristic parameters, a notable difference is seen with the much lower voting threshold, $vt$, giving more matches to RANSAC.
Compared with the optimization without domain randomization, the C2F-ICP and depth check parameters are the most significant difference. Here a much more refined search is performed compared with using domain randomization. The same tendency is seen with the optimization using the ADD/I metric.
\begin{figure}[t]
\vspace{1.5mm}
\centering
\includegraphics[trim=0 0 0 0, clip, width=0.95\linewidth]{gfx/bar_plot_noise.pdf}
\caption{Number of noise level increases for each noise type shown for three different types of objects. }
\label{fig:noise_opt_graph}
\end{figure}
\begin{table}[tb]
\begin{center}
\caption{The found values from the continuous parameter search compared with the heuristic parameters.}
\label{tab:found_con}
\begin{tabular}{|c|l|l|l|l|l|l|l|}
\hline
Param & $vt$ & $rd$ & $id$ & $is$ & $bd$ & $ad$ & $sr$ \\ \hline
Heur. & 0.95 & 10.0 & 2.5 & 2.0 & 10 & 5 & 72 \\ \hline
No DR Opt. & 0.27 & 17.03 & 1.24 & 2.25 & 21 & 1 & 108 \\ \hline
Opt. ADD & 0.275 & 12.85 & 0.77 & 3.49 & 59 & 5 & 66 \\ \hline
Opt. & 0.174 & 19.88 & 4.85 & 1.24 & 86 & 12 & 108 \\ \hline
\end{tabular}
\end{center}
\vspace{-6mm}
\end{table}
\subsection{Discrete Optimization}
The discrete optimization is performed with the parameters found in the continuous optimization. A grid search is then performed using varying discrete parameters. The tested values are shown in Tab.~\ref{tab:tested_param}. As infeasible parameter settings, e.g., $PE=10$ when $PC=8$, cannot be tested, the final number of tested parameter sets is 576.
As the optimization is performed with 15 objects, the run-time will not match the OCCLUSION dataset, which has only eight objects.
To compute the expected run-time, the 576 iterations from the discrete optimization are used to solve Eq.~\ref{eqn:runtime} using least-square optimization. The resulting time contribution by increasing each discrete parameter is shown in Tab.~\ref{tab:part_time}.
\begin{table}[htb]
\centering
\caption{Values tested in the discrete grid search.}
\label{tab:tested_param}
\begin{tabular}{|l|l|l|l|l|}
\hline
$PC$ & $PE$ & $RI$ & $DC$ & $II$ \\ \hline
8,16,32 & 2,4,6,8,10 & 500,1500,2500 & 1,2,5,10 & 10,30,50 \\ \hline
\end{tabular}
\vspace{-4mm}
\end{table}
\begin{table}[htb]
\centering
\caption{Time consumption of each part of the model.}
\label{tab:part_time}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Part & $t_{pre}$ & $t_{net}$ & $t_{ran}$ & $t_{depth}$ & $t_{icp}$ \\
\hline
Time (s) & 8.57e-01 & 7.99e-03 & 2.70e-04 & 9.12e-03 & 1.67e-04 \\ \hline
\end{tabular}
\vspace{-2mm}
\end{table}
\begin{figure}[htb]
\vspace{1.5mm}
\centering
\includegraphics[trim=0 0 0 35, clip, width=0.95\linewidth]{gfx/Figure_1_1.png}
\caption{ The output of the discrete optimization shown as blue crosses. The red line shows the best-performing parameter sets in relation to run-time. Parameter values are visualized for some of the parameters which have the best trade-off between run-time and performance. }
\label{fig:disc_opt_graph}
\vspace{-1mm}
\end{figure}
The results are sorted according to run-time and performance and are shown in Fig.~\ref{fig:disc_opt_graph}. Of the 576 tested parameter sets, 14 give the best trade-off between run-time and performance. The least influential parameter is found to be the number of C2F-ICP iterations, which always remains at 10.
While the depth check improves the performance, it also significantly increases the run-time. It is, therefore, only increased in value at the maximum run-time.
\begin{table}[tb]
\begin{center}
\caption{The resulting run-time and recall for the heuristic approach and with optimized parameters, with different run-times according to Fig.~\ref{fig:disc_opt_graph}. Results are shown with the original MASK R-CNN detector and the CosyPose detector. }
\label{tab:final_parameters}
\small
\begin{tabular}{|l|l|l|}
\hline
Method & Run-time (s) & Recall ($\%$) \\ \hline
Previous method & 3.5 & 77.2 \\ \hline
Heuristic $<$4 sec (prev. net) & 3.4 & 77.58 \\ \hline
Heuristic $<$4 sec & 3.4 & 77.63 \\ \hline
No DR Opt. $<$4 sec & 3.7 & 77.40 \\ \hline
ADD/I Opt. $<$4 sec & 3.6 & 77.49 \\ \hline
Opt. $<$4 sec, MASK R-CNN & 3.6 & 78.91 \\ \hline
Opt. Max, MASK R-CNN & 26.3 & 80.11 \\ \hline
Opt. $<$4 sec, CosyPose & 3.6 & 80.45 \\ \hline
Opt. Max, CosyPose & 26.2 & 82.03 \\ \hline
\end{tabular}
\end{center}
\vspace{-6mm}
\end{table}
\begin{table*}[htb]
\vspace{1.5mm}
\centering
\caption{ Results for the LINEMOD dataset \cite{hinterstoisser2012model} in \% recall with the ADD/I score. The competing methods are DPOD \cite{zakharov2019dpod}, SSD-6D \cite{kehl2017ssd} (obtained from \cite{wang2019densefusion}), DenseFusion \cite{wang2019densefusion}, PointVoteNet \cite{hagelskjaer2019pointvotenet}, PVN3D \cite{he2020pvn3d} and the previous method \cite{hagelskjaer2020bridging}. Rotation invariant objects are marked with an *. }
\label{tab:linemod}
\small
\begin{tabular}{|l|c|c|c|c||c|c|c|c|c|}
\hline
\begin{tabular}{@{}c@{}} Training \\ Data \end{tabular} & \multicolumn{4}{c||}{\textbf{Real}} & \multicolumn{5}{c|}{\textbf{Synthetic}} \\
\hline
& DenseFusion & DPOD & PointVoteNet & PVN3D & DPOD & SSD6D & Prev. & \multicolumn{2}{c|}{ParaPose} \\
& & & & & & & & $<4s$ & MAX \\
\hline
Ape & 92 & 87.7 & 80.7 & 97.3 & 55.2 & 65 & 97.7 & 97.9 & 97.9 \\
Bench v. & 93 & 98.5 & 100 & 99.7 & 72.7 & 80 & 99.8 & 100 & 100 \\
Camera & 94 & 96.1 & 100 & 99.6 & 34.8 & 78 & 98.3 & 97.9 & 97.9 \\
Can & 93 & 99.7 & 99.7 & 99.5 & 83.6 & 86 & 98.8 & 99.9 & 99.9 \\
Cat & 97 & 94.7 & 99.8 & 99.8 & 65.1 & 70 & 99.9 & 99.9 & 99.8 \\
Driller & 87 & 98.8 & 99.9 & 99.3 & 73.3 & 73 & 99.2 & 100 & 100 \\
Duck & 92 & 86.3 & 97.9 & 98.2 & 50.0 & 66 & 97.8 & 98.3 & 98.3 \\
Eggbox* & 100 & 99.9 & 99.9 & 99.8 & 89.1 & 100 & 97.7 & 98.5 & 98.5 \\
Glue* & 100 & 96.8 & 84.4 & 100 & 84.4 & 100 & 98.9 & 99.9 & 100 \\
Hole p. & 92 & 86.9 & 92.8 & 99.9 & 35.4 & 49 & 94.1 & 98.8 & 99.5 \\
Iron & 97 & 100 & 100 & 99.7 & 98.8 & 78 & 100 & 100 & 100 \\
Lamp & 95 & 96.8 & 100 & 99.8 & 74.3 & 73 & 92.8 & 100 & 99.9 \\
Phone & 93 & 94.7 & 96.2 & 99.5 & 47.0 & 79 & 99.1 & 99.3 & 99.4 \\
\hline
Average & 94.3 & 95.15 & 96.3 & \textbf{99.4} & 66.4 & 79 & 98.0 & 99.3 & 99.3 \\
\hline
\end{tabular}
\vspace{-4mm}
\end{table*}
\subsection{Performance on real data}
Several experiments are performed to test the effectiveness of the synthetic parameter optimization method. All tests are performed on the OCCLUSION \cite{brachmann2014learning} dataset.
The tests are performed with the parameters found in the parameter optimization. For the discrete parameters, two parameter sets are selected, one for a run-time under four seconds and one with the maximum run-time. The parameters are ($PC=16,PE=4,RI=1500,II=10,DC=1$) and ($PC=32,PE=8,RI=2500,II=10,DC=10$), respectively.
We also test the previously trained network with our modified method, without the parameter optimization but with the same discrete parameters. Additionally, we test the new network with heuristic parameters and with parameter optimization performed without domain randomization.
For the optimized parameters, tests are performed both with the MASK R-CNN detector and the CosyPose detector.
The resulting recall and run-times of the tests are shown in Tab.~\ref{tab:final_parameters}.
The results show that the network using optimized domain randomization matches, but does not significantly exceed, the performance of the manually tuned model. The optimization without domain randomization shows poor generalization and performs worse than the heuristically tuned parameters.
However, the significance of the parameter optimization with domain randomization is shown. The same method increases from a recall of 77.63~\% to 78.91~\% by tuning the parameters.
The optimization using the ADD/I metric shows poor results, verifying the importance of the BOP metric.
The maximum run-time parameters are also shown to increase the performance, going to a recall of 80.11~\%. Additionally, using the CosyPose detector gives superior results, increasing the recall to 82.0~\%.
\subsection{Comparison with state-of-the-art methods}
\begin{table}[tb]
\centering
\caption{ Results on the OCCLUSION dataset \cite{brachmann2014learning} in \% recall with the ADD/I score. The compared methods are PointVoteNet \cite{hagelskjaer2019pointvotenet}, PVN3D \cite{he2020pvn3d}, PoseCNN \cite{xiang2017posecnn}, and the previous method \cite{hagelskjaer2020bridging}. The score for \cite{he2020pvn3d} is from \cite{hesupplementary}. Rotation invariant objects are marked with an *.}
\label{tab:occlusion}
\small
\begin{tabular}{|l|c|c|c||c|c|c|}
\hline
\begin{tabular}{@{}c@{}} Training \\ Data \end{tabular} & \multicolumn{3}{c||}{\textbf{Real}} & \multicolumn{3}{c|}{\textbf{Synthetic}} \\
\hline
& {\footnotesize \cite{hagelskjaer2019pointvotenet}} & {\footnotesize \cite{he2020pvn3d}} & {\footnotesize \cite{xiang2017posecnn}} & {\footnotesize Prev.} & \multicolumn{2}{c|}{\footnotesize ParaPose} \\
& & & & & {\footnotesize $<$ 4 s} & {\footnotesize MAX} \\
\hline
Ape & 70.0 & 33.9 & 76.2 & 66.1 & 66.6 & 68.5 \\
Can & 95.5 & 88.6 & 87.4 & 91.5 & 94.3 & 95.7 \\
Cat & 60.8 & 39.1 & 52.2 & 60.7 & 66.3 & 68.8 \\
Driller & 87.9 & 78.4 & 90.3 & 92.8 & 92.8 & 94.6 \\
Duck & 70.7 & 41.9 & 77.7 & 71.2 & 73.7 & 74.8 \\
Eggbox* & 58.7 & 80.9 & 72.2 & 69.7 & 71.3 & 73.4 \\
Glue* & 66.9 & 68.1 & 76.7 & 71.5 & 81.1 & 81.8 \\
Hole p. & 90.6 & 74.7 & 91.4 & 91.5 & 96.4 & 97.3 \\
\hline
Average & 75.1 & 63.2 & 78.0 & 77.2 & \textbf{80.5} & \textbf{82.0} \\
\hline
\end{tabular}
\vspace{-6mm}
\end{table}
The method is compared with current state-of-the-art methods to show the effectiveness of the developed parameter optimization. Only methods using RGB-D data are included as these have the highest performance.
We showcase the results using the CosyPose \cite{labbe2020cosypose} detector with the optimized parameters for run-time under four seconds and maximum run-time. Results are shown for the LINEMOD and the OCCLUSION datasets. The performance on LINEMOD is shown in Tab.~\ref{tab:linemod}. The results show that our method outperforms all methods trained on synthetic data and most methods using real training data, while only PVN3D \cite{he2020pvn3d} obtains better results.
The results for the OCCLUSION dataset are shown in Tab.~\ref{tab:occlusion}. Apart from the previous method \cite{hagelskjaer2020bridging}, all methods have been trained on real data. Our method outperforms all previous methods and obtains a recall of 82.0~\%. This is the new state-of-the-art result on this dataset, an improvement of four percentage points, and is obtained by using only synthetic data. Additionally, it should be noted that the method PVN3D \cite{he2020pvn3d}, which obtained the highest score on the LINEMOD \cite{hinterstoisser2012model} dataset, only obtains a 63.2~\% recall on this more challenging dataset.
\section{CONCLUSION}
\label{conclusion}
This paper presents a method for setting up pose estimation using only synthetic data. The network is trained and the parameters are optimized using synthetic data. The method also provides the best trade-off between performance and a given run-time budget. While the run-time is estimated in simulation, it generally matches the real world and allows for a more straightforward set-up procedure.
By using the optimized parameters, the pose estimation system outperforms the original method using manually tuned parameters. The developed method is compared with current state-of-the-art methods, which also use real training data. On the challenging OCCLUSION dataset, our method obtains a recall of 82.0~\%, which is the new state-of-the-art.
In future work, this method could be expanded to process entire scene point-clouds. The network can then be pre-trained on large datasets to create strong weights which increase generalizability. The network can also be trained for multiple objects, and the noise model can be trained for the entire scene.
{\small
\bibliographystyle{ieee_fullname}
\section{Missing proofs from Section \ref{sec:hardness}}
\label{app:hardness}
\begin{claim}
Suppose $X_1, X_2, \ldots, X_n$ is an optimal ordering of the random variables. In an optimal stopping rule for this ordering, let $S$ be the set of random variables that are accepted only when their realization is $1$, and $T$ be the random variables that are accepted whenever their realization is positive. Then, $S$ precedes $T$. That is, $i < j$ whenever $X_i \in S$ and $X_j \in T$.
\end{claim}
\begin{proof} Suppose there is an optimal ordering for which $S$ does not precede $T$. In such a case there must be a pair of {\em adjacent} random variables $X_i, X_j$ in the ordering such that $X_i \in S$, $X_j \in T$, and $X_j$ appears before $X_i$. Let $L$ be the sequence of random variables that precede $X_j$ and $R$ be the sequence of random variables that succeed $X_i$.
We prove the claim by a standard interchange argument in which $X_i$ and $X_j$ are swapped: not surprisingly, the contributions from $L$ and from $R$ will be the same in both sequences, so their difference will assume a simple form.
Let $\sigma_{ij}$ be the ordering where $X_i$ precedes $X_j$ and let $\sigma_{ji}$ be the interchanged ordering where $X_j$ precedes $X_i$. For their fixed thresholds, let $f(L)$ be the probability that none of the random variables in $L$ is accepted and let $E(L)$ be the expected reward given that a random variable in $L$ is accepted (similarly define for $R$). Define $V_{\sigma_{ij}}$ and $V_{\sigma_{ji}}$ to be the expected rewards under orderings $\sigma_{ij}$ and $\sigma_{ji}$ respectively, and under these fixed thresholds.
Then, it is easy to verify that
$$V_{\sigma_{ij}} \; = \; E(L)(1-f(L)) + f(L) [ q_i + (1-q_i) (p_j m_j + q_j) + (1-q_i)(1-q_j-p_j) E(R)(1-f(R))]$$
and
$$V_{\sigma_{ji}} \; = \; E(L)(1-f(L)) + f(L) [ p_j m_j + q_j + (1-q_j-p_j) q_i + (1-q_j-p_j) (1-q_i) E(R)(1-f(R))].$$
Simplifying, we have:
$$V_{\sigma_{ij}} - V_{\sigma_{ji}} \;\; = \;\; f(L) p_j q_i (1 - m_j) \; > 0.$$
Thus, swapping $i$ and $j$ while retaining their acceptance thresholds improves the original ordering, which, therefore, cannot be optimal.
\end{proof}
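The closed form above is easy to check numerically by evaluating the two expressions directly for random parameter values; the short script below does this (purely illustrative).
\begin{verbatim}
# Numerical check of the interchange identity: V_ij - V_ji equals
# f(L) * p_j * q_i * (1 - m_j) for arbitrary admissible parameters.
import random

random.seed(1)
for _ in range(1000):
    qi = random.uniform(0.0, 1.0)
    qj, pj = random.uniform(0.0, 0.5), random.uniform(0.0, 0.5)
    mj = random.uniform(0.0, 1.0)
    fL, EL = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    fR, ER = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    tail = (1.0 - fR) * ER
    Vij = EL * (1 - fL) + fL * (qi + (1 - qi) * (pj * mj + qj)
                                + (1 - qi) * (1 - qj - pj) * tail)
    Vji = EL * (1 - fL) + fL * (pj * mj + qj + (1 - qj - pj) * qi
                                + (1 - qj - pj) * (1 - qi) * tail)
    assert abs((Vij - Vji) - fL * pj * qi * (1 - mj)) < 1e-12
\end{verbatim}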
\begin{claim}
Suppose $(S,T)=(S^\sigma, T^\sigma)$ for some ordering $\sigma$. Then, the ordering $\sigma$ is optimal if and only if (i) $S$ precedes $T$; (ii) the random variables in $S$ are arranged arbitrarily; and (iii) the random variables in $T$ appear in weakly decreasing order of $E_i$.
In particular, if $E_1 = E_2 = \ldots = E_n$, then the random variables in $T$ can be arranged arbitrarily as well.
\end{claim}
\noindent
{\bf Proof.}
We already know that any ordering in which $S$ does not precede $T$ is sub-optimal, verifying (i). To see (ii), note that any random variable in $S$ that is accepted results in a value of 1, and that the probability of accepting {\em some} random variable in $S$ is $1 - \Pi_{i: X_i \in S} (1-q_i)$, regardless of how these random variables are ordered. We can verify (iii) using a simple interchange argument as well.
Suppose $X_i, X_j \in T$. As before, we let $\sigma_{ij}$ be an ordering in which $X_i$ appears immediately before $X_j$, with
$L$ being the sequence of random variables that precede $X_i$ and $R$ being the sequence of random variables that succeed $X_j$. $\sigma_{ji}$ is the ordering with $X_i$ and $X_j$ interchanged.
Let $f(L)$ be the probability that none of the random variables in $L$ is accepted and let $E(L)$ be the expected reward given that a random variable in $L$ is accepted (similarly define for $R$). Define $V_{\sigma_{ij}}$ and $V_{\sigma_{ji}}$ to be the expected rewards under orderings $\sigma_{ij}$ and $\sigma_{ji}$ respectively, and under these fixed thresholds.
Then, it is easy to verify that
$$V_{\sigma_{ij}} \; = \; E(L)(1-f(L)) + f(L) [ p_i m_i + q_i + (1-q_i-p_i) (p_j m_j + q_j) + (1-q_i-p_i)(1-q_j-p_j) E(R)(1-f(R))]$$
and
$$V_{\sigma_{ji}} \; = \; E(L)(1-f(L)) + f(L) [ p_j m_j + q_j + (1-q_j-p_j) (p_i m_i + q_i) + (1-q_j-p_j) (1-q_i-p_i) E(R)(1-f(R))].$$
Simplifying, we have:
\begin{eqnarray*}
V_{\sigma_{ij}} - V_{\sigma_{ji}} & = & f(L) [ (p_j + q_j) (p_i m_i + q_i) - (p_i+q_i) (p_j m_j + q_j) ] \\
& = & f(L)\,(p_i+q_i)(p_j+q_j)\, (E_i - E_j).
\end{eqnarray*}
Thus, it is optimal for $X_i$ to appear before $X_j$ in $T$ if $E_i > E_j$.
To see that any ordering satisfying properties $(i)-(iii)$ must be optimal note that the value of any ordering satisfying all of these properties is identical, and so should be optimal (because $(S,T)$ is assumed to be an optimal partition).
\qed
\section{Missing proofs for section \ref{sec:prophet}}
\label{app:prophet}
We prove the following statement to complete the proof of Theorem \ref{th:prophet} in Section \ref{sec:prophet}.
\begin{lemma}
$$ \max(T1, T2, T3) \ge 0.8 \text{MAX}$$
where $\text{MAX}$ is as defined in \eqref{eq:max} and $T1, T2, T3$ are as defined in \eqref{eq:T123}.
\end{lemma}
Using some algebraic manipulations, we can equivalently express MAX defined in \eqref{eq:max} as:
\begin{eqnarray*}
MAX& :=& p^*b^* + (1-p^*)p_w b_w + (1-p^*)(1-p_w)a^*\\
&=& p^*b^*+p_w b_w -p^*p_w b_w+p^*p_wb^*-p^*p_wb^* + (1-p^*)(1-p_w)a^*\\
&=& p^*p_w(b^*-b_w) + (1-p_w)p^*b^*+p_w b_w + (1-p^*)(1-p_w)a^*\\
&=& f(p^*, p_w, b^*, b_w) + g(p^*, p_w, b^*, b_w) + h(p^*, p_w, a^*)
\end{eqnarray*}
where
$$f(p^*, p_w, b^*, b_w):= p^*p_w(b^*-b_w)$$
$$g(p^*, p_w, b^*, b_w) := (1-p_w)p^*b^*+p_w b_w$$
$$h(p^*, p_w, a^*) := (1-p^*)(1-p_w)a^*$$
Now, notice that $T1=g + h$ and $T2=f + g$. The idea of the proof is to bound the relative fraction of $f$ or $h$ to the offline expectation. We assume w.l.o.g. that $b_w=1$. We can do this by multiplying every random variable by some appropriate constant $\alpha$ such that $b_w = 1$. This scales the prophet's reward by $\alpha$ since $E[\max\{\alpha X, \alpha Y\}] = \alpha E[\max\{X, Y\}]$. Furthermore, $V(\alpha X, \alpha Y) = E[\max\{\alpha X, V(\alpha Y)\}]=\alpha E[\max\{X, V(Y)\}] = \alpha V(X, Y)$. Thus, the optimal reward also scales by $\alpha$ and so the competitive ratio remains the same.
The following three claims together prove the lemma statement.
\begin{claim}\label{Policy_1_best}
If $T1\ge \max(T2, T3)$, then $T1\ge 0.8 MAX$
\end{claim}
\begin{proof}
We can assume that $\mu^* = p^*b^*+(1-p^*)a^* \leq 1$, since otherwise either $T2$ or $T3$ would be greater than $T1$. Now since $T1\geq T2$, we have $f\leq h$, and combined with the previous inequality, we get
\begin{eqnarray*}
p^*p_w(b^*-1) &\leq & (1-p^*)(1-p_w)a^*\\
&\leq& (1-p_w)(1-p^*b^*)
\end{eqnarray*}
Rearranging terms, we get $p_w\leq \frac{1-p^*b^*}{1-p^*}$. Next, we will prove the following:
$$f(p^*, p_w, b^*) < \frac{1}{3}g(p^*, p_w, b^*)$$
The derivation is as follows:
\begin{eqnarray*}
\frac{g(p^*, p_w, b^*)}{f(p^*, p_w, b^*)}&=& \frac{(1-p_w)p^*b^*+p_w}{p^*p_w(b^*-1)}\\
&=&\frac{(\frac{1}{p_w}-1)p^*b^*+1}{p^*(b^*-1)}\\
&\geq & \frac{(\frac{p^*(b^*-1)}{1-p^*b^*})p^*b^*+1}{p^*(b^*-1)}\\
&=&\frac{(p^*)^2(b^*-1)b^* + 1 -p^*b^*}{p^*(b^*-1)(1-p^*b^*)}\\
&=&\frac{p^*b^*}{1-p^*b^*} + \frac{1}{p^*(b^*-1)}\\
&=&\frac{t}{1-t} + \frac{1}{t-p^*}
\end{eqnarray*}
where we let $t=p^*b^*$. For fixed $p^*$, this term is minimized at $t = \frac{1}{2}(1+p^*)$, where it equals $\frac{3+p^*}{1-p^*}$. This is minimized as $p^*\to 0$, approaching $3$. Thus, $\frac{f(p^*,p_w, b^*)}{g(p^*, p_w, b^*)} \le \frac{1}{3}$.\\
When $T1$ is the maximum, it gives a competitive ratio of $\frac{g+h}{f+g+h}\geq \frac{g+f}{g+2f}\geq \frac{(4/3)g}{(5/3)g}=4/5$, where the first inequality uses $h \ge f$ and the second uses $f \le \frac{1}{3}g$.
\end{proof}
\begin{claim}\label{Policy_2_best}
If $T2\ge \max(T1, T3)$, then $T2\ge 0.8 MAX$
\end{claim}
\begin{proof}
We will prove that
$$h(p^*, p_w, a^*)\leq \frac{1}{3}g(p^*, p_w, b^*)$$
Notice that since $T2\geq T3$, we must have $a^*\leq p_w$ and $b^*\geq 1$. Furthermore, $h\leq f$ so combining these inequalities, we have the relation $(1-p^*)(1-p_w)a^*\leq (1-p^*)(1-p_w)p_w\leq p_wp^*(b^*-1)\Rightarrow b^*\geq \frac{(1-p^*)(1-p_w)}{p^*}+1$. Using these two inequalities,
\begin{eqnarray*}
\frac{g(p^*, p_w, b^*)}{h(p^*, p_w, a^*)} &=& \frac{(1-p_w)p^*b^*+p_w}{(1-p^*)(1-p_w)a^*}\\
&\geq& \frac{(1-p_w)^2(1-p^*)+(1-p_w)p^*+p_w}{(1-p^*)(1-p_w)p_w}\\
&=& \frac{1-(1-p^*)(1-p_w)p^*}{(1-p^*)(1-p_w)p_w}\\
&=&\frac{1}{(1-p^*)(1-p_w)p_w} - \frac{p^*}{p_w}
\end{eqnarray*}
Differentiating this with respect to $p^*$, we see that the minimum is when $p^* = 1-p_w$. Putting this back in, we have to minimize the term
$$\frac{1}{(1-p_w)^2p_w} -\frac{1-p_w}{p_w}$$
The minimum is $3$, attained at $p_w = 0$, proving that $h(p^*, p_w, a)\leq \frac{1}{3}g(p^*, p_w, b^*)$.\\
When $T2$ is the maximum, it gives a competitive ratio of $\frac{f+g}{f+g+h} \geq \frac{g+h}{g+2h}\geq\frac{(4/3)g}{(5/3)g}=4/5$, where the first inequality uses $f\ge h$ and the second uses $h\le \frac{1}{3}g$.
\end{proof}
\begin{claim}\label{Policy_3_best}
If $T3 \ge \max(T1, T2) $, then $T3 \ge 0.8 MAX$.
\end{claim}
\begin{proof}
$T3$ has an expected reward of $p^*b^*+(1-p^*)a^*$. Since $T3\geq T2$, we have $a^*\geq p_w$, and since $T3\geq T1$, we have $p^*b^* + (1-p^*)a^* \geq p_w + (1-p_w)p^*b^* + (1-p_w)(1-p^*)a^*$. Now suppose we swap the variables $a^*$ and $p_w$ in the expected reward and in the constraints. Then the ``expected reward'' is
$$p^*b^* + (1-p^*)p_w$$
and we have the constraints
$$p_w \leq a^*$$
$$p^*b^* + (1-p^*)p_w\geq a^* + (1-a^*)p^*b^*+(1-a^*)(1-p^*)p_w$$
The second constraint can be rearranged to show that $b^*\geq \frac{(1-p^*)(1-p_w)}{p^*} + 1$. Thus, this case can be reduced to the case of Claim \ref{Policy_2_best}, so the rest of the proof is the same.
\end{proof}
\section{Analysis of Algorithm \ref{alg:fptas}: Proof of Theorem \ref{th:FPTAS-same}}
We show that Algorithm \ref{alg:fptas} is an FPTAS for the problem in \eqref{eq:paritionProblem} of finding an optimal ordered partition. Algorithm \ref{alg:fptas} discretizes the interval $[0,1]$ using a multiplicative grid with parameter $1-\frac{\epsilon}{2n}$, so that it needs to search over only $poly(n,1/\epsilon)$ partitions. Lemma \ref{Partition-Subset} allows us to restrict to these grid points.
Then, in Lemma \ref{ALG-OPT approx} and Lemma \ref{lem:runtime}, respectively, we show that Algorithm \ref{alg:fptas} achieves the required approximation and run-time, to complete the proof of Theorem \ref{th:FPTAS-same}.
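For intuition, the bucketing idea behind the TRIM step can be sketched as follows: the range of tail values $V(T)$ above $\frac{\epsilon}{2n}\text{MAX}$ is covered by geometrically shrinking buckets with ratio $\rho = 1-\frac{\epsilon}{2n}$, and from each bucket only the partition with the largest $V(S)$ is kept. The Python sketch below is a schematic reconstruction from the properties used in the lemmas that follow, not the exact pseudo-code of the TRIM procedure.
\begin{verbatim}
# Schematic sketch of the TRIM step (reconstructed from the properties
# used in the proofs; the exact TRIM pseudo-code may differ).
import math

def trim(partitions, v_s, v_t, max_val, MAX, eps, n):
    """partitions: list of (S, T); v_s, v_t return V(S) and V(T)."""
    rho = 1.0 - eps / (2.0 * n)
    floor = eps / (2.0 * n) * MAX
    buckets = {}              # bucket index -> best partition kept so far
    for part in partitions:
        vt = v_t(part)
        if vt < floor:
            j = 0             # bucket B_0: tails below the threshold
        else:                 # geometric buckets with ratio rho
            j = max(1, 1 + int(math.log(vt / max_val) / math.log(rho)))
        best = buckets.get(j)
        if best is None or v_s(part) > v_s(best):
            buckets[j] = part  # keep the partition with the largest V(S)
    return list(buckets.values())
\end{verbatim}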
\begin{lemma}\label{Partition-Subset}
Let $\mathcal{L}' \subseteq \mathcal{L}$ be the set of all ordered partitions $(S',T')$ in $\mathcal{L}$ with the additional restriction that the last variable $X$ in $T'$ satisfies $V(X) = E[\max(X,0)] \ge \frac{\epsilon \text{OPT}}{2n}$, for $0 \le \epsilon \le 1$. Let $\text{OPT}'=\max_{(S',T')\in \mathcal{L}'} V(S',T')$. Then,
$$\text{OPT}'\ge \left(1-\frac{\epsilon}{2}\right) \text{OPT}.$$
\end{lemma}
\begin{proof}
Let $\delta := \frac{\epsilon \text{OPT}}{2n}$. W.l.o.g., assume that an optimal ordered partition $(S,T)$ orders the variables as $(X_1,\ldots, X_n)$, so that $\text{OPT}=V(X_1, \ldots, X_n)$. Suppose that there exists some index $i$ with $V(X_i)> \delta$, and let $1\le k\le n$ be the largest such index. Then,
\begin{eqnarray*}
\text{OPT}'\ge V(X_1, \ldots, X_k) & \ge & V(X_1,\ldots, X_n)-\sum_{i=k+1}^n V(X_i) \\
& \ge & \text{OPT} - (n-1)\delta\\
& \ge & (1-\frac{\epsilon}{2}) \text{OPT},
\end{eqnarray*}
where the second inequality followed from repeatedly applying Lemma $\ref{Adding to Tail}$.
Now if such a $k$ does not exist, then by Lemma \ref{Adding to Tail},
$$ \text{OPT} = V(X_1,\ldots, X_n) \le \sum_{i=1}^n V(X_i) \le n\delta \le \frac{\epsilon}{2} \text{OPT}$$
so that trivially, $ (1-\frac{\epsilon}{2})\text{OPT}\le 0 \le\text{OPT}'$.
\end{proof}
\begin{lemma}\label{ALG-OPT approx}
Let $\mathcal{L}^n$ be the set of ordered partitions returned by Algorithm \ref{alg:fptas} when run with parameters $\epsilon, \text{MAX}$ satisfying $\epsilon \in (0,1)$ and
$\text{MAX}\le \text{OPT}$, and let
$\text{ALG}:=\max_{(S,T)\in \mathcal{L}^n} V(S,T)$. Then,
$$\text{ALG} \geq (1-\frac{\epsilon}{2n})^n \text{OPT}', $$ where $\text{OPT}'$ is as defined in Lemma \ref{Partition-Subset}.
\end{lemma}
\begin{proof}
Let $(S,T)$ be any ordered partition in $ \mathcal{L}'$ where $\mathcal{L}' \subseteq \mathcal{L}$ is the restricted collection of ordered partitions defined in Lemma \ref{Partition-Subset} satisfying $V(X) = E[\max(X,0)] \ge \frac{\epsilon \text{OPT}}{2n}$ for the last variable $X$ in $T$ (note that $T\ne \phi$ for all $(S,T)\in \mathcal{L}$). We show that there exist $(S', T')\in \mathcal{L}^n$ such that $V(S', T') \ge (1-\frac{\epsilon}{2n})^n V(S,T)$.
We prove this by induction. Let $(S_k, T_k)$ be an ordered partition obtained on restricting $S, T$ to the first $k$ variables $X_1, \ldots, X_k$ considered by the algorithm (here variables are ordered so that $E_1 \le \cdots \le E_k$). Let $1\le \bar{k}\le n$ be such that $X_{\bar{k}}$ is the last variable in $T$ and $V(X_{\bar k}) \ge \frac{\epsilon \text{OPT}}{2n}$; such a $\bar{k}$ must exist since $(S,T)\in \mathcal{L}'$ and $T\ne \phi$.
We show that for all $k\ge \bar{k}$, at the end of the iteration $k$ of the algorithm, there exists $(S'_k, T'_k)\in \mathcal{L}^k$ such that
\begin{equation}
\label{eq:induction}
\begin{array}{rcl}
V(S'_k) & \ge & V(S_k),\\
V(T'_k) & \ge & \rho^k V(T_k), \\
V(T'_k) & \ge & \frac{\epsilon}{2n}\text{MAX},
\end{array}
\end{equation}
where $\rho=(1-\frac{\epsilon}{2n})$.
We prove the induction basis for $k=\bar{k}$. By definition of $\bar{k}$, $S_k=\{X_1, \ldots, X_{k-1}\}$ and $T_k=\{X_k\}$.
Since $(\{X_1,\ldots, X_{k-1}\}, \phi)\in \mathcal{L}^{k-1}$, in the beginning of iteration $k$ (before TRIM), the partition $(S''_k, T''_k) = (\{X_1, \ldots, X_{k-1}\}, \{X_k\})$ is added to $\mathcal{L}^k$. Since
\begin{center}
$V(T''_k) = V(X_k)\ge \frac{\epsilon}{2n}\text{OPT} \ge \frac{\epsilon}{2n}\text{MAX},$
\end{center} during TRIM this partition will fall in bucket $\mathcal{B}_j$ for some $j\ge 1$. By the trimming criteria, one partition $(S'_{k}, T'_k) $ from bucket $\mathcal{B}_j$ will remain in $\mathcal{L}^k$ satisfying $V(T'_k) \ge \rho V(T''_k) =\rho V(T_k)\ge \rho^k V(T_k)$,
$V(T'_k)\ge \rho^J max \ge \frac{\epsilon}{2n}\text{MAX}$,
and $V(S'_k)\ge V(S''_k) \ge V(S_k)$.
Therefore, $S'_k, T'_k$ satisfies the conditions stated in \eqref{eq:induction} for $k=\bar{k}$.
Now, for the induction step, assume \eqref{eq:induction} holds for some $\bar{k} \le k<n$. Then, in the beginning of iteration $k+1$ (before $\text{TRIM}$ is called), the algorithm will add two partitions $(\{X_{k+1}, S'_k\}, T'_k)$ and $(S'_k, \{X_{k+1}, T'_k\})$ to $\mathcal{L}^{k+1}$.
We claim that one of these two partitions satisfies the required induction statement for $k+1$, but with a better factor $\rho^k$ instead of the required $\rho^{k+1}$. This can be observed as follows.
Depending on whether $X_{k+1}$ appears in $T_{k+1}$ or $S_{k+1}$, we can consider two cases: either $T_{k+1}=T_k$ or $S_{k+1}=S_k$. In the first case (when $T_{k+1}=T_k$), we set $(S'_{k+1}, T'_{k+1})$ as the first partition $(\{X_{k+1}, S'_k\}, T'_k)$. By induction hypothesis $V(T'_{k+1})=V(T'_k) \ge \rho^k V(T_k) = \rho^k V(T_{k+1})$; also $V(T'_{k+1})=V(T'_k) \ge \frac{\epsilon}{2n} \text{MAX} $; and
\begin{eqnarray}
\label{eq:tmp0}
V(S'_{k+1}) = V(X_{k+1}, S'_k) & = & V(X_{k+1}, V( S'_k))\nonumber\\
& \ge & V(X_{k+1}, V(S_k)) \nonumber\\
& = & V(X_{k+1}, S_k)\nonumber\\
& = & V(S_{k+1})
\end{eqnarray}
where the second line follows from the induction hypothesis.
For the second case (when $S_{k+1}=S_k$), we use the second partition, and set $(S'_{k+1}, T'_{k+1})= (S'_k, \{X_{k+1}, T'_k\})$, so that by induction hypothesis, $V(S'_{k+1})= V(S'_k) \ge V(S_k) = V(S_{k+1})$, and
\begin{eqnarray}
\label{eq:tmp1}
V(T'_{k+1}) = V(X_{k+1}, T'_k) & = & V(X_{k+1}, V( T'_k))\nonumber\\
& \ge & V(X_{k+1}, \rho^{k} V(T_k)) \nonumber\\
& \ge & \rho^k V(X_{k+1}, T_k)\nonumber\\
& = & \rho^k V(T_{k+1})
\end{eqnarray}
where the second line follows from the induction hypothesis and the third line follows from Lemma $\ref{lemma-3}$. Also, $V(T'_{k+1}) = V(X_{k+1}, T'_k) \ge V(T'_k) \ge \frac{\epsilon}{2n} \text{MAX}$ by induction hypothesis.
However, one or both of these two partitions may be removed by the $\text{TRIM}$ procedure. We claim that
if any of the two partitions $(S'_{k+1}, T'_{k+1})\in \{(\{X_{k+1}, S'_k\}, T'_k), (S'_k, \{X_{k+1}, T'_k\})\}$ is removed by the $TRIM$ procedure, then there will remain another partition $(S''_{k+1}, T''_{k+1})$ in $\mathcal{L}^{k+1}$ satisfying:
\begin{equation}
\label{eq:tmp2}
\begin{array}{rcl}
V(S''_{k+1}) & \ge & V(S'_{k+1}), \\
V(T''_{k+1}) & \ge & \rho V(T'_{k+1}) ,\\
V(T''_{k+1}) & \ge & \frac{\epsilon}{2n}\text{MAX}
\end{array}
\end{equation}
To see \eqref{eq:tmp2}, note that since $V(T'_{k+1})\ge \frac{\epsilon}{2n} \text{MAX}$, $(S'_{k+1}, T'_{k+1})$ falls in a bucket $\mathcal{B}_j, j\ne 0$ during the $\text{TRIM}$ procedure.
Thus, the TRIM procedure will select one partition from this bucket, let it be $(S''_{k+1}, T''_{k+1})$. By definition of buckets, $V(T''_{k+1})\ge \frac{\epsilon}{2n} \text{MAX}$. Also, by the criteria for selecting a partition from a bucket, we have $V(S''_{k+1})\ge V(S'_{k+1})$, and by construction of buckets, if $j\ne 0$, $V(T''_{k+1})\ge \rho V(T'_{k+1})$.
Together, \eqref{eq:tmp0}, \eqref{eq:tmp1}, \eqref{eq:tmp2} prove the induction statement in \eqref{eq:induction}. Applying \eqref{eq:induction} for $k=n$, we get that there exists $(S'_n, T'_n) \in \mathcal{L}^n$ satisfying
\begin{eqnarray*}
V(S'_n, T'_n) & = & V(S'_n, V(T'_n))\\
& \ge & V(S'_n, \rho^n V(T_n))\\
& \ge & V(S_n, \rho^n V(T_n))\\
& \ge & \rho^n V( S_n, V(T_n))\\
& = & \rho^n V(S,T)
\end{eqnarray*}
Here the first inequality followed from $V(T_n') \ge \rho^n V(T_n)$. For the second inequality, note that a variable in $S_n$ (and $S_n'$) is accepted if and only if it takes value $1$. Therefore, $S_n$ can be replaced by a $\{0,1\}$ variable $Y$ that takes value $1$ with probability $1-\prod_{i\in S_n} (1-q_i)$ (and similarly $S_n'$ can be replaced by a $\{0,1\}$ variable $Y'$). Then, since we have $E[Y']=V(S'_n) \ge V(S_n) = E[Y]$, the second inequality follows from Lemma \ref{lemma-2}. The third inequality follows from Lemma \ref{lemma-3}.
This completes the proof of the lemma.
\end{proof}
\begin{lemma}
\label{lem:runtime}
Algorithm \ref{alg:fptas} with parameters $\epsilon\in (0,1)$ and $\text{MAX}\ge \frac{\text{OPT}}{2}$ runs in $O(\frac{n^4}{\epsilon^2})$ time.
\end{lemma}
\begin{proof}
Given $\text{MAX}\ge \frac{\text{OPT}}{2}$, in the TRIM procedure (Algorithm \ref{alg:trim}), we always have $\frac{max}{\text{MAX}} \le \frac{\text{OPT}}{\text{OPT}/2} \le 2$. Therefore, the condition $\rho^J max \ge \frac{\epsilon}{2n} \text{MAX}$ in the TRIM procedure ensures that the number of buckets
$$J\le \log_{1/\rho}(\frac{2n}{\epsilon} \frac{max}{\text{MAX}}) \le \log_{1/\rho}(\frac{4n}{\epsilon})=O(\frac{1}{(1-\rho)}\frac{n}{\epsilon})= O(\frac{n^2}{\epsilon^2})$$
Therefore, we maintain $O(\frac{n^2}{\epsilon^2})$ partitions in each iteration. Since for each partition, we need to calculate the expected reward, which is $O(n)$ time, and there are $n$ iterations, we get the lemma statement.
\end{proof}
Now, we are ready to prove Theorem \ref{th:FPTAS-same}.
\paragraph{Proof of Theorem \ref{th:FPTAS-same}}
Let $\mathcal{L}^n$ be the set of partitions returned by Algorithm \ref{alg:fptas} with parameters $\epsilon\in (0,1)$, and
\begin{center} $\text{MAX}:=\frac{1}{2} E[\max(X_1,\ldots, X_n)].$ \end{center}
Then, $\text{MAX}\geq \frac{1}{2} \text{OPT}$, so that by Lemma \ref{lem:runtime}, Algorithm \ref{alg:fptas} runs in time $O(\frac{n^4}{\epsilon^2})$ time.
Also, by the prophet inequality \cite{samuel1984comparison}, $\text{MAX} \le \text{OPT}$,
so that by Lemma \ref{Partition-Subset} and Lemma \ref{ALG-OPT approx},
\begin{center}
$\text{ALG}\ge (1-\epsilon/2) \text{OPT}' \ge (1-\epsilon/2)^2 \text{OPT} \ge (1-\epsilon)\text{OPT}.$
\end{center}
\section{Other algebraic lemmas }
We used the following lemmas in the analysis.
\begin{lemma}\label{scaling}
\textbf{Additive Scaling:} Let $X_1,\ldots, X_n$ be random variables and $c\in\mathbb{R}$ be such that each $Y_i:= X_i + c$ is a non-negative random variable, and let $\sigma$ be a permutation. Then $V(Y_{\sigma(1)}, \ldots, Y_{\sigma(n)}) = V(X_{\sigma(1)}, \ldots, X_{\sigma(n)}) + c$.
\end{lemma}
\begin{proof}
We prove by induction. For one variable, $V(Y) = E[Y] = E[X + c] = E[\max\{X, 0\}] + c = V(X) + c$.
W.l.o.g, let $\sigma = (1, 2, \ldots, k+1)$. For the inductive step:
\begin{eqnarray*}
V(Y_1,\ldots, Y_{k+1}) &=& V(Y_1, V(Y_2\ldots, Y_{k+1}))\\
&=&V(X_1 + c, V(X_2\ldots, X_{k+1}) + c)\\
&=&E[\max\{X_1, V(X_{2}, \ldots, X_{k+1})\}] + c\\
&=& V(X_1,\ldots, X_{k+1}) + c
\end{eqnarray*}
where the second line follows from the induction hypothesis.
\end{proof}
\begin{lemma}\label{Adding to Tail}
For any $v \ge 0$, $E[\max\{X, c+v\}]\leq E[\max\{X, c\}]+v$ and $E[\max\{X,c-v\}]\geq E[\max\{X,c\}]-v$
\end{lemma}
\begin{lemma}\label{lemma-2}
Let $Y_1$ and $Y_2$ be two $\{0,1\}$ random variables where $E[Y_1] \geq E[Y_2]$. Then $E[\max\{Y_1, c\}] \geq E[\max\{Y_2,c\}]$ for any constant $0\leq c < 1$.
\end{lemma}
\begin{lemma}\label{lemma-3}
$E[\max\{X, \delta c\}]/E[\max\{X, c\}]\geq \delta$ for $0\leq \delta\leq 1$ and $c\ge 0$
\end{lemma}
\begin{proof}
For convenience we denote $V(X,c):=E[\max\{X, c\}]$
\begin{eqnarray*}
\frac{V(X,\delta c)}{V(X,c)}&\geq& \frac{V(X,c)-(c-\delta c)}{V(X,c)}\\
&=& 1-\frac{c(1-\delta)}{V(X,c)}\\
&\geq& 1-(1-\delta)\\
&=&\delta
\end{eqnarray*}
where in the first line, we used Lemma $\ref{Adding to Tail}$ and in the third line, we used $V(X,c)\geq c$
\end{proof}
\newpage
\section{Conclusions and further directions}\label{Conclusion}
\blue{In this paper, we took significant steps towards a comprehensive understanding of the optimal ordering problem when the distributions involved have support on a constant number of points. We provided a very strong hardness result that shows the problem is NP-hard even for a very special case of $3$ point distributions. Subsequently, we closed the problem for $2$-point distributions, as well as the said special case of $3$-point distributions, by providing a polynomial time algorithm and an FPTAS respectively. We also provided insights on the impact of ordering by proving improved prophet inequalities.}
There is much left to investigate.
An open question is whether the FPTAS derived in Section \ref{sec:fptas} can be extended to $k$-point distributions for any constant $k$ (we know from \cite{fu2018ptas} that a PTAS is possible). \blue{Our hardness result does not rule out the possibility of such an algorithm.}
We proved that for two-point distributions, the expected reward under optimal ordering is within a factor of $1.25$ of the prophet's reward, thus improving the well-known prophet inequality for worst-case ordering (from factor $2$ to $1.25$). Can such a prophet inequality be proven for best ordering in general $k$-point distributions? Finally, an interesting direction is to conduct such an investigation into optimal ordering for other parametric forms of distributions.
\section{Discussion}
\label{sec:discussion}
\subsection{Optimal ordering for uniform distributions}
In this section, we find the optimal ordering when the random variables $X_i$ are uniform with nested supports $[a_i, b_i]$. In \cite{hill1985selection}, it was shown that uniform random variables with support $[0, b_i]$ should be ordered by decreasing $b_i$; our result thus generalizes theirs.
\begin{prop}
Let $X_1,X_2,\ldots,X_n$ be uniform random variables such that $[a_j, b_j]\subseteq [a_i, b_i]$ for $i<j$. Then the optimal ordering is $(X_1,X_2,X_3,\ldots,X_n)$.
\end{prop}
\begin{proof}
It is enough to show that for any two consecutive random variables $X_i$ and $X_{i+1}$ in an ordering, if $[a_{i+1}, b_{i+1}]\subseteq [a_i, b_i]$, it is always preferable to examine $X_i$ first and then $X_{i+1}$. Because of monotonicity (Lemma \ref{monotonicity}), it is sufficient to only consider the tail end of the ordering (i.e. either $(X_i, X_{i+1}, \ldots)$ or $(X_{i+1}, X_i, \ldots)$). Wlog, assume the two orderings under consideration are $(X_1, X_2, \ldots)$ or $(X_2, X_1, \ldots)$. That is, we fix the ordering after $X_1, X_2$ and suppose the expected reward for that part of the ordering is some real number $c\geq 0$. Thus, our problem is reduced to determining whether $(X_1, X_2, c)$ or $(X_2, X_1, c)$ offers a better expected reward.
Wlog, we scale $X_1$ and $X_2$ (using Lemmas \ref{scaling}, \ref{Mult-Scaling}) such that $X_1\sim U[0,1]$ and $X_2\sim U[a,b]$ where $0<a<b<1$. Let $ALG:S\to \mathbb{R}$ give the expected return under optimal thresholding, where $S = \{(X_1, X_2, c), (X_2, X_1, c)\}$. Let $ALG^+:S\to \mathbb{R}$ give the expected return under optimal thresholding, and which also receives the information $X_1\in [0,a]$, $X_1\in [a,b]$, or $X_1\in [b,1]$ in advance.
Now, we make the observation that $ALG(X_1,X_2,c)\leq ALG^+(X_1,X_2,c)$ since we can simply ignore the additional information that $ALG^+$ provides and $ALG^+$ will return the same value as $ALG$. Hence, $\max\{ALG(X_1,X_2,c),ALG(X_2,X_1,c)\}\leq \max\{ALG^+(X_1,X_2,c),ALG^+(X_2,X_1,c)\}$.
Furthermore, $ALG^+(X_2,X_1,c)= ALG^+(X_1,X_2,c)$. If $X_1\in [0,a]$, we will always reject $X_1$ regardless of whether it comes first or second. In this case, $ALG^+(X_2,X_1,c)=ALG^+(X_1,X_2,c)$.
If $X_1\in [a,b]$, then $X_1$ truncated at $[a,b]$ is distributed $U[a,b]$ so it is iid with $X_2$. In this case, the ordering does not matter, hence $ALG^+(X_2,X_1,c)=ALG^+(X_1,X_2,c)$.
Finally, if $X_1\in [b,1]$, we will always reject $X_2$. Then, the ordering does not matter and $ALG^+(X_2,X_1,c)= ALG^+(X_1,X_2,c)$.
Hence, $\max\{ALG^+(X_1,X_2,c),ALG^+(X_2,X_1,c)\}=ALG^+(X_1,X_2,c)$. Furthermore, $ALG^+(X_1,X_2,c)=ALG(X_1,X_2,c)$: since $X_1$ is probed first, its exact value is observed before any decision is made, so the advance information about which interval $X_1$ falls into provides no additional benefit. Therefore, we have the inequality
\begin{eqnarray*}
\max\{ALG(X_1,X_2,c),ALG(X_2,X_1,c)\}&\leq & \max\{ALG^+(X_1,X_2,c),ALG^+(X_2,X_1,c)\}\\
&=& ALG^+(X_1,X_2,c)\\
&=& ALG(X_1,X_2,c)
\end{eqnarray*}
Hence, it is always beneficial to order $X_1$ before $X_2$ regardless of the remaining sequence. This proves the result.
\end{proof}
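
As a quick numerical illustration of the proposition (not part of the proof), the following Python sketch compares the two orderings of a pair of nested uniform variables followed by a constant continuation reward $c$, using the closed form of $E[\max(X,c)]$ for $X\sim U[a,b]$; the specific numbers are arbitrary.
\begin{verbatim}
# Numerical check: probing the wider (containing) uniform interval first is better.
def emax_uniform(a, b, c):
    """E[max(X, c)] for X ~ U[a, b]."""
    if c <= a:
        return (a + b) / 2.0
    if c >= b:
        return c
    return c * (c - a) / (b - a) + (b * b - c * c) / (2.0 * (b - a))

def value(order, c):
    """Backward recursion over uniform variables (given as (a, b) pairs) ending in constant c."""
    v = c
    for (a, b) in reversed(order):
        v = emax_uniform(a, b, v)
    return v

wide, narrow, c = (0.0, 1.0), (0.2, 0.7), 0.3   # [0.2, 0.7] is nested in [0, 1]
assert value([wide, narrow], c) >= value([narrow, wide], c)
\end{verbatim}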
\section{Easy Orderings}\label{Easy Orderings}
\newcommand{Left Support Property\xspace}{Left Support Property\xspace}
\newcommand{LSP\xspace}{LSP\xspace}
\begin{theorem}\label{th:Two-Point_ordering}
Given random variables $X_i, i=1,\ldots, n$ with arbitrary two-point distributions, there exists an algorithm to find the optimal ordering for optimal stopping~in $O(n^2)$ time.
\end{theorem}
To derive the above result, we first investigate a simpler case when the left endpoint $a_i=0$ for all $i$. In this case, the optimal ordering turns out to be very simple: the variables can be ordered simply in descending order of their right endpoints. Our key result is in Lemma \ref{lem:LEP} and Corollary \ref{cor:LEP} where we show a Left Support Property\xspace (LSP\xspace) of optimal ordering for any set of distributions. This property enables us to extend the above simple algorithm to obtain an optimal ordering algorithm for general two-point distributions.
The following lemma characterizes the optimal ordering for the case when $a_i=0$ for all $i$.
\begin{lemma}\label{0lep-offlinemax}
Let $X_i, i=1,\ldots, n$, be random variables with two-point distributions such that $a_i=0$ for all $i$. Then an optimal ordering can be obtained by ordering the variables in descending order of their right endpoints $b_i$. That is, an ordering $\sigma$ is optimal if $b_{\sigma(1)}\ge \cdots \ge b_{\sigma(n)}$.
Furthermore, under the additional condition that $a_i\ne b_i$ and $0<p_i <1$ for all $i$, an ordering $\sigma$ is optimal {\it only if} $b_{\sigma(1)}\ge \cdots \ge b_{\sigma(n)}$.
\end{lemma}
\begin{proof}
Given two-point distributions with $a_i=0$ and an ordering $\sigma$ with $b_{\sigma(1)}\ge \cdots \ge b_{\sigma(n)}$, the optimal stopping policy (refer to Section \ref{sec:prelims}) reduces to the following: at any step $t$, check if $X_{\sigma(t)}=b_{\sigma(t)}$ (i.e., it realizes its right endpoint); if yes, stop; otherwise continue. Since the variables are probed in descending order of right endpoints, the expected value $V_\sigma$ of this policy equals the expected value of the maximum realized right endpoint (or $0$ if none is realized). Since each $X_i$ can take only the values $a_i=0$ and $b_i\ge 0$, this is the same as $E[\max(X_1, \ldots, X_n)]$, that is, the expected hindsight maximum or the prophet's expected reward. Thus, trivially, such an ordering $\sigma$ is optimal.
To prove the `only if' part of the lemma statement, suppose that for an ordering $\sigma$, there exists $j$ such that $b_{\sigma(j)} < b_{\sigma(j+1)}$. From the above discussion, there exists an ordering that achieves the expected hindsight maximum reward. Hence, it is sufficient to prove that with some positive probability, the reward at the stopping time under ordering $\sigma$ will be strictly smaller than the hindsight maximum.
Consider the event that $X_{\sigma(j)}$ is probed and takes value $b_{\sigma(j)}$. This event will happen, e.g., if $X_{\sigma(i)}=0$ for all $i<j$ and $X_{\sigma(j)}=b_{\sigma(j)}$, which has positive probability since $0<p_i<1$ for all $i$. Under this event, there are two possible scenarios for any stopping policy: either the stopping policy can accept $X_{\sigma(j)}=b_{\sigma(j)}$ and stop; or it can continue to probe the next variable. In both scenarios, we argue there is a positive probability that the hindsight reward is higher than the reward at stopping time. In the first scenario, this is the case if $X_{\sigma(j+1)}=b_{\sigma(j+1)}>b_{\sigma(j)}$ which can happen with probability $p_{\sigma(j+1)}>0$. In the second scenario, this is the case if $X_{\sigma(j+1)}=\cdots = X_{\sigma(n)}=0$, which can happen with probability $\prod_{i\ge j+1} (1-p_{\sigma(i)})>0$.
\end{proof}
\begin{lemma}[Left Support Property\xspace~(LSP\xspace)]
\label{lem:LEP} Suppose $X_1$, $X_2$, $\ldots$, $X_n$ are random variables with bounded support $X_i\in [a_i, b_i]$.
Then there exists an optimal ordering $\sigma$ with the property that \mbox{$\Pr(X_{\sigma(i)} \le V_\sigma(i+1))>0$} for $i=1,\ldots, n-1$. That is, for every $X_i$, its distribution has non-zero support on the left of the value $V_\sigma(i+1)$.
\end{lemma}
\begin{proof}
The proof is by construction. We show that from any optimal ordering we can obtain an ordering that satisfies LSP\xspace, without decreasing its value. W.l.o.g. let $\sigma=(1, \ldots, n)$ be an optimal ordering that does not satisfy LSP\xspace. Let $X_i$ be the first r.v. among $X_1,\ldots, X_{n-1}$ which violates LSP\xspace, i.e. $P(X_i \leq V_\sigma(i+1))=0$. Then, $X_i>V_\sigma(i+1)$ with probability $1$, and by definition $$V_\sigma(i)=E[\max(X_i, V_\sigma(i+1))] = E[X_i].$$
Now, consider an alternate ordering $$\sigma'=(1, \ldots, i-1, i+1, \ldots, n, i),$$
where variable $X_i$ is pushed to the end.
We show that $\sigma'$ satisfies LSP\xspace.
Observe that (refer to \eqref{eq:backward}),
$$V_{\sigma'}(i) = V(X_{i+1}, \ldots, X_n, X_i) = E[\max(X_{i+1}, \cdots, E[\max(X_n, E[\max(X_i, 0)])]\cdots)] \ge E[X_i]$$
Thus,
\begin{equation}
\label{eq:1}
V_{\sigma'}(i) \ge V_\sigma(i) = E[X_i]
\end{equation}
From above we can derive the following conclusions about the new ordering $\sigma'$:
\begin{itemize}
\item LSP\xspace~ property is satisfied for indices $i, \ldots n-1$ in $\sigma'$: Suppose for contradiction that for some $i\le j\le n-1$, LSP\xspace property is violated, i.e., suppose that $X_{\sigma'(j)}>V_{\sigma'}(j+1)$ with probability $1$. Now, since
$$V_{\sigma'}(j+1)\ge V_{\sigma'}(n) = E[\max(X_i, 0)]\ge E[X_i],$$
we have that $X_{\sigma'(j)}> E[X_i]$ with probability $1$. Since $\sigma'(j)\in \{i+1, \ldots, n\}$, this implies $V_\sigma(i+1)=V(X_{i+1}, \ldots, X_n)>E[X_i]$. Since its expected value is below $V_\sigma(i+1)$, $X_i$ must take a value below $V_\sigma(i+1)$ with non-zero probability. This implies that the LSP\xspace property is satisfied by $X_i$ in ordering $\sigma$, which is a contradiction to the assumption we started with.
\item LSP\xspace~ property is satisfied for indices $1, \ldots, i-1$ in $\sigma'$: Since $\sigma(j)=\sigma'(j)$ for $1\le j\le i-1$, and $V_{\sigma'}(i) \ge V_\sigma(i)$, we have that (refer to \eqref{eq:backward})
$$V_{\sigma'}(j) = V(X_j,\ldots, X_{i-1}, V_{\sigma'}(i))\ge V(X_j,\ldots, X_{i-1}, V_{\sigma}(i)) = V_\sigma(j).$$
This means for $1\le j\le i-1$, $$\Pr(X_{\sigma'(j)} \le V_{\sigma'}(j+1))\ge \Pr(X_{\sigma(j)} \le V_{\sigma}(j+1))>0,$$
where the last inequality follows from the assumption that $X_i$ is the first variable to violate the LSP\xspace property in the ordering $\sigma$. Thus, the LSP\xspace property is satisfied for indices $1\le j\le i-1$ in the new ordering.
\item $V_{\sigma'}\ge V_{\sigma}$: If $i=1$, $V_{\sigma'}(1)\ge V_\sigma(1) $ follows from \eqref{eq:1}, otherwise it follows from the observation in the previous bullet that $V_{\sigma'}(j)\ge V_{\sigma}(j)$ for $1\le j\le i-1$.
\end{itemize}
Thus, the new ordering $\sigma'$ satisfies the LSP\xspace~ property and has value greater than or equal to that of the optimal ordering ($V_{\sigma'}\ge V_\sigma$).
\end{proof}
A corollary of Lemma \ref{lem:LEP} is the following Left Endpoint Property (LEP) for two-point distributions:
\begin{corollary}[Left Endpoint Property (LEP) for two-point distributions]
\label{cor:LEP}
Suppose $X_1$, $X_2$, $\ldots$, $X_n$ are random variables with two-point distributions with support $X_i\in \{a_i, b_i\}$.
Then there exists an optimal ordering $\sigma$ with the property that $a_{\sigma(i)}\le V_\sigma(i+1)$ for $i=1, \ldots, n-1$.
\end{corollary}
The left endpoint property allows us to derive the following characterization of the optimal ordering in two-point distributions, which will significantly reduce the space of orderings to search over in order to find the optimal ordering.
\begin{prop} \label{Two-Point-Structure}
Given $n$ variables with two-point distributions, define $n$ orderings as follows: for each $i=1,\ldots, n$, define $\sigma^i$ as any ordering obtained by setting the last variable as $X_i$, and ordering the remaining variables in weakly descending order of their right endpoints. Then, at least one of these $n$ orderings is optimal.
\end{prop}
\begin{proof}
By Corollary \ref{cor:LEP} there exists an optimal ordering satisfying LEP, w.l.o.g. assume it is $\sigma^*=(1, \ldots, n)$.
Let $\sigma$ be
an ordering such that $\sigma(n)=n$, and $b_{\sigma(1)}\geq \ldots \geq b_{\sigma(n-1)}$, i.e., $\sigma$ is one of the $n$ orderings defined in the proposition statement. Then, we show that $V_\sigma\ge V(X_1,\ldots, X_n)$.
W.l.o.g., we assume that $b_i > E[\max(X_n, 0)]$ for all $i$; otherwise, $X_i$ will always be rejected in both $\sigma$ and $\sigma^*$. We will show that $V_{\sigma} = V(X_{\sigma(1)},\ldots, X_{\sigma(n-1)}, X_n)\geq V(X_1,\ldots, X_{n-1}, X_n) = V_{\sigma^*}$.
For $i=1,\ldots, n-1$, define:
\[
X_i' = \left\{\begin{array}{ll}
E[\max(X_n, 0)], & \text{w.p. } 1-p_i,\\
b_i, & \text{w.p. } p_i
\end{array}
\right\}
\]
and
\[
X_i'' = \left\{\begin{array}{ll}
0, & \text{w.p. } 1-p_i,\\
b_i - E[\max(X_n,0)], & \text{w.p. } p_i
\end{array}\right\}
\]
We claim the following sequence of relations:
\begin{align*}
V(X_{\sigma(1)},\ldots, X_{\sigma(n-1)}, X_n) & \geq V(X'_{\sigma(1)},\ldots, X'_{\sigma(n-1)}, X_n) \tag{1} \\
&=V(X'_{\sigma(1)},\ldots, X'_{\sigma(n-1)}) \tag{2} \\
&=V(X''_{\sigma(1)},\ldots, X''_{\sigma(n-1)}) + E[\max(X_n, 0)] \tag{3}\\
&\geq V(X''_{1},\ldots, X''_{n-1}) + E[\max(X_n, 0)] \tag{4} \\
&= V(X'_{1},\ldots, X'_{n-1})\tag{5}\\
&= V(X'_{1},\ldots, X'_{n-1}, X_n)\tag{6}\\
&= V(X_1,\ldots, X_n)\tag{7}
\end{align*}
We now justify each of the relations:
For $(1)$: First notice that $V_{\sigma}(i+1)\geq E[\max(X_n,0)]$ for all $i<n$. If $a_{\sigma(i)} \leq E[\max(X_n,0)] \le V_{\sigma}(i+1)$, then $X_{\sigma(i)}$ would be rejected whenever its left endpoint is realized; therefore, increasing its left endpoint to $E[\max(X_n,0)]$ does not change the overall expected value. On the other hand, if $a_{\sigma(i)} > E[\max(X_n,0)]$, then decreasing the left endpoint from $a_{\sigma(i)}$ to $E[\max(X_n,0)]$ can only decrease (or not change) the overall expected return.
For $(2)$ and $(6)$: If, in either of the two orderings, the second-to-last variable ($X'_{\sigma(n-1)}$ or $X'_{n-1}$) is probed and its left endpoint ($E[\max(X_n,0)]$) is realized, then accepting it gives reward $E[\max(X_n,0)]$, while rejecting it and continuing to $X_n$ also gives expected reward $E[\max(X_n, 0)]$. Thus, removing $X_n$ does not affect the overall expected reward.
For $(3)$ and $(5)$: $X_i'\ge 0$ and $X_i''=X_i'-E[\max(X_n,0)]\ge 0$ due to the assumption (made w.l.o.g.) that $b_i \geq E[\max(X_n, 0)]$; we can therefore use the observation that for any sequence of variables $(Y_1,\ldots, Y_k)$ and constant $c$, if $Y_i+c\ge 0$ for all $i$, then $V(Y_1+c,\ldots, Y_k+c) = V(Y_1,\ldots, Y_k) + c$. This is formally proven in Lemma \ref{scaling} in the appendix.
For $(4)$: The $X_i''$ are two-point distributions with left endpoint $0$, and $\sigma$ arranges them in weakly descending order of their right endpoints $b_i - E[\max(X_n,0)]$ (equivalently, of the $b_i$). Hence, by Lemma \ref{0lep-offlinemax}, $\sigma$ is an optimal ordering for the variables $X_i''$, and the inequality follows.
For $(7)$: This is due to the fact that $\sigma^*$ is an LEP ordering. Therefore, $a_i \leq V_{\sigma^*}(i+1)$ for all $i<n$, so that if $X_i$ realizes its left endpoint, it will be rejected anyway. Hence, changing its left endpoint to $E[\max(0, X_n)]\le V_{\sigma^*}(i+1)$ will not change its overall expected value.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:Two-Point_ordering}]
Proposition \ref{Two-Point-Structure} narrows down the space of orderings to be searched over to just $n$ orderings $\sigma^i, i=1,\ldots, n$, where the ordering $\sigma^i$ was defined in Proposition \ref{Two-Point-Structure}.
It will take $O(n)$ time to compute the expected reward $V_{\sigma^i}$ of each of these $n$ orderings, and therefore takes $O(n^2)$ time to compute the expected reward for all $n$ orderings and find the best ordering.
\end{proof}
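
For concreteness, a minimal Python sketch of this $O(n^2)$ procedure is given below; the identifiers are ours, and the representation of a two-point variable as a triple $(a_i, b_i, p_i)$ (value $b_i$ with probability $p_i$, value $a_i$ otherwise) is only illustrative. It enumerates the $n$ candidate orderings of Proposition \ref{Two-Point-Structure} and evaluates each by backward recursion.
\begin{verbatim}
def value(order):
    """Expected reward of the optimal stopping policy for a fixed order (backward recursion)."""
    v = 0.0
    for (a, b, p) in reversed(order):
        v = p * max(b, v) + (1.0 - p) * max(a, v)
    return v

def best_two_point_ordering(variables):
    """variables: list of (a_i, b_i, p_i). Tries the n candidate orderings described above
    and returns (best value, best ordering). Sorting once up front, instead of inside the
    loop as done here for readability, recovers the stated O(n^2) bound."""
    best = None
    for i in range(len(variables)):
        rest = sorted(variables[:i] + variables[i+1:], key=lambda t: -t[1])  # descending b_j
        order = rest + [variables[i]]
        cand = (value(order), order)
        if best is None or cand[0] > best[0]:
            best = cand
    return best
\end{verbatim}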
\section{FPTAS for 3-Point Distributions with Differing Middle Point}
\label{sec:fptas}
\newcommand{\text{OPT}}{\text{OPT}}
\newcommand{\text{ALG}}{\text{ALG}}
Previously, in Section \ref{sec:hardness}, we proved that ordering $n$ random variables with three-point distributions is NP-hard even when each random variable $X_i$ has a three-point distribution with support on $\{0,m_i,1\}$. \blue{In this section, we provide an FPTAS for a slight generalization of this special case where the support is on three points $\{a_i, m_i, 1\}$ with $a_i < m_i <1$. Note that in \cite{fu2018ptas}, the authors provide a PTAS for this problem with distributions that have support on any constant number of points. The runtime complexity of the PTAS proposed there is $O(n^{2^\epsilon})$ in general. However, (to the best of our understanding) when applied to our problem, its complexity reduces to $O\left(\left(\frac{n}{\epsilon}\right)^{\epsilon^{-3}}\right)$. On the other hand, for any $\epsilon\in(0,1)$, our FPTAS runs in time $O(\frac{n^5}{\epsilon^2})$ (although it only applies to the special case of $3$-point support).}
\begin{theorem}[FPTAS for three-point distributions with same right endpoint]
\label{th:FPTAS-different}
Given a set of $n$ random variables $X_1,\ldots, X_n$, where each random variable $X_i, i=1,\ldots, n$, has a three-point distribution with support on $\{a_i,m_i,1\}$ for some $m_i\in [0,1]$ and $a_i<m_i$.
Then, for any $\epsilon\in(0,1)$, there exists an algorithm that runs in time $O(\frac{n^5}{\epsilon^2})$ and finds an ordering $\sigma$ such that $\text{ALG}=V_\sigma \ge (1-\epsilon)\text{OPT}$. Here, $\text{OPT}:=V_{\sigma^*}$ denotes the optimal expected reward at stopping time under an optimal ordering $\sigma^*$.
\end{theorem}
We first demonstrate an FPTAS for the special case where both the left and right endpoints are the same for all $i$. Later, we extend it to an FPTAS for the case when only the right endpoints are the same, proving Theorem \ref{th:FPTAS-different}.
\paragraph{Algorithm for same left and right end points.} We are given a set of random variables $X_1, \ldots, X_n$ with three-point distributions (refer to Definition \ref{def:three-point}) where for each $i$, $X_i$ takes values $\{0, m_i, 1\}$ with probabilities $1-p_i-q_i$, $p_i$ and $q_i$, respectively. The algorithm for finding an optimal ordering is based on the characterization of optimal orderings in this special case provided by Claim \ref{claim:ST} in Section \ref{sec:hardness} (also see Remark \ref{rem:ST}).
Recall that in this case, there exists an optimal ordering $\sigma$
that (under the optimal stopping policy) partitions the $n$ variables into an ordered partition $(S^\sigma, T^\sigma)$ (refer to Definition \ref{def:op}); and therefore the optimal ordering problem can be solved by finding an optimal ordered partition.
Specifically, define the collection $\mathcal{L}$ of ordered partitions as the collection of sequences $(S,T)$ of $n$ random variables that can be formed by partitioning the set of variables $\{X_1, \ldots X_n\}$ into two sets $S$ and $T$ with $T\ne \phi$, and then ordering the variables within $S$ and within $T$ in weakly descending order of $E_i$. Then, from Claim \ref{claim:ST}, we have that
\begin{equation}
\label{eq:paritionProblem}
\text{OPT} = \max_{(S, T)\in \mathcal{L}} V(S, T)
\end{equation}
Using this observation, Algorithm \ref{alg:fptas} is designed to solve the problem in \eqref{eq:paritionProblem} of finding an optimal ordered partition.
The idea behind the FPTAS is to discretize the interval $[0,1]$ using a multiplicative grid with parameter $1-\frac{\epsilon}{2n}$, so that it needs to search over only $poly(n,1/\epsilon)$ partitions.
A detailed description of the steps involved is provided below (Algorithm \ref{alg:fptas}). Here, given a sequence $A$ of random variables, $\{X_i, A\}$ denotes the sequence of random variables formed by concatenating a variable $X_i$ to the beginning of sequence $A$.
\newcommand{\text{MAX}}{\text{MAX}}
\begin{algorithm}[h]
\begin{algorithmic}
\caption{FPTAS for finding the optimal ordering through optimal partitioning
\label{alg:fptas}
}
\STATE {\bf Input:} Ordered sequence of variables $X_1, \ldots, X_n$ such that $E_1 \le \cdots \le E_n$, parameters $\text{MAX}$, $\epsilon$.
\STATE {\bf Initialize:} $\mathcal{L}^0=\{(\phi, \phi)\}, \mathcal{L}^1 = \cdots = \mathcal{L}^n=\phi$;
\FORALL{$k=1, \ldots, n$}
\FORALL{$(S,T)\in \mathcal{L}^{k-1}$}
\STATE Add two partitions $(\{X_k, S\}, T)$ and $(S, \{X_k, T\})$ to $\mathcal{L}^k$.
\ENDFOR
\STATE Call Algorithm \ref{alg:trim} to reduce the number of partitions in $\mathcal{L}^k$ by setting $\mathcal{L}^k\leftarrow \text{TRIM}(\mathcal{L}^k, \epsilon, \text{MAX})$.
\ENDFOR
\STATE Return $\mathcal{L}^n$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\begin{algorithmic}
\caption{TRIM($\mathcal{L}, \epsilon, \text{MAX}$)
\label{alg:trim}
}
\STATE {\bf Initialize:} $\rho :=\left(1-\frac{\epsilon }{2n}\right)$, $max:=\max_{(S,T)\in \mathcal{L}} V(T)$,
and
$J:=\max\{j: \rho^j max\ge \frac{\epsilon}{2n}\text{MAX}\}.$
\STATE Divide the partitions in $\mathcal{L}$ into $J+1$ buckets as
\begin{center}$\mathcal{B}_j := \{(S,T): \rho^{j}
max < V(T) \le \rho^{j-1} max
\}, \text{ for } j=1,\ldots, J$
\end{center}
$$\mathcal{B}_0 := \{(S, T): T=\phi\}$$
\STATE Set $(S^j, T^j) := \arg \max_{(S,T)\in \mathcal{B}_j} V(S)$, for $j=0,1,\ldots, J$.
\STATE Return $\mathcal{L} := \{(S^j, T^j)\}_{j=0}^J$.
\end{algorithmic}
\end{algorithm}
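
To make the two routines above concrete, the following Python sketch (ours, with illustrative identifiers) mirrors the structure of Algorithms \ref{alg:fptas} and \ref{alg:trim}. For brevity it omits the lower cutoff $J$, i.e., it never discards partitions with very small $V(T)$, so the input parameter $\text{MAX}$ is not needed; it therefore illustrates the bucketing idea rather than the exact routine whose guarantee is proven in the appendix.
\begin{verbatim}
import math

def value(seq, tail=0.0):
    """Backward recursion for variables with support {0, m, 1}; seq is a list of (m, p, q)
    triples with p = P(X = m), q = P(X = 1); tail is the reward if everything is rejected."""
    v = tail
    for (m, p, q) in reversed(seq):
        v = q * max(1.0, v) + p * max(m, v) + (1.0 - p - q) * v   # v >= 0 throughout
    return v

def trim(partitions, eps, n):
    """Keep, for each multiplicative bucket of V(T), the partition with the largest V(S)."""
    rho = 1.0 - eps / (2.0 * n)
    vmax = max(value(T) for (S, T) in partitions)
    best = {}
    for (S, T) in partitions:
        vt = value(T)
        if not T or vt <= 0.0:
            key = 0
        elif vt >= vmax:
            key = 1
        else:
            key = 1 + int(math.log(vt / vmax) / math.log(rho))
        if key not in best or value(S) > value(best[key][0]):
            best[key] = (S, T)
    return list(best.values())

def fptas(variables, eps):
    """variables must be listed in ascending order of E_i = (m*p + q) / (p + q)."""
    n = len(variables)
    parts = [([], [])]
    for x in variables:
        parts = [([x] + S, T) for (S, T) in parts] + [(S, [x] + T) for (S, T) in parts]
        parts = trim(parts, eps, n)
    # The value of an ordered partition (S, T) is that of the ordering S followed by T.
    return max(value(S + T) for (S, T) in parts)
\end{verbatim}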
We prove the following theorem regarding Algorithm \ref{alg:fptas}.
The proof is in the appendix.
\begin{theorem}[FPTAS for three-point distributions with same left and right endpoint]
\label{th:FPTAS-same}
Given a set of $n$ random variables $X_1,\ldots, X_n$, where each random variable $X_i, i=1,\ldots, n$, has a three-point distribution with support on $\{0,m_i,1\}$ for some $m_i\in [0,1]$.
Then, for any $\epsilon\in(0,1)$, Algorithm \ref{alg:fptas} runs in time $O(\frac{n^4}{\epsilon^2})$ and finds an ordering $\sigma$ such that $\text{ALG}=V_\sigma \ge (1-\epsilon)\text{OPT}$. Here, $\text{OPT}:=V_{\sigma^*}$ denotes the optimal expected reward at stopping time under an optimal ordering $\sigma^*$.
\end{theorem}
Now we are ready to prove Theorem~\ref{th:FPTAS-different} using an extension of Algorithm \ref{alg:fptas}.
\paragraph{\bf Proof of Theorem \ref{th:FPTAS-different}}
Let $q_i:=P(X_i=1)$, $p_i:=P(X_i=m_i)$ and so $P(X_i=0)=1-p_i-q_i$. From Lemma \ref{lem:LEP}, we know that there exists an optimal ordering $\sigma$ with the property that $Pr(X_{\sigma(i)}\leq V_\sigma(i+1))>0$ for $i=1,\ldots,n-1$. This implies that $a_{\sigma(i)}\leq V_\sigma(i+1)$ and, as a consequence of the DP thresholds, if $a_{\sigma(i)}$ is ever realized from $X_{\sigma(i)}$ in this ordering, then the variable would be rejected. Define $X_{\sigma(i)}'$ to have support $\{0, m_{\sigma(i)}, 1\}$ and probabilities $\{1-p_{\sigma(i)}-q_{\sigma(i)}, p_{\sigma(i)}, q_{\sigma(i)}\}$ for $i=1,\ldots n-1$, and $X_{\sigma(n)}':=E[X_{\sigma(n)}]$. Then, $V(X_{\sigma(1)},\ldots, X_{\sigma(n)}) = V(X_{\sigma(1)}',\ldots, X_{\sigma(n)}')$.
Now, if we knew $\sigma(n)$, then we could transform $X_{\sigma(i)}$ into $X_{\sigma(i)}'$ for $i=1,\ldots,n$ and then, since $X_{\sigma(i)}'$ has support on $\{0,m_i,1\}$, we can use Algorithm~\ref{alg:fptas} and Theorem~\ref{th:FPTAS-same}. Since we do not know $\sigma(n)$, we run our FPTAS in Algorithm~\ref{alg:fptas} $n$ times. Here, in the $i^{th}$ iteration, we define $X_i':=E[X_i]$ and $X_j'$ to have support $\{0, m_j, 1\}$ and probabilities $\{1-p_j-q_j, p_j, q_j\}$ for $j\neq i$. \blue{The algorithm is then run on the $X'_j$ variables instead of the $X_j$'s. In one of these iterations (specifically the iteration where $X_i':=E[X_i]$, for $i=\sigma(n)$), the ordering found by the algorithm satisfies the required guarantee.}
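
A small Python helper (ours, purely illustrative) for the per-iteration transformation described above; it builds the $i$-th modified instance, which can then be passed to the base routine sketched earlier.
\begin{verbatim}
def transformed_instance(variables, i):
    """variables[j] = (a_j, m_j, p_j, q_j) with support {a_j, m_j, 1}.
    Returns the i-th instance: every X_j (j != i) is mapped to support {0, m_j, 1}
    with unchanged probabilities, and X_i is replaced by the constant E[X_i]."""
    a_i, m_i, p_i, q_i = variables[i]
    e_xi = (1.0 - p_i - q_i) * a_i + p_i * m_i + q_i   # E[X_i], offered if all else is rejected
    others = [(m, p, q) for j, (a, m, p, q) in enumerate(variables) if j != i]
    return others, e_xi
\end{verbatim}
Here the constant $E[X_i]$ plays the role of the deterministic last variable $X'_{\sigma(n)}$ (the tail argument of the value routine above).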
\section{Hardness of optimal ordering}
\label{sec:hardness}
We show that the problem of finding an optimal ordering of the random variables is NP-hard even for the highly restricted special case in which each random variable $X_i$ is supported on exactly three points: $0$, $1$, and $m_i \in (0,1)$.
Let $q_i := P(X_i = 1)$ and $p_i := P(X_i = m_i) $, so that $P(X_i = 0) = 1 - p_i - q_i$. We assume that $p_i > 0$, $q_i > 0$, and $p_i + q_i < 1$ for all $i$, so that each random variable can assume each value in its support with positive probability.
\begin{theorem}
\label{th:hardness}
The problem of optimal ordering for optimal stopping (refer to Definition \ref{def:prob}) is NP-hard for the case when for each $i=1,\ldots, n$, the random variable $X_i$ has a three-point distribution with support on $\{0,m_i,1\}$ for some $m_i \in (0,1)$.
\end{theorem}
To prove the hardness result, we shall prove some useful properties on the structure of optimal orderings and optimal stopping rules for such random variables.
Fix any ordering $\sigma$ of the random variables $X_1, X_2, \ldots, X_n$. In this case, the optimal stopping policy essentially partitions the $n$ variables into two categories:
those which are accepted on being probed if and only if they realize their right endpoint $1$, i.e., the (ordered) subset ${S}^\sigma:=\{X_{\sigma(i)}, i\in [n]: V_{\sigma}(i+1)> m_{\sigma(i)}\}$; and the remaining (ordered) subset ${T}^\sigma:= \{X_{\sigma(1)}, \ldots, X_{\sigma(n)}\} \setminus {S}^\sigma$. Note that since the last random variable is always accepted irrespective of its value, the last variable will always be in $T^\sigma$.
We claim that in {\em any} optimal ordering $\sigma$, the random variables in $S^\sigma$ appear before the random variables in $T^\sigma$. Further, the random variables in $T^\sigma$ must be ordered in weakly descending order of $E_i$, where
$$E_i := E[X_i | X_i > 0] \; = \; \frac{m_i p_i + q_i}{p_i+q_i}$$
\begin{claim}
\label{claim:ST}
Let $X_1,\ldots, X_n$ be random variables that have three-point distributions with support $\{0,m_i, 1\}$ and probabilities $\{1-p_i-q_i, p_i, q_i\}$ such that $m_i\in (0,1), p_i>0, q_i>0, p_i+q_i<1$, for $i=1, \ldots, n$. Then, an ordering $\sigma$ is optimal if and only if (i) $S^\sigma$ precedes $T^\sigma$; (ii) the random variables in $S^\sigma$ are arranged arbitrarily; and (iii) the random variables in $T^\sigma$ appear in weakly decreasing order of $E_i$.
In particular, if $E_1 = E_2 = \ldots = E_n$, then the random variables in $T^\sigma$ can be arranged arbitrarily as well.
\end{claim}
\begin{proof}
A complete proof of this claim is provided in Appendix \ref{app:hardness}. The proof uses an interchange argument, where we show that if in the optimal ordering $\sigma$, the ordering of any pair of variables $X_i, X_j$ violates the stated conditions, then they can be interchanged to increase $V_\sigma$, which would be a contradiction to the optimality of the ordering $\sigma$.
\end{proof}
\begin{remark}
\label{rem:ST}
In fact, if the conditions $m_i\in (0,1), p_i>0, q_i>0, p_i+q_i<1$ are not satisfied, e.g., if there are some variables with $m_i\in \{0,1\}$, or if the probability of one or more of the three support points is $0$, then the proof of Claim \ref{claim:ST} can be modified to show that the conditions (i), (ii), (iii) are still sufficient (though not necessary) for an ordering $\sigma$ to be optimal.
\end{remark}
\begin{definition}[Ordered partitions and Optimal partitioning problem]
\label{def:op}
A sequence $(S,T)$ of $n$ random variables is an {\it ordered partition} if $S$ and $T$ partition the set of variables $X_1,\ldots, X_n$, $T$ is non-empty, the variables within $S$ are ordered arbitrarily, and the variables within $T$ are ordered in weakly descending order of $E_i$.
The {\it optimal partitioning problem} is defined as the problem of finding an ordered partition $(S,T)$ that maximizes $V(S,T)$ among all ordered partitions.
\end{definition}
\begin{corollary} By Claim \ref{claim:ST}, in the three-point distribution case considered here, the optimal ordering problem is equivalent to the optimal partitioning problem.
\end{corollary}
We prove the NP-hardness of the optimal ordering problem by showing that finding an optimal partition is NP-hard. To that end, we consider the \textsc{Subset Product} problem, a multiplicative analog of the \textsc{Subset Sum} problem, which is known to be NP-complete (see \cite{ng2010product}):
\begin{problem}\label{subset_product}
\textsc{Subset Product}: Given integers $a_1,\ldots, a_n$ with each $a_i > 1$ and a positive integer $B$, is there a subset $T\subseteq \{1,\ldots,n\}$ such that $\prod_{i\in T}a_i = B$?
\end{problem}
\begin{prop}
The optimal partitioning problem (refer to Definition \ref{def:op}) is NP-hard when each of the random variables $X_1,\ldots, X_n$ has a three-point distribution with support $\{0,m_i, 1\}$ and probabilities $\{1-p_i-q_i, p_i, q_i\}$ such that $m_i\in (0,1), p_i>0, q_i>0, p_i+q_i<1$, for $i=1, \ldots, n$.
\end{prop}
\begin{proof}
Given an instance of \textsc{Subset Product}, consider the following collection of random variables. Associated with each element $a_i$ is a random variable $X_i$ with distribution shown in Table~\ref{tab:xidist}.
\begin{table}[h]
\centering
\begin{tabular}{r ccc}
\hline
Value & 0 & $\frac{B^2 - a_i}{B^2+1}$ & 1\\
Probability & $\frac{1}{a_i^2}$ & $\frac{a_i-1}{a_i^2}$ & $\frac{a_i-1}{a_i}$\\
\hline
\end{tabular}
\caption{Distribution of $X_i$}
\label{tab:xidist}
\end{table}
Notice that $X_i$ has support on $0$, $1$, and $m_i = (B^2 - a_i) / (B^2 + 1)$. Notice also that we may assume w.l.o.g. that $a_i \le B$ for all $i$ (an element with $a_i > B$ cannot belong to a subset with product $B$, since all $a_j>1$, and may be discarded); hence $m_i \in (0,1)$, and since $a_i > 1$, also $0 <p_i < 1$, $0 <q_i < 1$ and $p_i + q_i < 1$ for all $i$. Finally, observe that
$$
E_i \; := \; E[X_i|X_i > 0] \; = \; \frac{(\frac{B^2-a_i}{B^2+1}) (\frac{a_i-1}{a_i^2})+ \frac{a_i-1}{a_i}}{1 - \frac{1}{a_i^2}} \; = \; \frac{B^2}{B^2+1},$$
which is independent of $i$. Thus, in any ordered partition $(S,T)$ for this instance, the ordering within $S$ and the ordering within $T$ is irrelevant.
To avoid cumbersome notation, we let $S$ and $T$ denote a partition of the indices $\{1,2, \ldots, n\}$. The expected reward $V(S,T)$ for an ordered partition $(S, T)$ can be written as
\begin{eqnarray}
V(S,T) & = & 1 - \Pi_{i \in S} (1-q_i) + \bigg(\Pi_{i \in S} (1-q_i) \bigg) \bigg(1 - \Pi_{j \in T} (1-p_j-q_j) \bigg) \frac{B^2}{B^2+1}
\label{eq1}
\end{eqnarray}
An easy way to see this is to exploit the irrelevance of the relative ordering within $S$ and $T$: the decision maker earns 1 whenever any random variable in $S$ is observed to take on a value of 1; if none of the random variables in $S$ is accepted, the conditional expected value of any accepted random variable in $T$ is the same ($B^2/(B^2+1)$), and this possibility occurs unless every one of the random variables in $T$ is observed to be zero. Using the fact that $q_i = 1 - 1/a_i$ and $p_i = 1/a_i - 1/a_i^2$ (so that $1-q_i = 1/a_i$ and $1-p_i-q_i = 1/a_i^2$), we can rewrite Eq.~(\ref{eq1}) as
$$V(S,T) \; = \; 1-\prod_{i\in S}\frac{1}{a_i} +\bigg(\prod_{i\in S}\frac{1}{a_i}\bigg)
\bigg(1-\prod_{i\in T}\frac{1}{a_i^2}\bigg)\bigg(\frac{B^2}{B^2+1}\bigg).$$
Let $\gamma :=\prod_{i=1}^n a_i$, $\gamma_T:=\prod_{i\in T} a_i$, $\gamma_S:=\prod_{i\in S} a_i$. Then $\gamma_S = \gamma / \gamma_T$ and $V(S,T)$ can be written solely as a (one-dimensional) function of $\gamma_T$ as follows:
$$V(S,T)=f(\gamma_T) := 1-\frac{\gamma_T}{\gamma} + \frac{\gamma_T}{\gamma}\bigg(1-\frac{1}{\gamma_T^2}\bigg)\frac{B^2}{B^2+1}$$
Differentiating $f(\cdot)$ with respect to $\gamma_T$ twice, we see that
$$
f'(\gamma_T) = -\frac{1}{\gamma} +\bigg(\frac{B^2}{B^2+1}\bigg)\bigg(\frac{1}{\gamma} + \frac{1}{\gamma\gamma_T^2}\bigg)
$$
and
$$f''(\gamma_T) = \frac{-2B^2}{\gamma\gamma_T^3(B^2+1)} < 0$$
Thus, $f(\gamma_T)$ is strictly concave in $\gamma_T$ and achieves its maximum when $f'(\cdot) = 0$, which occurs when $\gamma_T = B$.
To complete the argument, we observe the following: given any instance of \textsc{Subset Product}, we construct the corresponding instance of our optimal partitioning problem and solve it to optimality. The optimal partition $(S,T)$ has $\gamma_T=B$ if and only if the given instance of \textsc{Subset Product} is a ``yes'' instance. Thus, the NP-completeness of \textsc{Subset Product} implies the NP-hardness of our optimal partitioning problem.
\end{proof}
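
For concreteness, the reduction and the closed-form objective $f(\gamma_T)$ are easy to script; the following Python sketch (ours) builds the instance of Table \ref{tab:xidist} from \textsc{Subset Product} data and checks numerically, on a small example, that $f$ peaks at $\gamma_T=B$.
\begin{verbatim}
def reduction_instance(a, B):
    """Map Subset Product data (a_1..a_n, B) to the (m_i, p_i, q_i) of Table 1."""
    return [((B * B - ai) / (B * B + 1.0),    # m_i
             (ai - 1.0) / (ai * ai),          # p_i = P(X_i = m_i)
             (ai - 1.0) / ai)                 # q_i = P(X_i = 1)
            for ai in a]

def f(gamma_T, gamma, B):
    """Closed-form V(S, T) as a function of the product gamma_T over T (gamma = product over all i)."""
    return 1.0 - gamma_T / gamma + (gamma_T / gamma) * (1.0 - 1.0 / gamma_T ** 2) * B * B / (B * B + 1.0)

a, B = [2, 3, 6], 6                      # "yes" instance: 2 * 3 = 6
gamma = 1
for ai in a:
    gamma *= ai
subset_products = [2, 3, 6, 12, 18, 36]  # products of the non-empty subsets of a
assert f(B, gamma, B) >= max(f(g, gamma, B) for g in subset_products)
\end{verbatim}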
\section{Introduction}\label{Introduction}
Consider a player who can probe a sequence of $n$ independent random variables $X_1, \ldots, X_n$ with known distributions. After observing (the realized value of) $X_i$, the player needs to decide whether to stop and earn reward $X_i$, or reject the reward and probe the next variable $X_{i+1}$. The goal is to maximize the expected reward at the stopping time. This is an instance of the optimal stopping problem, which is a fundamental problem studied from many different aspects in mathematics, statistics, and computer science, and has found a wide variety of applications in sequential decision making and mechanism design.
When the order in which the random variables $X_1, \ldots, X_n$ are probed is fixed, the optimal stopping strategy can be found by solving a simple dynamic program. Under this strategy, at every step $i$, the player would compare the realized value of the current random variable $X_i$ to the expected reward (under the optimal strategy for the remaining subproblem) from the remaining variables $X_{i+1}, \ldots, X_n$, and stop if the former is greater than the latter.
The celebrated prophet inequalities compare the expected reward of the optimal stopping strategy to $E[\max(X_1, X_2, \ldots, X_n)]$, where the latter can be interpreted as the expected reward of a prophet who can foresee (or `prophesize') the values of all random variables in advance and therefore always stops at the random variable with maximum value.
A seminal result of Krengel and Sucheston~\cite{krengel1977semiamarts} upper bounds the ratio of the prophet's expected reward and that of an optimal stopping strategy by $2$, for an arbitrary sequence of $n$ random variables.
Furthermore, they show that this bound is tight even for $n=2$.
Surprisingly, Samuel-Cahn~\cite{samuel1984comparison} shows that we can achieve an approximation ratio of 2 using simpler stopping strategies (rather than an optimal stopping strategy); subsequent work by Chawla et al.~\cite{chawla2010multi} and Kleinberg and Weinberg~\cite{kleinberg2012matroid} identifies other stopping strategies that establish the same bound. Prophet inequalities have since been used to design simple, sequential, posted-price mechanisms that guarantee a constant fraction of the social welfare or revenue achievable by any mechanism~ (see e.g. \cite{chawla2010multi, hajiaghayi2007automated}).
In this paper, we focus on the relatively less studied question of optimizing the {\it order} in which the random variables should be probed. Specifically, besides choosing a stopping strategy, if the player is free to choose the order in which the random variables are probed, then which ordering would maximize the expected reward at the stopping time? This question is motivated both by practical considerations and theoretical observations about the optimal stopping problem.
In practice, many decision-making settings allow the player such a choice of ordering. Consider for example, the problem of sequentially interviewing candidates for a position, which is often presented as a canonical example of the optimal stopping problem. Assuming that the decision of hire/no-hire needs to be made for each candidate immediately after the interview, the interviewer wants to stop interviewing on reaching the candidate with the highest quality. In such a setting, the interviewer may have the liberty to decide the ordering in which the candidates are invited for an interview. Intuitively, given statistical information about the quality of each candidate (based on their resume for example), some orderings of the candidates can make the decision problem easier and can ensure higher expected quality of the hired candidate. This study aims to formalize this intuition by studying the question of finding an optimal ordering, as well as quantifying the gains to the expected reward from this additional degree of freedom.
Some insights into the impact of ordering can be obtained by considering prophet inequalities. For an arbitrary ordering, the prophet inequality cannot be improved beyond the approximation factor of $2$.
On the other hand, a prophet inequality with a much improved factor of $\approx 1.342$ has been shown for i.i.d. random variables \cite{abolhassani2017beating,correa2017posted}. This gap between the identical and the non-identical case can be narrowed through better ordering. An example is the study in \cite{esfandiari2017prophet}, which showed a prophet inequality with a factor of $e/(e-1) \approx 1.58$ for random ordering (a bound that was later improved by \cite{azar2018prophet}).
Optimal orderings have the potential to close this gap even further.
Consider the special case when all distributions have support on two or fewer points (henceforth referred to as the 2-point case). For arbitrary ordering, the prophet inequality cannot be improved beyond the factor of $2$ even in this special case. Here is a simple example (similar to the example in \cite{esfandiari2017prophet}) that demonstrates this limitation. Let $X_1$ be a random variable that takes values $\{0,1\}$, with probabilities $\{1-\epsilon, \epsilon\}$ respectively, and $X_2$ takes value $\epsilon$ with probability $1$. Then, the prophet's expected reward is $E[\max(X_1, X_2)]=\epsilon + (1-\epsilon)\epsilon = 2\epsilon-\epsilon^2$. However, for the ordering $(X_2, X_1)$, the reward earned by the player has an expected value $\epsilon$ whether or not $X_2$ is accepted, yielding only a $2$-approximation as $\epsilon\to 0$. On the other hand, for the ordering $(X_1,X_2)$, an expected reward that is the same as the prophet's can be achieved by the following strategy: probe $X_1$, stop if $X_1=1$, otherwise probe $X_2$. Thus, in this example, the best ordering is better than the worst-case ordering by a factor of $2$ and the best ordering attains the same value as the prophet. Furthermore, in this example, choosing a random ordering results in an expected reward of $\approx \frac{3}{2} \epsilon$; and therefore, the best ordering is better than the random ordering by a factor of $4/3$.
These observations motivate our investigation into optimal orderings.
\blue{The optimal ordering problem has been previously studied in the mathematics and statistics literature in the 1980s (e.g., \citet{gilat1987best}, \citet{hill1985selection}, and \citet{hill1983prophet}). However, the focus there has been on analytically characterizing the optimal order for some specially structured cases (like Bernoulli and exponential distributions). Our focus is on understanding the computational complexity and devising tractable algorithms. One difficulty in such a study is that the nature of this problem changes significantly depending on the type of distributions considered. For example, when distributions are Bernoulli or exponential, the optimal ordering can be found analytically \citep{hill1985selection}, but the problem remains nontrivial for uniform distributions and, as we show in this paper, even for distributions with very small support.}
\blue{Unlike the fixed ordering case, the problem of finding an optimal ordering for optimal stopping cannot be easily solved in polynomial time by dynamic programming. The ordering problem is an instance of the more general class of {\it stochastic dynamic programs}. Recently, \citet{fu2018ptas} provided a polynomial time approximation scheme (PTAS) for a class of stochastic dynamic programs under the assumption that all the distributions involved are supported on a constant number of points. The ordering problem studied here can be formulated as a problem in their class of stochastic dynamic programs. In fact, the optimal ordering problem is a special case of what they refer to as the committed Pandora's box problem, a variant of the Pandora's box problem \cite{weitzman1979optimal}.
}
\blue{In this work, we delve deeper into the optimal ordering problem when the distributions involved have small support. What is the computational hardness of this problem? Can the problem be solved to optimality for distributions with very small support ($2$ or $3$ points)? Does there exist an FPTAS\footnote{Fully Polynomial Time Approximation Scheme}? Given that most interesting counter-examples and lower bounds for prophet inequalities and the impact of ordering (some of which were discussed above) have been shown for distributions with just 2-point support, the problem appears to be nontrivial even in these special cases.}
\noindent \paragraph{\bf Our contributions.}\blue{We show that the optimal ordering problem is NP-hard even under a special case of 3-point distributions where the highest and lowest points of the support are the same for all the distributions. This is surprising, especially since our optimal ordering problem is a special case of the committed Pandora's box problem, which is a slight variant of the Pandora's box problem; for the latter, an efficient optimal adaptive strategy is known for arbitrary distributions \cite{weitzman1979optimal}. In fact, the hardness of the committed Pandora's box problem was not understood before our result, even for arbitrary distributions \cite{fu2018ptas}}.
\blue{Among positive results, we present an FPTAS for a special case of 3-point distributions. We also devise an efficient polynomial time algorithm for finding an optimal ordering in the case of 2-point distributions. Further, we show that in this case, under an optimal ordering the prophet inequality holds with a significantly better approximation factor than that under an arbitrary ordering.}
Our results are summarized as follows:
\begin{itemize}
\item {\bf NP-hardness for 3-point distributions (Theorem \ref{th:hardness} in \S\ref{sec:hardness}).} Through a reduction from the \textsc{Subset Product} problem, we show that the problem of finding an optimal ordering is NP-hard even when each random variable $X_i$ is restricted to be a 3-point distribution with support on $\{0,m_i,1\}$ for some $m_i \in (0,1)$, and \blue{$E[X_i|X_i > 0] = E[X_j|X_j>0]$ for all $i, j$}.
\item {\bf Optimal ordering for 2-point distributions (Theorem \ref{th:Two-Point_ordering} in \S\ref{sec:two-point-ordering}).} We show that there exists a simple quadratic time algorithm for finding an optimal ordering in the 2-point case.
\item {\bf New prophet inequality for 2-point distributions (Theorem \ref{th:prophet} in \S\ref{sec:prophet}).}
We prove that given any set of variables with 2-point distributions, under the {\it optimal ordering}, the prophet inequality holds with a much improved factor of $1.25$ as compared to $2$ for an arbitrary ordering. And further, our prophet inequality is tight for 2-point distributions. This illustrates the significance of the ability to choose an ordering.
\item {\bf FPTAS for 3-point distributions (Theorem \ref{th:FPTAS-different} in \S\ref{sec:fptas}).} We provide an FPTAS for the optimal ordering problem for the case when each random variable $X_i, i=1,\ldots, n$ has a three-point distribution with support on $\{a_i,m_i,1\}$ for some $a_i, m_i\in [0,1]$.
\end{itemize}
\subsection{Related Work}
Our work builds on the large body of work, starting from the early work on the classical prophet inequality, on finding an optimal ordering for optimal stopping, and on Pandora's box problem, to more recent work on the prophet secretary problem and variations. We briefly survey this literature and position our contributions in context.
\noindent \textbf{Pandora's box problem and stochastic dynamic programs.} The optimal stopping problem considered here is similar in spirit to a well-studied---but substantially easier---problem called ``Pandora's box problem''~\cite{weitzman1979optimal}. As in our model, there are $n$ random variables $X_1, X_2, \ldots, X_n$ whose distributions are known to the decision maker. Also as in our model, the decision maker is free to probe any random variable at any stage. Unlike in our model, however, in the Pandora's box problem, the decision maker is allowed to choose the value of {\em any} random variable that has been probed; and the feature that makes that problem non-trivial is that a random variable can be probed only at a (known) cost. Weitzman~\cite{weitzman1979optimal} constructs an ``index'' policy for that model and shows that such a policy is optimal; these indices are closely related to Gittins indices for the famous multi-armed bandit problem~\cite{gittins2011multi}. \blue{Our problem is more directly related to a variant called the {\it Committed Pandora's box problem} \cite{fu2018ptas}. As in our setting, the decision maker there is committed to choosing the value of only the last probed random variable. Committed Pandora's box can be formulated as a problem in the class of stochastic dynamic programs studied in \cite{fu2018ptas}, where the authors provide a general PTAS for any such problem when the distributions have support on a constant number of points. Our hardness result for $3$-point distributions therefore provides a very strong computational hardness result for such stochastic dynamic programs.}
\noindent \textbf{Optimal Ordering for optimal stopping:} As mentioned earlier, the problem we consider was studied earlier by Hill~\cite{hill1983prophet} and Hill \& Hordijk~\cite{hill1985selection}. In \cite{hill1985selection}, the authors provide simple ordering rules for some families of random variables; this includes the case when every random variable $X_i$ is uniformly distributed between $0$ and some positive number $\alpha_i$, and some very specific cases of two-point distributions. Our result on optimal order for general two-point distributions requires significantly more work, and will reduce to their results in the specific cases studied there.
More importantly, these earlier papers~\cite{hill1983prophet, hill1985selection} also give examples for which simple rules of thumb---ordering based on mean or variance; stochastic ordering, assuming the variables are all stochastically ordered---do not work. Our NP-hardness result suggests that such heuristic rules are unlikely to be optimal.
\noindent \textbf{Prophet Inequalities:} The work of Krengel and Sucheston~\cite{krengel1977semiamarts} generated a lot of interest in developing prophet inequalities and in obtaining simpler proofs; an important contribution is the work of Samuel-Cahn~\cite{samuel1984comparison}, who provided a simple rule that achieves the same approximation ratio. A good summary of this early work on classic prophet inequalities is the survey of Hill and Kertz~\cite{hill1992survey}. The work of Hajiaghayi et al.~\cite{hajiaghayi2007automated} and Chawla et al.~\cite{chawla2010multi} led to several novel applications of these ideas in designing mechanisms with provably good guarantees on social welfare and revenue. Since these papers, the community has generalized and extended the classical model to richer feasibility domains~\cite{kleinberg2012matroid, feldman2014simple, rubinstein2016beyond, rubinstein2017combinatorial, alaei2014bayesian, ehsani2018prophet, dutting2017prophet}. A good overview of this line of work is the survey paper of Lucier~\cite{lucier2017economic}.
\noindent \textbf{Prophet Secretary Problem:} In addition to extending the classical prophet inequality, researchers have also developed new models to better understand the role of the various assumptions of the basic model. As an example, one can ask if the approximation ratio can be improved if the order in which the random variables are drawn is itself chosen uniformly at random. This model, dubbed the {\em prophet secretary} problem, has been explored actively since its introduction by Esfandiari et al.~\cite{esfandiari2017prophet}, who showed that an improved approximation ratio of $e/(e-1) \approx 1.58$ can be achieved. They achieved this by using a sequence of non-increasing thresholds on the random variables: in such a policy, we accept the $j$th random variable if its value exceeds the threshold $T_j$, regardless of the identity of the random variable being observed. Subsequent work by Correa et al.~\cite{correa2017posted} showed that the same result can be achieved by threshold policies in which the thresholds depend only on the random variable being observed, but not on when it is observed. Somewhat surprisingly, in very recent work, Ehsani et al.~\cite{ehsani2018prophet} show that one can recover this bound using a {\em single} threshold combined with a carefully chosen (randomized) tie-breaking rule. That the bound of $e/(e-1) \approx 1.58$ is not optimal has been demonstrated in a series of papers, first by Azar et al.~\cite{azar2018prophet}, who improved it to approximately $1.576$, and then by Correa et al.~\cite{correa2019prophet}, who improved it to approximately $1.503$. Interestingly, Correa et al.~\cite{correa2019prophet} also prove that no algorithm can achieve an approximation ratio better than $(\sqrt3 + 1)/2 \approx 1.366$, even for the case of 2-point distributions.
\noindent \textbf{Order Selection:} \label{order-secretary intro} A few recent papers address the order selection version of the prophet inequality problem as well. The focus is not on finding an optimal ordering of the random variables, but on establishing (improved) bounds on the performance of an optimal ordering relative to that of a prophet. For example, Yan~\cite{yan2011mechanism} proves a bound of $e/(e-1)$ for this problem and later \cite{beyhaghi2018improved} improves the bound to $1.528$. Note that any bound on the prophet secretary problem automatically carries over to this case. Interestingly, our approximation ratio of $1.25$ for the case of 2-point distributions {\em cannot} be achieved for the prophet secretary version of the problem. This is implied by our simple example presented in the introduction.
\noindent \textbf{IID Instances:} We close by mentioning the recent developments on prophet inequalities for the case in which all the random variables are drawn from the same distribution. Note that order selection is irrelevant in this case, so that all three versions of the classic problem---worst case order, random order, and best case order---coincide. Hill and Kertz~\cite{hill1982comparisons} constructed a worst-case family of instances for each $n$ and later showed that the approximation ratio for these instances is at least $\approx 1.342$; this has been shown to be tight in a recent paper of Correa et al.~\cite{correa2017posted}. Because this is smaller than the worst-case bound for the prophet secretary model, we now know a separation between these two models. Interestingly, we do not know if the worst-case instances for the order selection problem are i.i.d. instances.
\section{Preliminaries}\label{Preliminaries}
Throughout this paper, we assume that all random variables are non-negative and independent. In addition, when we talk about an ordering $(X_1,\ldots, X_n)$, we mean that if $i<j$, then $X_i$ arrives before $X_j$.\\
The following definitions and lemma are from \cite{hill1983prophet} and \cite{hill1985selection}. We assume that all random variables are integrable throughout this paper.
\begin{definition}
Let $X_1,\ldots,X_n$ be independent random variables and let $(X_1,X_2,\ldots,X_n)$ be an ordering of these random variables. Then $V(X_1,\ldots, X_n) = \sup E[X_t]$, where the supremum is taken over all stopping rules $t$ for $X_1,\ldots,X_n$.
\end{definition}
We allow $X_i$ to be degenerate random variables. For example, sometimes we will have an ordering $(X, c)$ where $c\in\mathbb{R}$. This merely means that first we observe $X$, and if we reject $X$, we are offered a reward of $c$. We will also frequently write $V(X, c)$ where $c$ is just treated as a degenerate random variable.
\begin{definition}
Let the \textbf{worth} of random variables $\mathcal{X} = \{X_1,\ldots,X_n\}$, denoted as $W(X_1,\ldots,X_n)$ or $W(\mathcal{X})$, represent the optimal expected reward for a player free to choose the order of the random variables.
\end{definition}
\begin{lemma}\label{optimal_thresholds}
Let $X_1,\ldots, X_n$ be independent random variables. Then
\begin{enumerate}
\item $V(X_j,\ldots, X_n) = E[\max\{X_j, V(X_{j+1},\ldots, X_n)\}]$ for $j=1\ldots n-1$
\item if $t^*$ is the stop rule defined by $t^*=j \Leftrightarrow \{t^* > j-1 \text{ and } X_j\geq V(X_{j+1}, \ldots, X_n)\}$, then $E[X_{t^*}] = V(X_1,\ldots,X_n)$.
\end{enumerate}
\end{lemma}
Lemma \ref{optimal_thresholds} states that when the ordering is fixed, the optimal stopping criterion is obtained by backward recursion. Therefore, the difficulty lies in finding the optimal ordering and not in deciding when to stop.
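
As an illustration, the backward recursion of Lemma \ref{optimal_thresholds} for finitely supported variables takes only a few lines; the Python sketch below (identifiers are ours) returns both the value $V(X_1,\ldots,X_n)$ and the acceptance thresholds $V(X_{j+1},\ldots,X_n)$.
\begin{verbatim}
def stopping_value(order):
    """order[j] is a dict {value: probability} describing the (j+1)-th variable.
    Returns (V, thresholds), where thresholds[j] is the value of the remaining suffix:
    the optimal policy accepts the (j+1)-th variable iff its realization is >= thresholds[j]."""
    v = 0.0
    thresholds = []
    for dist in reversed(order):
        thresholds.append(v)
        v = sum(p * max(x, v) for x, p in dist.items())
    thresholds.reverse()
    return v, thresholds

# Example from the introduction with eps = 0.1: X1 in {0, 1} w.p. {0.9, 0.1}, X2 = 0.1 surely.
V, thr = stopping_value([{0.0: 0.9, 1.0: 0.1}, {0.1: 1.0}])
# V = 0.1 * 1 + 0.9 * 0.1 = 0.19, which equals the prophet's reward 2*eps - eps^2 here.
\end{verbatim}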
Here we prove a few useful lemmas that we will use throughout the paper.
\begin{lemma}\label{monotonicity}
\textbf{Monotonicity:} Fix a random variable $X$. If $c_1\geq c_2$, then $V(X, c_1)\geq V(X, c_2)$.
\end{lemma}
\begin{proof}
Suppose $c_1 \geq c_2$. It is easy to see that whatever stopping rule we use to attain $V(X, c_2)$ can also be used on the order $(X, c_1)$, and this policy will attain at least $V(X, c_2)$. Thus, $V(X, c_1)\geq V(X, c_2)$.
\end{proof}
The following two lemmas are useful for simplifying random variables, as they imply that neither additive nor multiplicative scaling changes the optimal ordering.
\begin{lemma}\label{scaling}
\textbf{Additive Scaling:} Let $X_1,\ldots X_n$ be non-negative random variables and let $c\in\mathbb{R}$ be a constant such that $Y_i:= X_i + c$ is still a non-negative random variable. Let $\sigma$ be a permutation. Then $V(Y_{\sigma(1)}, \ldots, Y_{\sigma(n)}) = V(X_{\sigma(1)}, \ldots, X_{\sigma(n)}) + c$.
\end{lemma}
\begin{proof}
We proceed by induction on the number of variables. For one variable, $V(Y) = E[Y] = E[X + c] = V(X) + c$.
We denote for convenience $c':= V(X_{\sigma(2)},\ldots, X_{\sigma(k+1)})$. For the inductive step,
\begin{eqnarray*}
V(Y_{\sigma(1)},\ldots, Y_{\sigma(k+1)}) &=& V(Y_{\sigma(1)}, V(Y_{\sigma(2)},\ldots, Y_{\sigma(k+1)}))\\
&=& E[\max(X_{\sigma(1)}+c, \ c'+c)]\\
&=& E[\max(X_{\sigma(1)}, c')]+c\\
&=& V(X_{\sigma(1)},\ldots, X_{\sigma(k+1)})+c,
\end{eqnarray*}
where the second equality uses the induction hypothesis $V(Y_{\sigma(2)},\ldots, Y_{\sigma(k+1)})=c'+c$.
\end{proof}
narrow linewidth and high-fidelity, offset-free entanglement simultaneously. Compared to an earlier version of the source \cite{Arenskoetter2017,KuceraPhD}, we also optimised the locking stability and the output mode structure. The experimental setup is shown in Fig.~\ref{fig:schema_source} and will be discussed in the following.
In order to address the D$_{5/2}$-P$_{3/2}$ transition in $^{40}$Ca$^+$ at 854\,nm, we use a frequency-stable laser at 427 nm to pump the SPDC process in a periodically poled KTP crystal with type-II phase-matching. The polarized pump light is split on a non-polarizing 50:50 beam splitter (BS), and the two beams are coupled into the non-linear crystal in opposite directions. The crystal is placed inside a signal- and idler-resonant 3-mirror ring resonator (FSR$_H=2\pi\cdot1.85\,$GHz, FSR$_V=2\pi\cdot 1.83\,$GHz). Down-converted photons from both pump directions leave the resonator at the same outcoupling mirror, $M_\text{out}$, under different angles. The polarizations of the signal and idler photons in one of the output arms are interchanged by a half-wave plate. Then, the photons of both output arms are overlapped on a polarizing beam splitter (PBS). This arrangement erases all distinguishability between the two photons of a pair. Moreover, it avoids the background of unsplit pairs that is inherent in single-direction SPDC generation. The photonic 2-qubit state at 854\,nm produced by the source is a polarization Bell state \cite{Braunstein1992}
\begin{equation}
\ket{\Psi} = \frac{1}{\sqrt{2}}\left(\ket{H_\text{A} V_\text{B}}-e^{i\varphi}\ket{V_\text{A} H_\text{B}}\right)
\label{eq:photonState}
\end{equation}
where $A$ and $B$ refer to the output ports of the PBS.
As shown in Fig.~\ref{fig:schema_source}, we use a chopped locking beam, injected via a glass plate, to stabilize one mode of the resonator (the H-polarized mode) to the ion transition by the Pound-Drever-Hall (PDH) technique. The frequency of the corresponding V-polarized mode is shifted by $\sim 480$\,MHz because of the birefringence of the crystal. Due to the conversion bandwidth of the crystal (200\,GHz) and the small polarization dispersion of the resonator, several pairs of modes at different frequencies exhibit cavity-enhanced SPDC (see Supplement). The ion-resonant mode and its partner mode are selected by two monolithic Fabry-Pérot filters (FPI), one in each output. The filters are described in more detail in the Methods section.
The SHG light that is produced by the locking beam is detected at the BS that splits the pump beam and is used for the stabilization of the Mach-Zehnder-type interferometer formed between this BS and the PBS that combines the SPDC output arms. By tuning the phase of this interferometer, we tailor the phase $\varphi$ of the photonic 2-qubit state, Eq.~\eqref{eq:photonState}. For the experiments described below, we set this phase to $\varphi=270^{\circ}$. A more detailed account of the relation between $\varphi$ and the interferometer phase is given in the Supplement.
With the described setup we reach a generated photon pair rate of $R_{\text{pair}}= 4.7\cdot10^4\frac{\text{pairs}}{\text{s mW}}\times P_{\text{pump}}$. In the experiments below, we used a maximum pump power of $P_{\text{pump}}=20$\,mW.
The SPDC photons are available, after spectral filtering and fiber coupling, with 31\% (21.4\%) efficiency in port A (B).
For analysis of the photonic polarization state, we use a projection setup consisting of half-wave plate, quarter-wave plate, and polarizer (film polarizer or Wollaston prism) in each output arm. As single-photon detectors we use two APDs (\textit{Excelitas Technologies}) with approximately $10~\text{s}^{-1}$ dark counts. Detected photons are time-tagged, and time-resolved coincidence functions between the two arms are evaluated for various combinations of polarization settings.
\begin{figure}
\centering
\includegraphics[scale=1]{Plots/photonSBR}
\caption{\textbf{Photon wave packet}. Measured signal-idler coincidence function with a bin size of 1 ns. The red dashed line is a double exponential fit to the data.}
\label{fig:photonSBR}
\end{figure}
For the characterization of the temporal shape of the photon wavepacket, we measure the polarization-indiscriminate coincidence function between the photons of the two output arms \cite{Glauber1963}. Fig.~\ref{fig:photonSBR} shows the result, measured by summing the coincidences for the four settings HV, VH, HH, and VV. The decay times (or wave packet widths) of the two photons differ slightly because of different loss of the two polarization modes in the cavity: we get values of $\tau_H=15.78$\,ns for the H-polarized photons and $\tau_V=12.94$\,ns for the V-polarized photons.
An important figure of merit is the signal-to-background ratio (SBR) of the coincidence functions. The grey-shaded area in Fig.~\ref{fig:photonSBR}, representing the coincidences in the time-window $\Delta t$ around zero, is taken as the signal ($S$). The background ($B$) is determined using the same time window size but at a delay $\tau>150$\,ns (red shaded area). For our setup the background is dominated by accidental coincidences, originating from the temporal overlap of the generated photons.
Theoretically, the number of polarization-indiscriminate coincidence counts in a time interval $T$ and for a given pair rate $R_\text{pair}$ is given by
\begin{equation}
S = \eta_1 \eta_2 R_\textrm{pair} (1-e^{-\Delta t/2\tau_\text{mean}}) T~,
\label{eq:Signal}
\end{equation}
where $\eta_1$ and $\eta_2$ are the detection efficiencies for the two outputs, and $\tau_\text{mean}=\frac{1}{2}(\tau_H+\tau_V)$ is the mean wavepacket width of the photons. The theoretical value for the background is
\begin{equation}
B = \eta_1 \eta_2 R_\text{pair}^2 \Delta t\,T~.
\label{eq:Bgd}
\end{equation}
Hence the SBR of the source is expected to be given by \cite{KuceraPhD}
\begin{equation}
\text{SBR} = \frac{S}{B} = \frac{1-e^{-\Delta t/2 \tau_\text{mean}}} {R_\text{pair} \Delta t}
\label{eq:SBR}
\end{equation}
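As a quick numerical illustration of Eqs.~\eqref{eq:Signal}--\eqref{eq:SBR} (not part of the experimental analysis), the following short Python sketch evaluates the expected SBR for the pair rate and wavepacket widths quoted above, for the two window choices used later in Fig.~\ref{fig:FidelitySBR}; all numbers are taken from the text, and the detection efficiencies cancel in the ratio.
\begin{verbatim}
import numpy as np

# Values quoted in the text (illustrative only).
R_pair   = 4.7e4 * 20                     # generated pairs/s at 20 mW pump power
tau_mean = 0.5 * (15.78e-9 + 12.94e-9)    # mean wavepacket width in seconds

def sbr(dt):
    """Expected signal-to-background ratio, Eq. (eq:SBR), for a window dt (s)."""
    return (1.0 - np.exp(-dt / (2.0 * tau_mean))) / (R_pair * dt)

for dt in (1.5 * tau_mean, 5.0 * tau_mean):
    print(f"dt = {dt*1e9:5.1f} ns  ->  SBR ~ {sbr(dt):.1f}")
\end{verbatim}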
Fig.~\ref{fig:powerSBR} shows a comparison between this theoretical expression and our measurement, for fixed pump power of 20\,mW and variable coincidence window $\Delta t$. When $\Delta t$ is similar to the 1/e width of the photon wavepacket, we reach an SBR of $\sim$30, whereas for a coincidence window that covers 99.97\% of the photon, we find an SBR of $\sim$10.
Fig.~\ref{fig:powerSBR} also shows how the coincidence rate varies with $\Delta t$. From fitting Eq.~(\ref{eq:Signal}) to the data, with $\eta_{1,2}$ independently measured, we derive the pair rate $R_\text{pair}$ as mentioned before.
The SBR determines the maximally reachable fidelity of the photon-pair state with respect to the ideal Bell state of Eq.~\eqref{eq:photonState}. The fidelity of a measured (i.e., reconstructed) density matrix $\rho$ with respect to a given state $\ket{\Psi}$ is calculated as
\begin{equation}
F = \bra{\Psi} \rho \ket{\Psi} = \frac{\frac{1}{4} + F_{\text{w/o BG}} \cdot \text{SBR}} {1+\text{SBR}}
\label{eq:theoFideltiy}
\end{equation}
where $F_{\text{w/o BG}}$ corresponds to the fidelity of the photonic state when the entire background is subtracted \cite{KuceraPhD}. This formula is used below for the theoretical curves in Fig.~\ref{fig:FidelitySBR}.
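For illustration, the following minimal Python sketch evaluates Eq.~\eqref{eq:theoFideltiy} for a given SBR; the value assumed for $F_{\text{w/o BG}}$ is a placeholder, not a measured quantity.
\begin{verbatim}
def max_fidelity(sbr, f_no_bg):
    """Eq. (eq:theoFideltiy): fidelity limited by accidental coincidences."""
    return (0.25 + f_no_bg * sbr) / (1.0 + sbr)

# Example: an SBR of 30 and an (assumed) background-free fidelity of 0.99
print(max_fidelity(30.0, 0.99))   # ~0.966
\end{verbatim}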
\begin{figure}
\centering
\includegraphics[scale=1]{Plots/tauSBR20mW}
\caption{\textbf{Signal-to-background ratio and coincidence rate of the photon pair source}. The SBR and the coincidence rate are plotted as a function of the coincidence time window $\Delta t$, for a pump power of 20\,mW. The error bars are calculated from the Poissonian noise of the measured coincidences. The dashed lines show the theoretical calculations according to Eq.~\eqref{eq:SBR} for the SBR and Eq.~\eqref{eq:Signal} for the coincidence rate, with $R_\text{pair}$ as the only fit parameter.}
\label{fig:powerSBR}
\end{figure}
\vspace*{.5cm}
\subsection{Polarization Preserving Frequency Conversion}
\begin{figure*}
\begin{subfigure}[c]{0.45\textwidth}
\caption{}
\includegraphics[width=\textwidth]{Figures/Aufbau_Konverter_final.pdf}
\label{fig:Schema_Konverter}
\end{subfigure}
\begin{subfigure}[c]{0.45\textwidth}
\caption{}
\includegraphics[width=\textwidth]{Plots/conversionEfficiency}
\label{fig:Konversionseffizienz}
\end{subfigure}
\caption{\textbf{a) Schematic quantum frequency conversion setup.} Details are found in the main text. TDFL: Thulium-doped fiber laser, LPF: low-pass filter, BPF: band-pass filter, VBG: volume-Bragg grating, DM: dichroic mirror, HWP: half-wave plate, SNSPD: superconducting nanowire single photon detector; \textbf{b) Device efficiency of the converter for different pump powers and input polarizations.} The maximum efficiencies of the H- and V-polarizations are slightly different.}
\end{figure*}
The setup for the polarization-preserving frequency conversion (Fig.~\ref{fig:Schema_Konverter}) is located in a second lab and is connected to the photon-pair source via 90\,m of fiber. We use difference frequency generation (DFG) in a nonlinear PPLN waveguide to convert the 854\,nm photons to the telecom C-band at 1550\,nm \cite{Krutyanskiy2017}. As the DFG efficiency in the waveguide is strongly polarization dependent, the setup is realized in a Sagnac configuration \cite{Ikuta2018, vanLeent2020}. First, the 854\,nm signal passes an input bandpass filter, which prevents background photons from being coupled back into the source setup, and is then overlapped with the strong, diagonally polarized pump field at 1904\,nm on a dichroic mirror. Both fields are then split into their H- and V-polarization components on a PBS. To achieve polarization-preserving operation, the H-components are rotated to V by an achromatic waveplate inside the Sagnac loop (HWP). All beams are now V-polarized for optimum conversion and are coupled in a counterpropagating way into the waveguide. After exiting the waveguide, the converted 1550-nm light from the original V-component also passes the achromatic waveplate and is thereby rotated to H. Finally, the two converted polarization components are coherently overlapped on the PBS, which closes the Sagnac loop. The light then passes the dichroic mirror a second time, where unconverted (or back-converted) light at 854\,nm is split off. The same happens to SHG light from the pump that is parasitically generated in the waveguide; it is then blocked by the input bandpass filter. The converted light, which is transmitted through the first DM together with the pump light, is separated from the pump by a second DM and coupled into a telecom fiber that directs it to the detection setup.
Apart from enabling polarization-independent conversion, an additional advantage of the Sagnac configuration is that all fields have the same optical path, such that no phase difference between the split polarization components occurs, and hence no active phase stabilization is needed. At the same time, the configuration facilitates bi-directional conversion without compromising the conversion efficiency. An experimental demonstration of bi-directional operation will be presented below.
The conversion-induced background in this process mainly originates from anti-Stokes Raman scattering of the pump field \cite{Zaske2011, Krutyanskiy2017, Kuo2018}. As this is spectrally very broad compared to the converted signal, we can significantly reduce the background count rate with a narrowband filtering stage. By combining a bandpass filter (transmission bandwidth $\Delta \nu_{\text{BPF}} = 1500$\,GHz), a volume Bragg grating (VBG, $\Delta\nu_{\text{VBG}}=25$\,GHz) and a Fabry-Pérot etalon (FSR\,=\,12.5\,GHz, $\Delta\nu_{\text{FPE}}\,=\,250$\,MHz), we achieve broadband suppression outside a 250\,MHz transmission bandwidth. To ensure high transmission through the VBG and the etalon, a clean Gaussian spatial mode is needed; therefore, these filters are included in the detection setup, which is separated from the conversion setup by 10\,m of fiber (or, later, the fiber link) acting as a spatial filter. With this filtering stage, the total conversion-induced background count rate is 24\,s$^{-1}$.
The device efficiency including all losses in optical components and spectral filtering for H and V input polarizations and different pump powers in the corresponding converter arm is shown in Fig. \ref{fig:Konversionseffizienz}. The measurement agrees well with the theoretical curve given by $\eta_\text{ext}(P)= \eta_\text{ext,max}\sin^2(\sqrt{\eta_\text{nor}P}L)$ with the normalized power efficiency $\eta_\text{nor}$ and waveguide length $L$ \cite{Roussev2002}. We achieve a maximum external conversion efficiency of $\eta_\text{ext,max}=\,60.1\%\,(57.2\%)$ for H(V)-polarized input at 660\,mW\,(630\,mW) pump power. The difference is explained by
slightly different mode overlaps between signal and pump field in the two arms. Since polarization-preserving operation requires both efficiencies to be equal, the pump power in the H-arm is reduced via the two HWPs in the pump laser arm to 485\,mW to match the 57.2\% conversion efficiency. To verify equal conversion efficiency for arbitrarily polarized input, the process matrix \cite{Chuang2008} was measured with attenuated laser light, resulting in a process fidelity of 99.947(2)\%. Thus the preservation of the input polarization state is confirmed with very high fidelity. Note that the converter rotates the input state by a constant angle of $\pi/2$ due to the achromatic waveplate. This could be compensated by an additional waveplate at the output; however, the polarization is also arbitrarily rotated in the input and output fibers, so we compensate for all rotations together by rotating the projection bases in all measurements, as described in the Methods section. \\
Further details on the conversion setup as well as the background and process fidelity measurements are found in the supplement.
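As a crude consistency check of the pump-power dependence $\eta_\text{ext}(P)= \eta_\text{ext,max}\sin^2(\sqrt{\eta_\text{nor}P}L)$, the following Python sketch fixes the product $\sqrt{\eta_\text{nor}}L$ (not separately known here) solely from the position of the measured maximum and evaluates the curve at the reduced H-arm pump power. It is an illustration with values copied from the text, not the fit used for Fig.~\ref{fig:Konversionseffizienz}.
\begin{verbatim}
import numpy as np

eta_max = 0.601          # maximum device efficiency for H-polarized input
P_opt   = 0.660          # pump power (W) at which this maximum is reached
L       = 1.0            # waveguide length absorbed into the normalization

# Choose eta_nor such that sqrt(eta_nor * P_opt) * L = pi / 2 at the optimum.
eta_nor = (np.pi / (2.0 * L)) ** 2 / P_opt

def eta_ext(P):
    """Device efficiency vs. pump power P (W)."""
    return eta_max * np.sin(np.sqrt(eta_nor * P) * L) ** 2

# Reducing the H-arm pump to 485 mW gives ~57.2%, matching the V-arm maximum.
print(eta_ext(0.485))
\end{verbatim}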
\subsection{Experimental results}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/overview_2}
\caption{\textbf{Overview of the different experimental configurations.} \textbf{a)} The output state is measured directly at the source. \textbf{b)} One of the photons is frequency-converted before its detection. \textbf{c)} 20\,km of fiber is inserted between conversion and detection. \textbf{d)} The 20\,km fiber is terminated with a retro-reflector. The back-reflected photons are back-converted after 40\,km of fiber and are extracted with a 50:50 fiber beamsplitter before their detection. The background colors correspond to the colors of the data points in Fig.~\ref{fig:FidelitySBR}.}
\label{fig:setup_complete}
\end{figure}
To assess the performance of our quantum photonic interface we analyze the photon pair states via quantum state tomography \cite{Kwiat2001} for four different configurations, as sketched in Fig.~\ref{fig:setup_complete}a-d.
In the first configuration (Fig.~\ref{fig:setup_complete}a) and for calibration purposes, we measure the output state of the photon pair source itself, i.e., without conversion. In the next configuration (Fig.~\ref{fig:setup_complete}b), we include the frequency converter in the detuned output arm of the photon pair source (arm A in Fig.~\ref{fig:schema_source}). Tomography is then performed on the pairs of converted and unconverted photons.
In the third configuration (Fig.~\ref{fig:setup_complete}c), we extend the fiber link between converter and detection setup to 20\,km in order to evaluate the influence of fiber transmission.
We emphasize that the transmission of photons at a wavelength of 854\,nm via a suitable single-mode fiber would suffer about 70\,dB of fiber attenuation \cite{Nufern2020}, whereas conversion to 1550\,nm and the use of a low-loss telecom fiber results in a total fiber attenuation of only 3.4\,dB \cite{Corning2020}.
Finally, in the fourth configuration (Fig.~\ref{fig:setup_complete}d) bi-directional conversion is implemented by terminating the 20\,km fiber with a retroreflector. Thus, the telecom photons are converted back to 854\,nm at the second passage of the converter, after 40\,km of fiber transmission. To separate the returning back-converted photons from the outgoing ones in the source lab, we use a 50:50 fiber beamsplitter that reflects 50\% of the back-converted photons to the second tomography setup. Note that in this configuration the converter filtering stage is not used; instead, the FPI filter in the 854\,nm tomography setup provides the spectral filtering, as explained in the Supplement.\\
\begin{figure*}
\begin{subfigure}{0.49\textwidth}
\includegraphics[scale=1]{Plots/fidelity1.5tau.pdf}
\subcaption{Fidelity for a detection window $\Delta t= 1.5\tau_\text{mean}$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[scale=1]{Plots/SBR1.5tau}
\subcaption{SBR for a detection window $\Delta t= 1.5\tau_\text{mean}$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[scale=1]{Plots/fidelity5tau.pdf}
\subcaption{Fidelity for a detection window $\Delta t= 5\tau_\text{mean}$}
\label{fig:Fidelity1.5tau}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[scale=1]{Plots/SBR5tau}
\subcaption{SBR for a detection window $\Delta t= 5\tau_\text{mean}$}
\label{fig:SBR1.5tau}
\end{subfigure}
\caption{\textbf{Fidelity and SBR vs.\ SPDC pump power for the four configurations of Fig.~\ref{fig:setup_complete}.} a) and b) show the measurement results for a detection time window of $1.5\tau_\text{mean}$, which corresponds to 78\% of the photon temporal shape. In a) the MLE results are represented by the data points, the dot-dashed line shows the mean background-corrected fidelity, and the dashed line shows the theoretical expectation according to Eq.~\eqref{eq:theoFideltiy}. The uncertainties are derived from a simulation; see the Methods section for further details.
b) shows the comparison of the SBR for the four different measurement settings. The dashed line is calculated for the given pair rate according to Eq.~\eqref{eq:SBR}. The error bars are calculated by including the $\sqrt{N}$-noise of the measured coincidences \cite{Altepeter2005}.
c) and d) show the results when choosing a larger detection window of $5\tau_\text{mean}$ for the evaluation.
}
\label{fig:FidelitySBR}
\end{figure*}
In all four configurations, we use source pump powers of 5\,mW, 10\,mW, 15\,mW and 20\,mW. The detected coincidence rates at 20\,mW pump power are 4168\,s$^{-1}$ for setup a), 974\,s$^{-1}$ for setup b), 428\,s$^{-1}$ for setup c) and 40\,s$^{-1}$ for setup d) (see also the Supplement). From the time-resolved coincidences in the different detection bases, the density matrix is reconstructed by applying an iterative maximum likelihood estimation (MLE) algorithm \cite{Hradil1997}. From the density matrix, we then calculate all measures that characterize the quantum state, such as fidelity and purity. For each configuration we show a representative density matrix in the Supplement. Before performing the tomography, the polarization rotations in both arms, including the converter and the fiber link, were calibrated; details on the procedure are found in the Methods section.
The results of the four configurations are summarized in Fig.~\ref{fig:FidelitySBR}.
For all configurations and source pump powers, the fidelities with respect to the maximally entangled state (\ref{eq:photonState}) are above 95\% with and without background correction, for a detection window of $\Delta t = 1.5\,\tau_\text{mean}$ (subfigure a). Thus the entanglement is well-preserved for every scenario of Fig.~\ref{fig:setup_complete}.
For fixed source pump power, the fidelities of the individual configurations differ only slightly outside the error bars, which confirms the high process fidelity of the converter and the low detrimental effect of the fiber link.
With increasing source pump power, the fidelity decreases, which is expected according to eq.~(\ref{eq:theoFideltiy}). The black dashed line shows the theoretically achievable fidelity for the given SBR, considering only accidental coincidences. This line is expected to be an upper bound to the measured values, as additional effects lead to further reductions: (i) errors in the calibration of the polarisation rotation compensation; (ii) drifts in the polarization rotation of the setup away from the calibration point during the measurements; (iii) fluctuations of the photonic state phase resulting from power fluctuations of the locking signal.
The SBR for each individual measurement is shown in Fig.~\ref{fig:FidelitySBR}b. Here the error bars are calculated by taking Poissonian noise into account. As expected from eq.~(\ref{eq:SBR}), the SBR decreases for increasing pump power of the pair source. Again we only see minor differences between the four configurations, as the conversion-induced background rate is negligible compared to the pair rate and source background rate in the relevant time window. For the back-conversion part, the SBR is consistently lower, which is explained by the use of the FPI filter instead of the converter filtering stage.
When choosing a larger detection window, fidelities and SBR decrease due to the temporal overlap of the photons. The results for a detection window of $\Delta t=5\,\tau_\text{mean}$, corresponding to 91\% of the photon, are shown in Fig.~\ref{fig:FidelitySBR}c-d. Due to the lower SBR, here the fidelities are lower by a few percent, while the detected pair rate is increased by a factor of 1.7.
\section{Discussion}
In summary, we have presented and characterized a combined system of an entangled photon pair source and a bi-directional quantum frequency converter, and have used it for entanglement distribution over a fiber link. Pair rate
and entanglement fidelity of the source are near the theoretical optimum, while efficiency, polarization fidelity and noise performance of the converter are among the best reported values \cite{Krutyanskiy2017,Ikuta2018,Kaiser2019}. The system is tailored as interface to the $^{40}$Ca$^+$ single-ion quantum memory.
The presented measurements demonstrate the preservation of photon entanglement with high fidelity through up to 2 conversion steps and distribution over up to 40\,km of fiber. The high-efficiency bi-directional quantum frequency conversion adds low background to the photon pair source and thus enables a proof-of-principle demonstration of a quantum photonic interface suitable to connect remote quantum network nodes based on $^{40}$Ca$^+$ single-ion memories.
Several future improvements are readily identified. The coincidence rates can be enhanced by replacing lossy fiber-fiber couplings by direct splices and by replacing the fiber beam splitter by an optical circulator. Given the low unconditional background rate of the QFC process, we can further increase photon count rates by lowering the finesse of the monolithic Fabry-Pérot filters, which results in higher transmission. The conversion fidelity can be improved to near the theoretical maximum by repeating the polarization calibration more frequently in a fully automated procedure. Furthermore, a redesign of the source cavity itself would make it possible to shorten the photons to 8\,ns, which corresponds to the transition linewidth of 23\,MHz between the D$_{5/2}$ and P$_{3/2}$ states. Owing to the reduced overlap between subsequent photons, the SBR and the fidelity would improve as well.
Recent developments in quantum networking operations with single ions \cite{Krutyanskiy2022entanglement,Krutyanskiy2022telecom,Drmota2022}, together with progress in ion-trap quantum processors \cite{Pogorelov2021,Noel2022} confirm the importance and the potential of quantum photonic interfaces for quantum information technologies. The interface we demonstrate here enables extension towards further fundamental operations in trapped-ion-based quantum networks. In particular, bi-directional QFC allows for teleportation between remote quantum memories, or for their entanglement via direct exchange of a photon (see e.g. \cite{Luo2022}), employing heralded absorption \cite{KuceraPhD, Kurz2016} of a back-converted telecom photon.
\section{Methods}
\subsection*{State characterization}
We perform full state tomography to reconstruct the photonic two-qubit state. We project each photon onto one of the six polarization basis states (horizontal, vertical, diagonal, antidiagonal, left circular, or right circular), which results in 36 possible measurement combinations. In order to reconstruct the 2-qubit density matrix, 16 combinations are sufficient. We therefore measure two-photon correlations in 16 independent polarization settings \cite{James2001} and reconstruct the density matrix by applying a maximum likelihood algorithm to the count rates inside the detection window \cite{Hradil1997}. We infer from the density matrix all characteristic measures such as purity and fidelity.
For calculating the error bars, we use a Monte Carlo simulation \cite{Altepeter2005} where we run the algorithm repeatedly on randomly generated, Poisson-distributed coincidence rates based on the measured rates. From the distribution of the resulting fidelities, we estimate the error bars.
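A minimal Python sketch of this resampling procedure is given below; \texttt{reconstruct} stands for the iterative MLE reconstruction and is only a placeholder here, and \texttt{psi} is the target state vector.
\begin{verbatim}
import numpy as np

def fidelity_error_bar(counts, reconstruct, psi, n_trials=200, seed=None):
    """Monte Carlo error bar: resample Poissonian counts, redo the MLE
    reconstruction, and return the spread of the resulting fidelities."""
    rng = np.random.default_rng(seed)
    fids = []
    for _ in range(n_trials):
        resampled = rng.poisson(counts)      # counts per polarization setting
        rho = reconstruct(resampled)         # placeholder for the MLE step
        fids.append(np.real(psi.conj() @ rho @ psi))
    return np.std(fids)
\end{verbatim}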
\subsection*{Polarization bases calibration}
Polarization rotation due to the birefringence of optical components in the setup, in particular the fibers, has to be accounted for to ensure projection onto the correct basis states. We model the rotation in each arm by a unitary rotation matrix $M$ that transforms the input polarization $\lambda_{\text{in}}$ to an output polarization $\lambda_{\text{out}} = M\lambda_{\text{in}}$. Non-unitary effects such as polarization-dependent loss play only a negligible role for the final state fidelity.
By blocking one pump direction of the source, we deterministically generate photons with linear (H) polarization. Additionally, we can insert a quarter-wave plate directly behind the source to rotate the polarization to R. We then perform single-qubit tomography with the detection setups to measure the rotated polarization states. From this we calculate the rotation matrix $M$, and the projection bases are rotated accordingly.
Depending on the experimental configuration (Fig.~\ref{fig:setup_complete}), we measure the rotation matrix more or less frequently during the experiments: for the first two configurations, short fibers are used, thus the polarization is very stable over time and is measured only at the beginning of the experiment. In the last two configurations, although the fiber spool is actively temperature stabilized to minimize polarization drifts, the rotation matrix is measured every 60 minutes.
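One possible way to build the rotation matrix from the two calibration measurements, assuming the single-qubit tomography returns Stokes (Bloch) vectors, is sketched below in Python; this illustrates the idea rather than the exact routine used in the experiment.
\begin{verbatim}
import numpy as np

def rotation_from_calibration(s_H_out, s_R_out):
    """Rotation M on the Poincare sphere mapping the input Stokes vectors
    H -> (1,0,0) and R -> (0,0,1) onto the measured output vectors."""
    b1 = np.asarray(s_H_out, float)
    b1 /= np.linalg.norm(b1)
    b3 = np.asarray(s_R_out, float)
    b3 -= (b3 @ b1) * b1                  # orthogonalize against b1 (noise)
    b3 /= np.linalg.norm(b3)
    b2 = np.cross(b3, b1)                 # image of D, since e3 x e1 = e2
    return np.column_stack([b1, b2, b3])  # columns = images of e1, e2, e3
\end{verbatim}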
\subsection*{Fabry-Pérot filter}
The Fabry-Pérot filters which we use for filtering the 854\,nm photons to a single frequency mode are built from single 2\,mm thick N-BK7 lenses. Both sides have a high-reflectivity coating ($R=0.9935$). This results in a finesse of 481, an FSR of $\sim 50$~GHz and a FWHM of $\Delta\nu\approx104$\,MHz, which is sufficient for filtering a single mode of the SPDC cavity (FSR $\sim 1.84$~GHz). Frequency tuning over a whole FSR is possible by changing the temperature in the range between 20°C and 70°C. The stabilization of the temperature with a precision of 1\,mK (corresponding to 3\,MHz) is sufficient for stable operation; no further active stabilization is needed.
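The quoted filter parameters are mutually consistent, as the following back-of-the-envelope Python check shows; the refractive index of N-BK7 near 854\,nm is assumed here to be $\approx 1.51$.
\begin{verbatim}
import numpy as np

c, R, L, n = 299792458.0, 0.9935, 2e-3, 1.51   # n of N-BK7 at 854 nm (assumed)

finesse = np.pi * np.sqrt(R) / (1.0 - R)       # ~480
fsr     = c / (2.0 * n * L)                    # ~50 GHz
fwhm    = fsr / finesse                        # ~100 MHz

print(f"finesse {finesse:.0f}, FSR {fsr/1e9:.1f} GHz, FWHM {fwhm/1e6:.0f} MHz")
\end{verbatim}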
\section{Data availability}
The underlying data for this manuscript is openly available in Zenodo at \url{https://doi.org/10.5281/zenodo.7313581}.
The evaluation algorithms are available from the corresponding author upon reasonable request.
\section{References}
\subsection*{Supplementary Methods 1: Source}
\subsubsection*{Further details to the experimental setup}
In the following we give further details on the setup of the source. The schematic of the source is shown in Figure 1. The 427\,nm pump light is polarized by a polarizing beam splitter (PBS) and is rotated to the correct polarization by a $\lambda/2$ wave plate (HWP). This ensures that most of the light is used to generate photon pairs, because only one polarization is converted. After that, the pump light is split on a non-polarizing beam splitter (BS). The reflected and the transmitted beams pump the resonator from opposite sides.
Photons generated from the transmitted pump beam leave the resonator at mirror $M_{out}$ at 0° and are separated from the pump by a dichroic mirror (DM). After that, we send the photons to a PBS.
The photons generated from the other pump beam leave the resonator at 30°, passing two lenses for beam shaping and an additional HWP, which interchanges the polarizations of signal and idler photon.
The output state can then be written as
\begin{equation}
\ket{\Psi_{\text{out}}} = \frac{1}{\sqrt{C_1^2+C_2^2}}(C_1 \ket{H_AV_B}-C_2e^{i\varphi}\ket{V_AH_B})
\label{eq:state}
\end{equation}
with weight factors $C_1$ and $C_2$.
To generate a Bell state, both pump directions need to be balanced in terms of coincidence counts, resulting in $C_1=C_2$. We achieve this balancing with an additional HWP in one of the pump beams, by slightly misaligning the polarization of that beam.
\subsubsection*{Photon spectrum}
The small polarization dispersion of the cavity together with the conversion bandwidth of the crystal ($\sim200$\,GHz) leads to a cluster structure in the produced photon spectrum. This structure was measured by scanning the FPI over a range of $\pm5$\,GHz around the resonance, while measuring the count rate behind it. Supplementary Fig. \ref{fig:cluster} shows the result. We get five peaks in the spectrum with a spacing of $\sim1.8$\,GHz, which is the FSR of the cavity. We infer from the area under the peaks that around 60\% of the power is contained in the central peak.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Plots/photonCluster.pdf}
\caption{\textbf{Measured cluster structure of the photon pair source.} The plot shows the frequency-resolved photonic cluster structure of the photon pair source. The counts are recorded while scanning the FPI filter over the resonance.}
\label{fig:cluster}
\end{figure}
\subsubsection*{Stabilization}
Both the resonator and the interferometer are actively stabilized. For the resonator, we use the PDH technique. Via a glass plate (GP) we inject ion-resonant 854\,nm light from a reference laser through the outcoupling mirror M$_{\text{out}}$. By choosing diagonal polarization for the light, it couples to both directions of the resonator, which is necessary for generating SHG light in both directions. The HWP again ensures that both directions have the same polarization. The reflection from the resonator is then detected on a photodiode. Feedback is applied via a piezo-movable mirror.
The interferometer is stabilized in the following manner: the light used for the PDH-lock also generates a small amount of SHG light in both pump directions such that a Mach-Zehnder interferometer is formed between the red PBS and the blue BS. An APD in current mode is attached to the free port of the BS to detect the fringes. Feedback is applied to a pair of piezo-movable mirrors in one of the pump arms. The measured relation between the interferometer phase and the reconstructed photonic state phase $\varphi$ (eq. \eqref{eq:state}) is shown in Supplementary Figure \ref{fig:statePhase}.
At the turning points of the fringe, i.e. the gray shaded areas, no stabilization is possible. This is the reason why we use a phase of 270$^\circ$ instead of $0^\circ$ or $180^\circ$ which would result in the conventional $\Psi^+$ or $\Psi^-$ Bell state. The offset between the state phase and the interferometer phase depends mainly on the path length difference of the two interferometer paths. For the case of equal paths the resulting phase is around 270$^\circ$.
To protect the APDs from laser light, both locks are running in a sequential chopped mode. All beams (pump light, lock light 854\,nm, SHG light, and photons) pass the same chopper to synchronize the operations.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Plots/interferometer_phase-state_phase.pdf}
\caption{\textbf{Relation between interferometer phase and state phase.} The plot shows the measured state phase for different interferometer phases. In the upper plot the theoretical interferometer fringe is displayed. The grey shaded areas indicate regions where no locking is possible.}
\label{fig:statePhase}
\end{figure}
\subsection*{Supplementary Methods 2: Converter
}
\subsubsection*{Waveguide Coupling}
For optimal conversion efficiency, signal and pump field are focused onto their respective fundamental waveguide (WG) mode. For this, we employ aspheric lenses (AL), which at the same time couple out the converted light. Due to chromatic aberration, the focal length is different for all three wavelengths. Thus it is not possible to couple all three fields simultaneously when using collimated beams. To deal with this, we generate non-collimated beams with appropriate beam parameters at the WG coupling lens by varying the distances between the WG coupling lens and the fiber coupling lenses, as shown in Supplementary Figure \ref{fig:WGCoupling}. As the pump laser has an output collimator, we use an additional spherical lens (SL) for beam shaping.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Abstandsoptimierung}
\caption{\textbf{Schematic of the distances to be optimized for high coupling efficiency. }}
\label{fig:WGCoupling}
\end{figure}
Instead of finding the optimal distances empirically, we optimize them numerically. For that we first calculate the fundamental WG mode for all three wavelengths. From the initial beam parameters, given by the fiber MFDs and the beam profile of the pump laser, we calculate for each wavelength the field distribution at the WG facet as a function of the three distances $d_{1,\lambda_i}, d_{2,\lambda_i}, d_{3}$ by matrix optics. The overlap between the field distribution and the WG fundamental mode then gives a measure for the coupling efficiency. We then numerically maximize the overlap for all three wavelengths by varying $d_{1,\lambda_i}, d_{2,\lambda_i}, d_{3}$ within bounds reasonable for the setup. Note that $d_3$ is a shared parameter, as all fields are coupled to the WG with the same lens. Due to the large-diameter pump beam, it was not possible to achieve maximum coupling efficiency for all three fields simultaneously. In the optimization we therefore weight the pump field coupling efficiency less than that of the single photon fields, as we have enough pump power available. The simulation was repeated for a number of commercially available lenses for which chromatic focal shift data was available. The best combination of distances and focal lengths results in a coupling efficiency of 99.5\% for 854\,nm and 1550\,nm, respectively, and 91.9\% for 1904\,nm. The achieved experimental values are: for 854\,nm, 96.2\% (97.1\%) for the H (V) arm; for 1904\,nm, 89.4\% (88.9\%); for the 1550\,nm fiber coupling, 93.5\%, where reflection losses at the fiber facets are included. Other possible factors leading to deviations from the simulated results are aberrations in the coupling lenses and non-perfect WG geometries, which distort the WG modes. For further details see \cite{BockPhDS}.
\subsubsection*{Frequency Converter Losses}
\begin{table}
\caption{Transmissions and efficiencies of individual components of the conversion setup}
\label{tab:ConverterEfficiencies}
\begin{tabular}{c c c} & H & V \\
\hline
\hline
Optical elements transmission, 854 & 92.3\% & 93\% \\
AL transmission, 854 & \multicolumn{2}{c}{93\%} \\
WG coupling efficiency & 96.2\% & 97.1\% \\
AL transmission 1550 (3x) & \multicolumn{2}{c}{94.4\%} \\
Bandpass filter transmission & \multicolumn{2}{c}{96\%} \\
Fiber in-/out coupling efficiency & \multicolumn{2}{c}{93.5\%} \\
VBG transmission & \multicolumn{2}{c}{98.5\%} \\
Etalon transmission & \multicolumn{2}{c}{93.4\%} \\
Internal efficiency + Opt. el. tr., 1550\qquad\qquad & 93.3\% & 87.4\% \\
\hline
\vspace{1cm}
Device Efficiency & 60.1\%& 57.2\%\\
\end{tabular}
\end{table}
The total device efficiency of the converter is composed of the internal efficiency of the conversion process, coupling efficiencies, and losses induced by all optical elements. A detailed list of transmissions and efficiencies of all components is shown in Supplementary Table I. The 854\,nm signal passes the aspheric fiber coupling lens, a dielectric mirror, a dichroic mirror, the PBS and two silver mirrors. In total, the setup transmission up to the WG coupling lens is 92.3\% (93\%) for the H (V) arm. Although the WG coupling lens is custom AR-coated for all three wavelengths, it shows a transmission of only 93\% for 854\,nm. It was not possible to also measure the total setup transmission for the converted 1550\,nm light: as the converted light is overlapped with the strong pump laser behind the WG, either a dichroic mirror to reflect the pump light out of the 1550\,nm path or a separate 1550\,nm laser would be needed, neither of which was available. Only the aspheric lens transmission could be measured for 1550\,nm. The total transmission through all three lenses in the 1550\,nm path (WG coupling lens and fiber in- and out-coupling lenses) is 94.4\%. The bandpass filter shows a transmission of 96\%. The fiber between conversion and detection setup is AR-coated only on the input side; the combined in- and out-coupling efficiency is 93.5\%. For the VBG and etalon we get transmissions of 98.5\% and 93.4\%, respectively. From these numbers and the device efficiency we can now infer the combined internal efficiency and 1550\,nm setup transmission, which is 93.3\% (87.4\%) for the H (V) arm.
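For convenience, this inference can be reproduced from the tabulated numbers with a few lines of Python (values copied from Supplementary Table I; this is only an arithmetic check, not part of the measurement):
\begin{verbatim}
import numpy as np

# Transmissions/efficiencies from Supplementary Table I (H-arm, V-arm).
common = 0.93 * 0.944 * 0.96 * 0.935 * 0.985 * 0.934   # AL 854, AL 1550 (3x),
                                                        # BPF, fiber, VBG, etalon
optics_854  = np.array([0.923, 0.93])                   # optical elements, 854 nm
wg_coupling = np.array([0.962, 0.971])                  # WG coupling efficiency
device_eff  = np.array([0.601, 0.572])                  # measured device efficiency

# Combined internal efficiency + 1550 nm optics transmission, per arm.
internal_1550 = device_eff / (optics_854 * wg_coupling * common)
print(internal_1550)   # ~[0.933, 0.874]
\end{verbatim}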
It is apparent that the device efficiency is not limited by a single element with low transmission, but rather by the accumulation of many components, each with low individual loss. For a significant improvement of the device efficiency we therefore need to replace several components.
For future experiments we will therefore use custom dielectric mirrors with reflectivities > 99.5\% for the single photon wavelengths instead of the currently used silver mirrors for WG coupling. The WG coupling lens will be replaced by one with transmission > 99\% for 854\,nm and 1550\,nm. For connecting converter and detection setup, we will use a fiber with AR coating on both ends. Thus, a device efficiency of up to 70\% is within reach.
\subsubsection*{Laser Process Tomography}
To characterize the polarization preservation of the converter for arbitrarily polarized light, quantum process tomography with attenuated laser light is used.
For that, first the polarization rotation matrix of the conversion and detection setup is measured and compensated as described in the Methods section. Here we prepare 37 polarization states at the converter input with a linear polarizer and two waveplates and measure the corresponding output polarization with the single photon detectors. We then prepare the polarizations H, V, D, A, R and L at the converter input and perform quantum state tomography on the output states. From the resulting photon count rates in the different detection basis settings, the process matrix in the Pauli basis ($\sigma_0$, $\sigma_1$, $\sigma_2$, $\sigma_3$) is reconstructed with a maximum likelihood algorithm \cite{BockPhDS}. The result is shown in Supplementary Figure \ref{fig:Prozessmatrix}. We only get a significant contribution from the first matrix entry, which corresponds to a process fidelity of 99.947(2)\%, where the error is calculated with a Monte Carlo simulation assuming Poissonian photon statistics.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Prozessmatrix.pdf}
\caption{\textbf{Process matrix for the conversion setup.} The absolute values of the individual components are shown. The corresponding process fidelity is 99.947(2)\%.}
\label{fig:Prozessmatrix}
\end{figure}
\subsubsection*{Conversion Induced Background}
For our DFG process, the dominant noise source is anti-Stokes Raman scattering \cite{Zaske2011S, Krutyanskiy2017S, Kuo2018S}. To reduce the noise, we chose a waveguide with a comparatively low phase-matching temperature of 19°C and employ the narrowband filtering stage described above.
For quantifying the conversion-induced noise, the 854\,nm input light was blocked. The remaining count rate was integrated over 15 minutes for different pump powers. The dark-count corrected count rates measured on both SNSPD channels are shown in Supplementary Figure \ref{fig:Noiserate}. The actually generated noise rate is inferred from these numbers by dividing by the detection efficiencies of the detectors (31\%/35\%; here the detection efficiency was set to a different value than in the main experiment), the transmission through the waveplates and the Wollaston prism (98\%), and the coupling efficiency to the detector fibers (90\%). The larger error bars for the total noise rate stem from the 3\% uncertainty in the detection efficiency. The theoretical curve assumes back-conversion of 1550\,nm noise photons to 854\,nm, which then get blocked by the filtering stage \cite{Maring2018S}. Thus, the curve does not show the linear behavior expected for anti-Stokes Raman noise but flattens at higher pump powers, which matches the experimental data well. The resulting noise rate at the working point of 1.1\,W is 24(4)\,photons/s.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Plots/noiserate}
\caption{\textbf{Conversion induced noise rate.} Shown are the measured dark-count subtracted count rates for SNSPD channel 1 (blue) and channel 2 (red) as well as the total generated noise-rate. }
\label{fig:Noiserate}
\end{figure}
\subsection*{Supplementary Notes 1: Setup Losses}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Plots/rate1.5tau}
\caption{\textbf{Measured coincidence rate}. Comparison of the measured coincidence rates for the four different measurement setups, using a detection window of $1.5\tau$ of the photon wave packet.}
\label{fig:coincidence rate}
\end{figure}
\begin{table}[h]
\caption{Transmission and Efficiencies of individual components in setup d)}
\begin{tabular}{c c}
\hline
\hline
Fiber-fiber couplings & 76\% \\
Fiber-BS (2x) & 50\% \\
Transmission 854nm fiber (2x) & 87\% \\
Conversion (2x) & 60\% \\
Transmission 20km fiber (2x)& 42\% \\
Retroreflector & 87\%\\
\hline
Overall: source output to APD & 0.8\%
\end{tabular}
\label{tab:Efficiencies}
\end{table}
In Supplementary Figure \ref{fig:coincidence rate} the measured coincidence rates for the four configurations are shown. From these, we estimate the total setup efficiencies. The individually measured component transmissions and efficiencies are shown in Supplementary Table \ref{tab:Efficiencies} for case 4, the back-conversion setup. Fiber-to-fiber couplings in the source lab account for 76\% transmission. The fiber beam splitter separating the back-converted photons has an effective transmission of 25\%, as the photons pass it twice. The fiber connecting source and converter lab has a transmission of 87\%, including coupling efficiencies. As the filtering stage is not used, the conversion efficiency is 60\%; the 20\,km of fiber and the retroreflector account for 42\% and 87\%, respectively. In total, this leads to an overall setup transmission of 0.8\%. The differences in the setup transmissions inferred from Supplementary Figure \ref{fig:coincidence rate} can be explained by drifts of the pump laser of the photon pair source.
\subsection*{Supplementary Notes 2: Density matrices}
In Supplementary Figure \ref{fig:density_matrices}, reconstructed density matrices for the different setups are shown as examples.
\begin{figure*}
\centering
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{Plots/DensityMat_20mW_5tau_Source_woBGCorr_.pdf}
\subcaption{source}
\end{subfigure}
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{Plots/DensityMat_20mW_5tau_Converter_woBGCorr_.pdf}
\subcaption{source + converter}
\end{subfigure}\\
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{Plots/DensityMat_20mW_5tau_Converter_20km_woBGCorr_.pdf}
\subcaption{source + converter + 20km}
\end{subfigure}\\
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{Plots/DensityMat_20mW_5tau_Converter_backref_woBGCorr_.pdf}
\subcaption{source + converter + back reflection + 40km}
\end{subfigure}
\caption{\textbf{Density matrices for the four different setups.} The density matrices shown correspond to a time window $\Delta t = 5\tau_\text{mean}$ and a pump power of 20\,mW.}
\label{fig:density_matrices}
\end{figure*}
\clearpage
\subsection*{Supplementary References}
\section{Introduction}
In this work we are concerned with the numerical solution of linear systems with a Kronecker sum-structured coefficient matrix of the form:
\begin{equation}\label{eq:lin-sys}
\left(A_1\otimes I\otimes\dots\otimes I+\dots + I\otimes\dots\otimes I\otimes A_d\right)x=b,
\end{equation}
where the matrices $A_t\in\mathbb R^{n_t\times n_t}$ are symmetric and positive definite
(SPD) with spectrum contained in $[\alpha_t, \beta_t]\subset \mathbb R^+$,
and have low-rank off-diagonal blocks, for $t=1,\dots,d$.
By reshaping $x,b\in\mathbb R^{n_1\dots n_d}$ into $d$-dimensional tensors $\T X,\T{B}\in\mathbb R^{n_1\times \dots\times n_d}$, we rewrite \eqref{eq:lin-sys} as the tensor Sylvester equation
\begin{equation}\label{eq:tens-sylv}
\T X\times_1 A_1+\dots+\T X\times_d A_d=\T B,
\end{equation}
where $\times_j$ denotes the $j$-mode product for tensors \cite{kolda}.
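To make the correspondence between \eqref{eq:lin-sys} and \eqref{eq:tens-sylv} concrete, the following small NumPy sketch builds the Kronecker sum for $d=3$, solves the linear system densely, and verifies that the reshaped solution satisfies the tensor Sylvester equation (with the default row-major reshaping, the $t$-th Kronecker factor acts on mode $t$). It is only an illustration on tiny random SPD matrices.
\begin{verbatim}
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
dims = (4, 5, 6)
As = []
for n in dims:                          # random SPD coefficients A_t
    M = rng.standard_normal((n, n))
    As.append(np.eye(n) + M @ M.T)

def kron_sum(As):
    """A_1 x I x ... x I + ... + I x ... x I x A_d (dense, tiny sizes only)."""
    terms = []
    for t, A in enumerate(As):
        factors = [np.eye(B.shape[0]) for B in As]
        factors[t] = A
        terms.append(reduce(np.kron, factors))
    return sum(terms)

b = rng.standard_normal(np.prod(dims))
x = np.linalg.solve(kron_sum(As), b)

# Reshape (row-major) and check the tensor Sylvester equation.
X, B = x.reshape(dims), b.reshape(dims)
def mode_product(X, A, k):
    return np.moveaxis(np.tensordot(A, X, axes=(1, k)), 0, k)
residual = sum(mode_product(X, A, t) for t, A in enumerate(As)) - B
print(np.linalg.norm(residual))         # ~1e-13
\end{verbatim}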
Such problems arise naturally when discretizing $d$-dimensional
Laplace-like operators by means
of tensorized grids that respect the separability of the operator
\cite{townsend2015automatic,townsend-fortunato,strossner2021fast,palitta2016matrix,massei2021rational}.
In the case
$d = 2$, we recover the well-known case of matrix Sylvester equations, that also
plays a dominant role in the model reduction of dynamical systems
\cite{antoulas-approximation}.
Several methods for solving matrix and tensor Sylvester equations assume
that the right-hand side has some structure, such as being low-rank. This is a necessary
assumption for dealing with large scale problems. In this paper, we only consider unstructured right-hand sides $\T B$, for which the cases of interest are those where the parameters $n_t$ have
small to medium size. On the other hand, the structure in
the coefficients $A_t$ (which are SPD and have off-diagonal blocks of low-rank),
will be crucial to improve the complexity of the solver with respect to the
completely unstructured case. We also remark that when the $A_t$s
arise from the discretization of elliptic differential operators,
the structure assumed in this work is often present
\cite{hackbusch,borm}.
\subsection{Related work}
In the matrix case (i.e., $d = 2$ in \eqref{eq:tens-sylv}) there are two main
procedures that make no assumptions on $A_1, A_2,$ and $\T B$: the Bartels-Stewart algorithm~\cite{bartels-stewart,recursive-sylvester-kagstrom} and
the Hessenberg-Schur method~\cite{hessenberg-schur}. These are based on taking the coefficients $A_1,A_2$
to either Hessenberg or triangular form, and then solving the linear system by (block)
back-substitution. The idea has been generalized to $d$-dimensional tensor
Sylvester equations in \cite{chen2020recursive}. In the case
where $n = n_1 = \ldots = n_d$, the computational complexity of these
approaches is $\mathcal O(dn^3 + n^{d+1})$ flops.
When the right-hand side $\T B$ is a low-rank matrix or is representable in a low-rank
tensor format (Tucker, Tensor-Train, Hierarchical Tucker, \ldots) the tensor equation
can be solved much more efficiently, and the returned approximate solution is low-rank,
which allows us to store it in a low-memory format. Indeed, in this case it is possible
to exploit tensorized Krylov (and rational Krylov) methods \cite{Simoncini2016,druskin2011analysis,kressner2009krylov}, or the
factored Alternating Direction
Implicit Method (fADI) \cite{benner2009adi}. The latter methods build a rank $s$ approximant
to the solution $\T X$ by solving $\mathcal O(s)$
shifted linear systems with the matrices $A_t$. This is very effective when
also the coefficients $A_t$ are structured. For instance, when the $A_t$
are sparse or hierarchically low-rank, this often brings the cost of approximating $X$
to $\mathcal O(sn \log^an)$ for $a \in \{0,1,2\}$ \cite{hackbusch}. In the tensor
case, another option is to rely on methods adapted to the low-rank structure
under consideration:
AMEn \cite{dolgov2014alternating} or TT-GMRES \cite{dolgov2013tt}
for Tensor-Trains, projection methods in the
Hierarchical Tucker format \cite{ballani2013projection}, and other approaches.
In this work, we consider an intermediate setting, where the coefficients $A_t$
are structured, while the right-hand side $\T B$ is not. More specifically, we assume
that the $A_t$ are SPD and efficiently representable in
the Hierarchical SemiSeparable format (HSS) \cite{xia2010fast}. This implies that
each coefficient $A_t$ can be partitioned in a $2 \times 2$ block matrix
with low-rank off-diagonal blocks, and diagonal blocks with the same recursive structure.
A particular case of this setting has been considered in
\cite{townsend-fortunato}, where the $A_t$
are banded SPD (and therefore have low-rank off-diagonal blocks), and a nested
Alternating Direction Implicit (ADI) solver is applied to a 3D tensor equation
with no structure in $\T B$. The complexity of the algorithm is quasi-optimal
$\mathcal O(n^3 \log^3 n)$, but the hidden constant is very large, and the approach is
not practical already for moderate $n$; see \cite{strossner2021fast} for a comparison
with methods with a higher asymptotic complexity.
We remark that the tensor equation \eqref{eq:tens-sylv} can be solved by diagonalization of the $A_t$s
in a stable way, as described in the pseudocode of Algorithm~\ref{alg:diag}. Without further assumptions, this costs $\mathcal O(dn^3 + n^{d+1})$
when all dimensions are equal.
\begin{algorithm}[H]
\small
\caption{Solution of \eqref{eq:tens-sylv} by diagonalization}\label{alg:diag}
\begin{algorithmic}[1]
\Procedure{lyapnd\_diag}{$A_1,A_2,\dots, A_d,\T B$}
\For{$i=1,\dots,d$}
\State $[S_i, D_i]=\texttt{eig}(A_i)$
\State $\T B \gets \T B \times_i S_i^*$
\EndFor
\For{$i_1=1,\dots, n_1,\dots, i_d=1,\dots,n_d$}
\State $\T F(i_1,\dots,i_d)\gets ([D_1]_{i_1i_1}+\dots + [D_d]_{i_di_d})^{-1}$
\EndFor
\State $\T X\gets \T B \circ \T F$
\For{$i=1,\dots,d$}
\State $\T X \gets \T X \times_i S_i$
\EndFor
\State\Return $\T X$
\EndProcedure
\end{algorithmic}
\end{algorithm}
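For reference, a direct dense NumPy translation of Algorithm~\ref{alg:diag} might look as follows (an illustrative sketch; it ignores all structure in the $A_t$ and is only meant to make the mode products concrete):
\begin{verbatim}
import numpy as np

def mode_product(X, M, k):
    """Tensor-times-matrix product X x_k M along mode k (0-based)."""
    return np.moveaxis(np.tensordot(M, X, axes=(1, k)), 0, k)

def lyapnd_diag(As, B):
    """Solve X x_1 A_1 + ... + X x_d A_d = B by diagonalization."""
    d = len(As)
    eigvals, eigvecs = [], []
    for i, A in enumerate(As):
        w, S = np.linalg.eigh(A)            # A_i = S diag(w) S^T
        eigvals.append(w)
        eigvecs.append(S)
        B = mode_product(B, S.T, i)         # transform the right-hand side
    # F(i1,...,id) = 1 / (w_1[i1] + ... + w_d[id]) via broadcasting.
    denom = sum(w.reshape([-1 if j == k else 1 for j in range(d)])
                for k, w in enumerate(eigvals))
    X = B / denom
    for i, S in enumerate(eigvecs):
        X = mode_product(X, S, i)           # transform back
    return X
\end{verbatim}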
If the $A_t$ can be efficiently diagonalized,
then Algorithm~\ref{alg:diag} attains a quasi-optimal complexity. For instance,
in the case of finite difference discretizations of the $d$-dimensional Laplace operator,
diagonalizing the matrices $A_t$ via the fast sine or cosine transforms (depending on the boundary conditions) yields the complexity
$\mathcal O(n^d \log n)$.
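For instance, for the standard second-order finite difference matrix $A=\mathrm{tridiag}(-1,2,-1)$ with Dirichlet boundary conditions, the eigenvectors are sine vectors and the eigenvalues are $2-2\cos(k\pi/(n+1))$, so the transforms in Algorithm~\ref{alg:diag} reduce to discrete sine transforms. A minimal 2D Python sketch, using SciPy's orthonormalized DST-I (which is its own inverse), is the following:
\begin{verbatim}
import numpy as np
from scipy.fft import dstn

n = 256
lam = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))  # 1D eigenvalues

B = np.random.default_rng(0).standard_normal((n, n))
B_hat = dstn(B, type=1, norm="ortho")            # transform both modes
X_hat = B_hat / (lam[:, None] + lam[None, :])    # divide by eigenvalue sums
X = dstn(X_hat, type=1, norm="ortho")            # transform back (self-inverse)

# Residual check: A X + X A = B with A = tridiag(-1, 2, -1).
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(np.linalg.norm(A @ X + X @ A - B))
\end{verbatim}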
Recently, it has been shown that positive definite HSS matrices enjoy a structured eigendecomposition
\cite{ou2022superdc}, which can be computed in $\mathcal O(n \log^2 n)$ time. In addition, multiplying a vector by the eigenvector matrix costs only $\mathcal O(n\log n)$, because the latter can be represented as a product of logarithmically many permutations and Cauchy-like matrices. These features
can be exploited in Algorithm~\ref{alg:diag} to obtain an efficient solver.
The approach proposed in this work has the same $\mathcal O(n^d \log n)$
asymptotic complexity but,
as we will demonstrate in our numerical experiments,
will result in significantly lower computational costs.
\subsection{Main contributions}
The main contribution is the design and analysis of an algorithm
with $\mathcal O(\prod_{j = 1}^d n_j \log(\max_{j} n_j))$
complexity
for
solving the tensor equation \eqref{eq:tens-sylv} with HSS and SPD coefficients
$A_t$. The algorithm is based on a divide-and-conquer scheme, where \eqref{eq:tens-sylv}
is decomposed into several tensor equations that have either a
low-rank right-hand side, or a small dimension. In the tensor case, the low-rank
equations are solved exploiting nested calls to the $(d-1)$-dimensional solver.
Concerning the theoretical analysis, we provide the following contributions:
\begin{itemize}
\item An error analysis that, assuming a bounded residual error on the low-rank
subproblems, guarantees the accuracy on the final result of the divide-and-conquer scheme
(Theorem~\ref{thm:2d-accuracy} and Lemma~\ref{lem:residual-d>2}).
\item A novel a priori error analysis for the use of fADI with inexact solves;
more precisely, in Theorem~\ref{thm:adi-res-inex} we provide an explicit bound for the difference
between the residual norm after $s$ steps of fADI in exact arithmetic, and the one obtained
by performing the fADI steps with inexact solves. This enables us to control the residual
norm of the error based only on the number of shifts used in all calls to fADI
in our solver (Theorem~\ref{thm:accuracy-d}).
These results are very much related to those in \cite[Theorem 3.4 and Corollary 3.1]{kurschner2020inexact}, where the convergence of fADI with inexact solves is also analyzed. Nevertheless, the assumptions and the techniques used in the proofs of such results are quite different. The goal of \cite{kurschner2020inexact} is to progressively increase the level of inexactness along the iterations of fADI, ensuring that the final residual norm remains below a target threshold. The authors propose an adaptive relaxation strategy that requires the computation of intermediate quantities generated during the execution of the algorithm. In our work, the level of inexactness is fixed and, by exploiting the decay of the residual norm when using optimal shifts, we provide upper bounds for the number of iterations needed to attain the target accuracy.
\item We prove that for a $d$-dimensional problem, the condition number $\kappa$ of the
tensor Sylvester equation
can (in principle) amplify the residual norm by a factor $\kappa^{d - 1}$, when a
nested solver is used.
When the $A_t$ are $M$-matrices, we show that the impact is reduced to
$(\sqrt{\kappa})^{d - 1}$ (Lemma~\ref{lem:m-matrices}).
\item A thorough complexity analysis (Theorem~\ref{thm:2d-complexity} and
\ref{thm:3d-complexity}), where the role of the HSS ranks, the target accuracy,
and the
condition number of the $A_t$s are fully revealed. In particular, we show
that the condition numbers have a mild impact on the computational cost.
\end{itemize}
The paper is organized as follows. In Section~\ref{sec:high-level}, we provide a
high-level description of the proposed scheme, for a $d$-dimensional tensor Sylvester
equation. Section~\ref{sec:2d-case} and Section~\ref{sec:tensors} are
dedicated to the theoretical analysis of
the algorithm for the matrix and tensor case, respectively. Finally,
in the numerical experiments of Section~\ref{sec:numerical-experiments}
we compare the proposed algorithm with Algorithm~\ref{alg:diag}
where the diagonalization is performed with a dense method or with the
algorithm proposed in \cite{ou2022superdc} for HSS matrices.
\subsection{Notation}
Throughout the paper, we denote matrices with capital
letters ($X$, $Y$, \ldots), and tensors with
calligraphic letters
($\T X, \T Y$, \ldots). We use the same letter with different
fonts to denote matricizations of tensors (e.g., $X$ is a matricization of
$\T X$).
The Greek letters $\alpha_t, \beta_t$ indicate the
extrema of the interval $[\alpha_t, \beta_t]$ enclosing the spectrum
of $A_t$, and $\kappa$ is used to denote the upper bound on
the condition number of the
Sylvester operator
$\kappa = (\beta_1 + \ldots + \beta_d) / (\alpha_1 + \ldots + \alpha_d)$.
\section{High-level description of the divide-and-conquer scheme}
\label{sec:high-level}
We consider HSS matrices $A_t$, so that each $A_t$
can be decomposed as $A_t=A_t^{\mathrm{diag}} + A_t^{\mathrm{off}}$
where $A_t^{\mathrm{diag}}$ is block diagonal with square diagonal blocks,
$A_t^{\mathrm{off}}$ is low-rank and
the decomposition applies recursively to the blocks of
$A_t^{\mathrm{diag}}$.
A particular case where this assumption is satisfied is when the coefficients $A_t$
are all banded.
In the spirit of divide and conquer solvers for matrix equations~\cite{kressner2019low,massei2022hierarchical}, we
remark that, given the additive decomposition
$A_1=A_1^{\mathrm{diag}}+A_1^{\mathrm{off}}$,
the solution $\T X$ of \eqref{eq:tens-sylv} can
be written as $\T X^{(1)}+\delta \T X$ where
\begin{align}
\T X^{(1)}\times_1 A_1^{\mathrm{diag}}+\T X^{(1)}\times_2 A_2+\dots+\T X^{(1)}\times_d A_d&=\T B,\label{eq:bench}\\
\delta \T X\times_1 A_1+\delta\T X\times_2 A_2+\dots+\delta\T X\times_d A_d&=-\T X^{(1)} \times_1 A_1^{\mathrm{off}}.\label{eq:update}
\end{align}
If $A_1^{\mathrm{diag}}= \left[\begin{smallmatrix}
A^{(1)}_{1,11} \\ &A^{(1)}_{1,22}
\end{smallmatrix}\right]$,
then \eqref{eq:bench} decouples into two tensor equations of the form
\begin{equation}\label{eq:diag}
\T X^{(1)}_{j}\times_1 A_{1,jj}^{(1)}+\T X^{(1)}_{j}\times_2 A_2+\dots+\T X^{(1)}_{j}\times_d A_d=\T B_{j}, \qquad j=1,2,
\end{equation}
with $\T X^{(1)}_{1}$ containing the entries of $\T X^{(1)}$ with the first index restricted to the
column indices of $A_{1,11}^{(1)}$, and $\T X^{(1)}_{2}$
to those of $A_{1,22}^{(1)}$.
Equation~\eqref{eq:update} has the notable property that its
right-hand side is a $d$-dimensional tensor multiplied in the first mode by a low-rank matrix.
Merging the modes from $2$ to $d$ in equation~\eqref{eq:update} yields the matrix Sylvester equation
\begin{equation}\label{eq:reshape-update}
A_1 \delta X + \delta X \left(A_2\otimes I\otimes\dots\otimes I+\dots + I\otimes\dots\otimes I\otimes A_d\right)^T= -A_1^{\mathrm{off}}X^{(1)}.
\end{equation}
In particular, the right-hand side of \eqref{eq:reshape-update}
has rank bounded by $\mathrm{rank}(A_1^{\mathrm{off}})$ and the matrix
coefficients of the equation are positive definite. This implies
that $\delta X$ is numerically low-rank and can be efficiently
approximated with a low-rank Sylvester solver such as a rational
Krylov subspace method \cite{Simoncini2016,druskin2011analysis}
or the \emph{alternating direction implicit method} (ADI)
\cite{benner2009adi}.
We note that applying the splitting simultaneously to all $d$
modes yields an update equation for $\delta \T X$ of the form
\begin{equation} \label{eq:update-all}
\delta \T X\times_1 A_1+\delta\T X\times_2 A_2+\dots+\delta\T X\times_d A_d
=- \sum_{t = 1}^d \T X^{(1)} \times_t A_t^{\mathrm{off}},
\end{equation}
and $2^d$ recursive calls:
\begin{equation} \label{eq:diag-all}
\T X^{(1)}_{j_1, \ldots, j_d} \times_1 A_{1,j_1 j_1}^{(1)} +
\dots +
\T X^{(1)}_{j_1, \ldots, j_d}
\times_d A_{d, j_d j_d} =
\T B_{j_1, \ldots, j_d}, \qquad j_t \in \{ 1,2 \}.
\end{equation}
However, when $d > 2$, the right-hand side of Equation~\eqref{eq:update-all} is not
necessarily low-rank for any
matricization. On the other hand, by additive splitting
of the right-hand side we can write
$\delta \T X := \delta \T X_1 + \ldots + \delta \T X_d$, where
$\delta \T X_t$ is the solution to an equation of the form
\eqref{eq:update}.
In view of the above discussion, we propose the following recursive strategy for solving \eqref{eq:lin-sys}:
\begin{enumerate}
\item if all the $n_i$s are sufficiently small
then solve \eqref{eq:lin-sys} by diagonalization,
\item otherwise split
the equation along all modes as in
\eqref{eq:update-all} and \eqref{eq:diag-all},
\item compute $\T X^{(1)}$ by solving the $2^d$ equations in \eqref{eq:diag-all} recursively,
\item approximate $\delta \T X_t$
by applying a low-rank matrix Sylvester solver
for $t = 1, \ldots, d$.
\item return $\T X = \T X^{(1)}+\delta \T X_1 + \ldots + \delta \T X_d$.
\end{enumerate}
The procedure will be summarized in Algorithm~\ref{alg:dac} of Section~\ref{sec:tensors},
where we will consider the case of tensors in detail.
To address point 4.\ we can use any of the available low-rank
solvers for Sylvester equations \cite{Simoncini2016}; in this work
we consider the fADI and the rational Krylov subspace methods that
are discussed in detail in the next sections. In Algorithm~\ref{alg:dac}
we refer to the chosen method with \textsc{low\_rank\_sylv}. We remark
that both of these choices require a low-rank factorization of
the mode-$j$ unfolding of $\T X_0\times_j A_j^{\mathrm{off}}$ and the solution of
shifted linear systems with a Kronecker sum of $d-1$ matrices $A_t$.
The latter task is again of the form \eqref{eq:lin-sys} with $d-1$ modes and is performed recursively with Algorithm~\ref{alg:dac} when $d>2$; this makes our algorithm a nested solver. At the base of the recursion, when \eqref{eq:lin-sys} has only one mode, this is just a shifted linear system. We discuss this in detail in Section~\ref{sec:low-rank}.
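To illustrate the splitting \eqref{eq:bench}--\eqref{eq:update} in the simplest setting $d=2$, the following Python sketch performs a single, non-recursive divide-and-conquer step with dense solvers; in the actual algorithm the correction equation is of course handled by a low-rank solver (fADI or rational Krylov) and the diagonal blocks are processed recursively.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, m = 8, 4                                   # size and split point of A_1
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A1, A2 = np.eye(n) + M1 @ M1.T, np.eye(n) + M2 @ M2.T   # SPD coefficients
B = rng.standard_normal((n, n))

# Split A_1 into its block-diagonal and off-diagonal (low-rank) parts.
A1_diag = np.zeros_like(A1)
A1_diag[:m, :m], A1_diag[m:, m:] = A1[:m, :m], A1[m:, m:]
A1_off = A1 - A1_diag

# Step 1: the block-diagonal equation (eq:bench) decouples row-block-wise.
X1 = np.vstack([solve_sylvester(A1[:m, :m], A2, B[:m, :]),
                solve_sylvester(A1[m:, m:], A2, B[m:, :])])

# Step 2: the update equation (eq:update) has a low-rank right-hand side.
dX = solve_sylvester(A1, A2, -A1_off @ X1)

X = X1 + dX
print(np.linalg.norm(A1 @ X + X @ A2 - B))    # residual of the original equation
\end{verbatim}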
\subsection{Notation for Hierarchical matrices}
\label{sec:hierarchical}
The HSS matrices $A_t$ ($t = 1, \ldots, d$) can be partitioned as follows:
\begin{equation} \label{eq:hodlr-split}
A_t = \begin{bmatrix}
A_{t, 11}^{(1)} & A_{t, 12}^{(1)} \\
A_{t, 21}^{(1)} & A_{t, 22}^{(1)} \\
\end{bmatrix} \in \mathbb R^{n_t \times n_t},
\end{equation}
where $A_{t,12}^{(1)}$ and $A_{t,21}^{(1)}$ have low rank,
and $A_{t, ii}^{(1)}$
are HSS matrices. In particular, the diagonal blocks are square and can
be recursively partitioned in the same way $\ell_t - 1$ times.
The depth $\ell_t$ is chosen to ensure that the blocks
at the lowest level of the recursion are smaller than
a prescribed minimal size $n_{\min} \times n_{\min}$.
More formally, after one step of recursion
we partition $I_1^{(0)} = \{ 1, \ldots, n_t \} =
I_1^{(1)} \sqcup I_2^{(1)}$ where $I_1^{(1)}$ and $I_2^{(1)}$ are
two sets of contiguous indices; the matrices
$A_{t, ij}$ in \eqref{eq:hodlr-split} have
$I_i^{(1)}$ and $I_j^{(1)}$ as row and column indices,
respectively, for $i,j = 1,2$.
Similarly, after $h \leq \ell_t$ steps of recursion, one has
the partitioning $\{ 1, \ldots, n_t \} =
I_1^{(h)} \sqcup \ldots \sqcup I_{2^{h}}^{(h)}$, and we denote
by $A_{t,ij}^{(h)}$ with $1 \leq i,j \leq 2^h$ the
submatrices of $A_t$ with row indices $I_i^{(h)}$ and
column indices $I_j^{(h)}$. Note that $A_{t,ii+1}^{(h)}$
and $A_{t,i+1i}^{(h)}$ indicate the low-rank off-diagonal
blocks uncovered at level $h$ (see Figure~\ref{fig:hodlr}).
The quad-tree structure of submatrices of $A_t$, corresponding to the above described splitting of row and column indices, is called the \emph{cluster tree} of $A_t$; see \cite[Definition 1]{massei2022hierarchical} for the rigorous definition. The integer $\ell_t$ is called the \emph{depth of the cluster tree}.
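A minimal Python sketch of how such a partitioning can be generated (halving each block until the prescribed minimal size is reached) is the following; it is only meant to make the index sets $I_i^{(h)}$ concrete.
\begin{verbatim}
def cluster_tree(n, n_min=256):
    """Index sets I_i^(h) of {0,...,n-1}: level h splits every block of
    level h-1 in half, until all blocks have at most n_min indices."""
    levels = [[range(n)]]
    while max(len(block) for block in levels[-1]) > n_min:
        levels.append([half for block in levels[-1]
                       for half in (block[:len(block) // 2],
                                    block[len(block) // 2:])])
    return levels

# Example: depth of the cluster tree for n = 4096 and n_min = 256.
print(len(cluster_tree(4096)) - 1)   # -> 4
\end{verbatim}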
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8] \small
\coordinate (T18) at (0,0);
\coordinate (T14) at (-4,-1);
\coordinate (T58) at (4,-1);
\coordinate (T12) at (-6,-2);
\coordinate (T34) at (-2,-2);
\coordinate (T56) at (2,-2);
\coordinate (T78) at (6,-2);
\coordinate (T1) at (-7,-3);
\coordinate (T2) at (-5,-3);
\coordinate (T3) at (-3,-3);
\coordinate (T4) at (-1,-3);
\coordinate (T5) at (1,-3);
\coordinate (T6) at (3,-3);
\coordinate (T7) at (5,-3);
\coordinate (T8) at (7,-3);
\node (N18) at (T18) {$I^{(0)}_1$};
\node (N14) at (T14) {$I_1^{(1)}$};
\node (N58) at (T58) {$I_2^{(1)}$};
\node (N12) at (T12) {$I_1^{(2)}$};
\node (N34) at (T34) {$I_2^{(2)}$};
\node (N56) at (T56) {$I_3^{(2)}$};
\node (N78) at (T78) {$I_4^{(2)}$};
\node (N1) at (T1) {$I_1^{(3)}$};
\node (N2) at (T2) {$I_2^{(3)}$};
\node (N3) at (T3) {$I_3^{(3)}$};
\node (N4) at (T4) {$I_4^{(3)}$};
\node (N5) at (T5) {$I_5^{(3)}$};
\node (N6) at (T6) {$I_6^{(3)}$};
\node (N7) at (T7) {$I_7^{(3)}$};
\node (N8) at (T8) {$I_8^{(3)}$};
\draw[->] (N18.south) -- (N14);
\draw[->] (N18.south) -- (N58);
\draw[->] (N14.south) -- (N12);
\draw[->] (N14.south) -- (N34);
\draw[->] (N58.south) -- (N56);
\draw[->] (N58.south) -- (N78);
\draw[->] (N12.south) -- (N1);
\draw[->] (N12.south) -- (N2);
\draw[->] (N34.south) -- (N3);
\draw[->] (N34.south) -- (N4);
\draw[->] (N56.south) -- (N5);
\draw[->] (N56.south) -- (N6);
\draw[->] (N78.south) -- (N7);
\draw[->] (N78.south) -- (N8);
\end{tikzpicture} \\[.5cm]
\begin{tikzpicture}[scale=0.35]
\begin{scope}
[xshift=0cm]
\node [above] at (4,8) {$h=0$};
\draw (0,0) -- (0,8) -- (8,8) -- (8,0) -- cycle;
\node at (4,4) {$A_t$};
\end{scope}
\begin{scope}
[xshift=9cm]
\node [above] at (4,8) {$h=1$};
\draw (0,0) -- (0,8) -- (8,8) -- (8,0) -- cycle;
\draw[fill=blue!25] (0,0) rectangle (4,4);
\draw[fill=blue!25] (4,4) rectangle (8,8);
\draw (0,4) -- (8,4);
\draw (4,0) -- (4,8);
\node at (2,6) {$A_{t,11}^{(1)}$};
\node at (2,2) {$A_{t,21}^{(1)}$};
\node at (6,6) {$A_{t,12}^{(1)}$};
\node at (6,2) {$A_{t,22}^{(1)}$};
\end{scope}
\begin{scope}
[xshift=18cm]
\node [above] at (4,8) {$h=2$};
\draw (0,0) -- (0,8) -- (8,8) -- (8,0) -- cycle;
\draw[fill=blue!25] (4,0) rectangle (6,2);
\draw[fill=blue!25] (0,4) rectangle (2,6);
\draw[fill=blue!25] (6,2) rectangle (8,4);
\draw[fill=blue!25] (2,6) rectangle (4,8);
\draw (0,2) -- (8,2);
\draw (0,4) -- (8,4);
\draw (0,6) -- (8,6);
\draw (2,0) -- (2,8);
\draw (4,0) -- (4,8);
\draw (6,0) -- (6,8);
\foreach \i in {1, ..., 4} {
\foreach \j in {1, ..., 4} {
\node at (2*\i-1, 8-2*\j+1) {\tiny $A_{t,\j\i}^{(2)}$};
}
}
\end{scope}
\begin{scope}
[xshift=27cm]
\node [above] at (4,8) {$h=3$};
\draw (0,0) -- (0,8) -- (8,8) -- (8,0) -- cycle;
\draw[fill=blue!25] (6,0) rectangle (7,1);
\draw[fill=blue!25] (4,2) rectangle (5,3);
\draw[fill=blue!25] (2,4) rectangle (3,5);
\draw[fill=blue!25] (0,6) rectangle (1,7);
\draw[fill=blue!25] (7,1) rectangle (8,2);
\draw[fill=blue!25] (5,3) rectangle (6,4);
\draw[fill=blue!25] (3,5) rectangle (4,6);
\draw[fill=blue!25] (1,7) rectangle (2,8);
\draw (0,1) -- (8,1);
\draw (0,2) -- (8,2);
\draw (0,3) -- (8,3);
\draw (0,4) -- (8,4);
\draw (0,5) -- (8,5);
\draw (0,6) -- (8,6);
\draw (0,7) -- (8,7);
\draw (1,0) -- (1,8);
\draw (2,0) -- (2,8);
\draw (3,0) -- (3,8);
\draw (4,0) -- (4,8);
\draw (5,0) -- (5,8);
\draw (6,0) -- (6,8);
\draw (7,0) -- (7,8);
\end{scope}
\end{tikzpicture}
\caption{Example of the hierarchical low-rank structure
obtained with the recursive partitioning in
\eqref{eq:hodlr-split}. The light blue blocks are the
low-rank submatrices identified at each level.}
\label{fig:hodlr}
\end{figure}
Often, we will need to group together all the diagonal blocks
at level $h$; we denote this matrix by $A_t^{(h)}$, that is:
\begin{equation} \label{eq:hodlr-level-h}
A_t =
\underbrace{\begin{bmatrix}
A_{t,11}^{(h)}& & & \\
& \ddots & & \\
& & \ddots & \\
& & &A_{t,2^{h} 2^{h}}^{(h)}
\end{bmatrix}}_{A_t^{(h)}} +
\begin{bmatrix}
0 & \star & \dots & \star \\
\star &\ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \star\\
\star & \dots & \star & 0
\end{bmatrix}.
\end{equation}
Finally, the maximum rank of the off-diagonal blocks of an HSS matrix is called the \emph{HSS rank}.
\subsection{Representation and operations with HSS matrices}
An $n\times n$ matrix in the form described in the previous section, with HSS rank $k$, can be
effectively stored in the HSS format \cite{xia2010fast}, using only
$\mathcal O(nk)$ memory. Using this structured representation,
matrix-vector multiplications and solution
of linear systems can be performed with $\mathcal O(nk)$ and
$\mathcal O(nk^2)$ flops, respectively.
Our numerical results leverage the implementation of this format
and the related matrix operations
available in \texttt{hm-toolbox} \cite{massei2020hm}.
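As an illustration, a minimal MATLAB sketch of this workflow is reported below; the function names (\texttt{hssoption}, \texttt{hss}, \texttt{hssrank}) are those of \texttt{hm-toolbox}, but the exact calling sequences should be checked against the installed version of the toolbox.
\begin{verbatim}
% Minimal sketch, assuming hm-toolbox is on the MATLAB path.
n = 512; nmin = 32;
hssoption('block-size', nmin);    % minimal block size n_min of the cluster tree
T = full(gallery('tridiag', n));  % dense SPD test matrix (1D discrete Laplacian)
A = hss(T);                       % compression into the HSS format
k = hssrank(A);                   % HSS rank (equal to 1 for a tridiagonal matrix)
b = randn(n, 1);
x = A \ b;                        % structured solve, O(n k^2) flops
\end{verbatim}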
\section{The divide-and-conquer approach for matrix Sylvester equations}
\label{sec:2d-case}
We begin by discussing the case $d=2$, that is
the matrix Sylvester equation
\begin{equation} \label{eq:sylv2d}
A_1 X + XA_2 = B,
\qquad
B \in \mathbb{C}^{n_1 \times n_2},
\end{equation}
since there is a
major difference with respect to $d>2$ that makes the
theoretical analysis much simpler. Indeed, the call
to \textsc{low\_rank\_sylv} for solving \eqref{eq:reshape-update}
does not need to recursively call the divide-and-conquer scheme,
which is needed when $d > 2$ for generating the
right Krylov subspaces. Moreover, we assume that
$\ell_1 = \ell_2 =: \ell$, which automatically implies that
$n_1$ and $n_2$ are of the same order of magnitude; the
algorithm can be easily adjusted for unbalanced dimensions,
as we discuss in detail in Section~\ref{sec:unbalanced-2d}.
We will denote by
$B^{(h)} = [B_{ij}^{(h)}]$
the matrix $B$ seen as a block matrix partitioned according
to the cluster trees of $A_1$ and $A_2$ at level $h$ for
the rows and columns, respectively.
Splitting both modes at once yields the update equation
\begin{equation}\label{eq:2dcase}
A_1\delta X + \delta X A_2 = -A_1^{\mathrm{off}} X^{(1)} -
X^{(1)} A_2^{\mathrm{off}},\qquad X^{(1)}=\begin{bmatrix}
X^{(1)}_{11}& X_{12}^{(1)} \\
X_{21}^{(1)} & X_{22}^{(1)}
\end{bmatrix}
\end{equation}
where $X_{ij}^{(1)}$ is the solution of
$A_{1,ii}^{(1)} X_{ij}^{(1)} +
X_{ij}^{(1)} A_{2,jj}^{(1)} = B^{(1)}_{ij}$
and $B^{(1)}_{ij}$ is the block in position $(i,j)$ in
$B^{(1)}$.
The right-hand side of \eqref{eq:2dcase} has rank bounded by
$\mathrm{rank}(A_1^{\mathrm{off}})+\mathrm{rank}(A_2^{\mathrm{off}})$,
therefore we can use \textsc{low\_rank\_sylv}
to solve \eqref{eq:2dcase}. The procedure is summarized in Algorithm~\ref{alg:dac2d-balanced}.
\begin{algorithm}[H]
\small
\caption{}\label{alg:dac2d-balanced}
\begin{algorithmic}[1]
\Procedure{lyap2d\_d\&c\_balanced}{$A_1,A_2, B, \epsilon$}
\If{$\max_i n_i\leq n_{\min}$}
\State\Return\Call{lyapnd\_diag}{$A_1,A_2, B$}
\Else
\State $X_{ij}^{(1)} \gets$ \Call{lyap2d\_d\&c}{$A_{1,ii}^{(1)}, A_{2,jj}^{(1)}, B_{ij}^{(1)}, \epsilon$}, \ for $i,j = 1,2$
\State $X^{(1)} \gets \left[
\begin{smallmatrix}
X_{11}^{(1)} & X_{12}^{(1)} \\
X_{21}^{(1)} & X_{22}^{(1)} \\
\end{smallmatrix}
\right]$
\State Retrieve a low-rank factorization of
$A_1^{\mathrm{off}} X^{(1)} + X^{(1)} A_2^{\mathrm{off}} = UV^T$\label{step:fact}
\State $\delta X\gets \Call{low\_rank\_sylv}{A_1, A_2, U, V, \epsilon}$
\State\Return $X^{(1)}+\delta X$
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{remark}\label{rem:rhs}
Efficient solvers of Sylvester equations with low-rank right-hand sides
need a factorization of the latter; see line~\ref{step:fact}
in Algorithm~\ref{alg:dac2d-balanced}. Once the matrix $X^{(1)}$ is
computed, the factors $U$ and $V$ are retrieved with explicit formulas
involving the factorizations of the matrices $A_1^{\mathrm{off}}$ and
$A_2^{\mathrm{off}}$ \cite[Section 3.1]{kressner2019low}. The low-rank
representation can be cheaply compressed via a QR-SVD based procedure;
our implementation always applies this compression step.
\end{remark}
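The recompression mentioned in the remark can be sketched in MATLAB as follows (function and variable names are ours); it replaces the factors $U$, $V$ with thinner ones whose product coincides with $UV^T$ up to a relative tolerance \texttt{tol}.
\begin{verbatim}
function [U, V] = compress_lowrank(U, V, tol)
% COMPRESS_LOWRANK  QR-SVD recompression of the product U*V'.
[QU, RU] = qr(U, 0);
[QV, RV] = qr(V, 0);
[W, S, Z] = svd(RU * RV');
s = diag(S);
r = sum(s > tol * max(s));          % numerical rank at relative tolerance tol
U = QU * W(:, 1:r) * S(1:r, 1:r);   % singular values absorbed into U
V = QV * Z(:, 1:r);
end
\end{verbatim}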
\subsection{Analysis of the equations generated in the recursion}
\label{sec:equations-recursion}
In this section we introduce the notation for all the equations
solved in the recursion, as this will be useful in the error and
complexity analysis.
We denote with capital letters (e.g., $X, \delta X$)
the exact solutions of such equations, and with an additional
tilde
(i.e., $\widetilde{X}$ and $\delta \widetilde{X}$)
their inexact counterparts obtained in finite
precision computations.
\subsubsection{Exact arithmetic}
\label{sec:d=2-exact-equations}
We begin by considering the computation, with Algorithm~\ref{alg:dac2d-balanced}, of the solution of
a matrix Sylvester equation, assuming that
all the equations generated during the recursion are solved exactly.
In this scenario, the
solution $X$ admits the additive splitting
\[
X = X^{(\ell)} + \delta X^{(\ell-1)} + \ldots + \delta X^{(0)},
\]
where $X^{(\ell)}$ and $\delta X^{(h)}$ collect all the
solutions determined
at depth $\ell$ and $h$ of the recursion, respectively.
More precisely, $X^{(\ell)}$
takes the form
\begin{equation} \label{eq:xsplitting}
X^{(\ell)} := \begin{bmatrix}
X_{1,1}^{(\ell)} & \dots & X_{1, 2^{\ell}}^{(\ell)} \\
\vdots & & \vdots \\
X_{2^{\ell}, 1}^{(\ell)} & \dots & X_{2^{\ell},2^{\ell}}^{(\ell)}
\end{bmatrix},
\end{equation}
where $X^{(\ell)}_{i,j}$ solves the Sylvester equation
$A_{1, ii}^{(\ell)} X^{(\ell)}_{i,j} +
X^{(\ell)}_{i,j} A_{2,jj}^{(\ell)} = B_{i,j}^{(\ell)}$; for
$h < \ell$, we denote $X^{(h)} := X^{(\ell)} +
\delta X^{(\ell-1)} + \ldots + \delta X^{(h)}$, which solves $A_1^{(h)} X^{(h)} + X^{(h)} A_2^{(h)} = B^{(h)}$.
The matrix $\delta X^{(h)} = [\delta X^{(h)}_{i,j}]$
containing the solutions of the
update equations at level $h < \ell$ is block-partitioned
analogously
to $B^{(h)}$ and $X^{(h)}$.
The diagonal blocks $A_{1, ii}^{(h)}$ can be
in turn
split into their diagonal and off-diagonal parts as follows:
\[
A_{1, ii}^{(h)} = \begin{bmatrix}
A_{1, 2i-1, 2i-1}^{(h + 1)} \\
& A_{1, 2i, 2i}^{(h + 1)} \\
\end{bmatrix} + \begin{bmatrix}
0 & A_{1,2i-1,2i}^{(h + 1)} \\
A_{1,2i,2i-1}^{(h + 1)} & 0\\
\end{bmatrix},
\]
and the same holds for $A_{2, jj}^{(h)}$.
Then
$\delta X^{(h)}_{i,j}$ solves
$A_{1,ii}^{(h)} \delta X^{(h)}_{i,j} +
\delta X^{(h)}_{i,j} A_{2,jj}^{(h)} =
\Xi_{ij}^{(h)}$ where
\begin{equation} \label{eq:correction2levels}
\begin{split}
\Xi_{ij}^{(h)} :=
&- \begin{bmatrix}
0 & A_{1,2i-1,2i}^{(h + 1)} \\
A_{1,2i,2i-1}^{(h + 1)} & 0\\
\end{bmatrix} \begin{bmatrix}
X_{2i-1, 2i-1}^{(h+1)} & X_{2i-1, 2i}^{(h+1)} \\
X_{2i, 2i-1}^{(h+1)} & X_{2i, 2i}^{(h+1)} \\
\end{bmatrix} \\
&- \begin{bmatrix}
X_{2i-1, 2i-1}^{(h+1)} & X_{2i-1, 2i}^{(h+1)} \\
X_{2i, 2i-1}^{(h+1)} & X_{2i, 2i}^{(h+1)} \\
\end{bmatrix} \begin{bmatrix}
0 & A_{2,2j-1,2j}^{(h + 1)} \\
A_{2,2j,2j-1}^{(h + 1)} & 0\\
\end{bmatrix}.
\end{split}
\end{equation}
Since $X^{(h)}_{ij}$ solves
the Sylvester equation
\[
A_{1,ii}^{(h)} X^{(h)}_{ij} +
X^{(h)}_{ij} A_{2, jj}^{(h)} =
B^{(h)}_{ij},
\]
then, by rewriting the above equation as a linear system,
we can bound
\[
\norm{X_{ij}^{(h)}}_F \leq \frac{
\norm{B_{ij}^{(h)}}_F
}{\alpha_{1} + \alpha_{2}}.
\]
Applying this relation in Equation~\eqref{eq:correction2levels}
we get the following bound for the norm of the right-hand side
$\Xi_{ij}^{(h)}$:
\begin{align*}
\norm{\Xi_{ij}^{(h)}}_F &\leq
(\beta_{1} + \beta_2) \left\lVert
\begin{bmatrix}
X_{2i-1, 2i-1}^{(h+1)} & X_{2i-1, 2i}^{(h+1)} \\
X_{2i, 2i-1}^{(h+1)} & X_{2i, 2i}^{(h+1)} \\
\end{bmatrix}
\right\lVert_F \\
&\leq
\frac{\beta_{1} + \beta_2}{\alpha_1 + \alpha_2}
\sqrt{
\norm{B_{2i-1, 2i-1}^{(h+1)}}_F^2 + \norm{B_{2i-1, 2i}^{(h+1)}}_F^2
+ \norm{B_{2i, 2i-1}^{(h+1)}}_F^2 + \norm{B_{2i, 2i}^{(h+1)}}_F^2
} \\
&= \frac{\beta_{1} + \beta_2}{\alpha_1 + \alpha_2} \norm{B_{ij}^{(h)}}_F.
\end{align*}
We define the block matrix $\Xi^{(h)} = [ \Xi_{ij}^{(h)} ]$; collecting all the
previous relations as $(i,j)$ varies, we obtain
$
A_{1}^{(h)} \delta X^{(h)} + \delta X^{(h)} A_2^{(h)} =
\Xi^{(h)}
$ and
$\norm{\Xi^{(h)}}_F \leq \frac{\beta_{1} + \beta_2}{\alpha_1 + \alpha_2} \norm{B}_F$.
\subsubsection{Inexact arithmetic}
In a realistic scenario, the Sylvester equations for
determining $X^{(\ell)}$ and $\delta X^{(h)}$ are solved
inexactly. We make the assumption that all Sylvester
equations of the form $A_1 X + X A_2 = B$
are solved with a residual satisfying
\begin{equation} \label{eq:sylvester-relative-accuracy}
A_1 \widetilde{X} + \widetilde{X} A_2 = B + R, \qquad
\norm{R}_F \leq \epsilon \norm{B}_F.
\end{equation}
Then, the approximate solutions
computed throughout the recursion verify:
\begin{align*}
A_{1, ii}^{(\ell)} \widetilde{X}^{(\ell)}_{i,j} +
\widetilde{X}^{(\ell)}_{i,j} A_{2,jj}^{(\ell)} &= B_{i,j} ^{(\ell)}
+ R_{ij}^{(\ell)} \\
A_{1,ii}^{(h)} \delta \widetilde{X}^{(h)}_{i,j} +
\delta \widetilde{X}^{(h)}_{i,j} A_{2,jj}^{(h)} &=
\tilde{\Xi}_{ij}^{(h)} + R_{ij}^{(h)},
\end{align*}
where $\widetilde{\Xi}_{ij}^{(h)}$ is defined by
replacing $X_{ij}^{(h+1)}$ with
$\tilde{X}_{ij}^{(h+1)}$ in \eqref{eq:correction2levels}.
Thanks to our assumption on the inexact solver, we have that
$\norm{R_{ij}^{(\ell)}}_F \leq \epsilon \norm{B_{ij}^{(\ell)}}_F$;
bounding $\norm{R_{ij}^{(h)}}_F$ for $h<\ell$ is slightly more challenging, since
it depends on the accumulated inexactness.
Let us consider the matrices $R^{(h)} = [ R^{(h)}_{ij} ]$
that correspond to the residuals of the Sylvester equations
\begin{align}
\label{eq:sylv-inexact-0}
A_{1}^{(\ell)} \widetilde{X}^{(\ell)} +
\widetilde{X}^{(\ell)} A_{2}^{(\ell)} &= B^{(\ell)}
+ R^{(\ell)}, \\
\label{eq:sylv-inexact-h}
A_{1}^{(h)} \delta \widetilde{X}^{(h)} +
\delta \widetilde{X}^{(h)} A_{2}^{(h)} &=
\tilde{\Xi}^{(h)} + R^{(h)}.
\end{align}
A bound on $\norm{R^{(h)}}_F$ can be derived
by controlling the norm of $\tilde{\Xi}^{(h)}$.
\begin{lemma}
\label{lem:residuals-rh}
If the Sylvester equations generated
in Algorithm~\ref{alg:dac2d-balanced} are solved with the accuracy
prescribed in \eqref{eq:sylvester-relative-accuracy}
then $\tilde X^{(h)} := \tilde X^{(\ell)} +
\delta \tilde X^{(\ell-1)} + \ldots + \delta \tilde X^{(h)}$
satisfies
\begin{equation} \label{eq:tilde-h}
A_1^{(h)} \tilde X^{(h)} + \tilde X^{(h)}
A_2^{(h)} = B + R^{(\ell)} + \ldots + R^{(h)},
\end{equation}
where $R^{(h)}$ are the residuals of \eqref{eq:sylv-inexact-0}
and \eqref{eq:sylv-inexact-h}.
In addition, if
$\kappa \epsilon < 1$ where $\kappa := \frac{\beta_1 + \beta_2}{\alpha_1 + \alpha_2}$,
then
\[
\norm{R^{(h)}}_F \leq
\kappa \epsilon (1 + \epsilon) (1 + \kappa \epsilon)^{\ell-h-1} \norm{B}_F.
\]
\end{lemma}
\begin{proof}
We start with the proof of \eqref{eq:tilde-h} by induction over $h$.
For $h = \ell$, the claim follows by \eqref{eq:sylv-inexact-0}.
If $h < \ell$, we decompose $\tilde X^{(h)} = \tilde X^{(h+1)} +
\delta \tilde X^{(h)}$ to obtain
\begin{align*}
A_1^{(h)} \tilde X^{(h)} + \tilde X^{(h)} A_2^{(h)}
&=
A_1^{(h)} \tilde X^{(h+1)} + \tilde X^{(h+1)} A_2^{(h)}
+ A_1^{(h)} \delta \tilde X^{(h)} + \delta \tilde X^{(h)} A_2^{(h)} \\
&= \underbrace{ ( A_1^{(h)} - A_1^{(h+1)} ) \tilde X^{(h+1)} +
\tilde X^{(h+1)} ( A_2^{(h)} - A_2^{(h+1)} )}_{-\widetilde{\Xi}^{(h)}} \\
&+ A_1^{(h+1)} \tilde X^{(h+1)} + \tilde X^{(h+1)} A_2^{(h+1)}
+ \widetilde{\Xi}^{(h)} + R^{(h)},
\end{align*}
and the claim follows by the induction hypothesis.
We now show the second claim, once again, by induction. For $h = \ell$, we obtain
the result by collecting all the residuals together in a block
matrix:
\[
\norm{R_{ij}^{(\ell)}}_F \leq \epsilon \norm{B_{ij}^{(\ell)}}_F
\implies
\norm{R^{(\ell)}}_F \leq \epsilon \norm{B^{(\ell)}}_F.
\]
Since $\kappa \geq 1$,
we have $\epsilon \leq \kappa \epsilon (1 + \epsilon) (1 + \kappa \epsilon)^{-1}$,
so that the bound is satisfied.
For $h < \ell$ we have
\begin{align*}
\norm{R^{(h)}}_F &\leq \epsilon \norm{\tilde \Xi^{(h)}}_F
\leq \epsilon (\beta_1 + \beta_2) \norm{\tilde {X}^{(h+1)}}_F\\
&\leq \epsilon (\beta_1 + \beta_2) \norm{{X}^{(h+1)}}_F +
\epsilon (\beta_1 + \beta_2) \norm{\tilde {X}^{(h+1)} - X^{(h+1)}}_F.
\end{align*}
By subtracting $A_1^{(h+1)} X^{(h+1)} + X^{(h+1)} A_2^{(h+1)} = B$ from
\eqref{eq:tilde-h} we obtain
\[
A_1^{(h+1)} (\tilde {X}^{(h+1)} - X^{(h+1)}) +
(\tilde {X}^{(h+1)} - X^{(h+1)}) A_2^{(h+1)} =
R^{(\ell)} + \ldots + R^{(h+1)}.
\]
Bounding the norm of the solution of this Sylvester equation by
$\frac{1}{\alpha_1 + \alpha_2}$ times the norm of the right-hand side yields
\[
\norm{R^{(h)}}_F \leq \kappa \epsilon \left(
\norm{B}_F + \sum_{j = h+1}^{\ell} \norm{R^{(j)}}_F
\right).
\]
For $h < \ell$, by the induction step, we have
\begin{align*}
\norm{R^{(h)}}_F &\leq \kappa \epsilon \left(
\norm{B}_F + \norm{R^{(\ell)}}_F + \sum_{j = h+1}^{\ell-1} \norm{R^{(j)}}_F
\right) \\
&\leq \kappa \epsilon \left(
1 + \epsilon +
\kappa \epsilon (1 + \epsilon) \sum_{j = h+1}^{\ell-1}
(1 + \kappa \epsilon)^{\ell-j-1}
\right) \norm{B}_F \\
&= \kappa \epsilon (1 + \epsilon) \left(1 - \kappa \epsilon \frac{1 - (1 + \kappa \epsilon)^{\ell-h-1}}{\kappa \epsilon}\right) \norm{B}_F \\
&= \kappa \epsilon (1 + \epsilon) (1 + \kappa \epsilon)^{\ell-h-1} \norm{B}_F.
\end{align*}
\end{proof}
We can leverage the previous result to bound the residual of the
approximate solution $\tilde X$ returned by
Algorithm~\ref{alg:dac2d-balanced}.
\begin{lemma} \label{lem:residual}
Under the assumptions of Lemma~\ref{lem:residuals-rh},
with the additional constraint $\kappa \epsilon < \frac{2}{\ell}$,
the
approximate solution $\tilde{X} := \tilde{X}^{(0)}$ returned by
Algorithm~\ref{alg:dac2d-balanced} satisfies
\[
\norm{A_1 \tilde{X} + \tilde{X} A_2 - B}_F
\leq (\ell + 1)^2 \kappa \epsilon \norm{B}_F.
\]
\end{lemma}
\begin{proof}
In view of Lemma~\ref{lem:residuals-rh} the residual
associated with $\tilde X = \tilde{X}^{(0)}$ satisfies
$$
\norm{A_1 \widetilde{X}^{(0)} + \widetilde{X}^{(0)} A_2 - B}_F
\leq
\norm{R^{(0)}}_F+ \norm{R^{(1)}}_F+\dots+\norm{R^{(\ell)}}_F.
$$
Hence, summing the upper bounds for $ \norm{R^{(h)}}_F$ given
in Lemma~\ref{lem:residuals-rh} we obtain
\begin{align*}
\norm{A_1 \widetilde{X}^{(0)} +
\widetilde{X}^{(0)} A_2 - B}_F
&\leq \left( \frac{\kappa \epsilon (1 + \epsilon)}{1 + \kappa \epsilon}
\sum_{h = 0}^{\ell} (1 + \kappa \epsilon)^{\ell-h} \right) \norm{B}_F \\
&\leq
\left[ (1 + \kappa \epsilon)^{\ell + 1} - 1 \right]
\norm{B}_F =
\left[
\sum_{h = 1}^{\ell + 1} \binom{\ell + 1}{h} (\kappa \epsilon)^h
\right] \norm{B}_F.
\end{align*}
The assumption $\kappa \epsilon < \frac{2}{\ell}$ guarantees that the
dominant term in the sum occurs for $h = 1$, and therefore we have
\[
\norm{A_1 \widetilde{X}^{(0)} +
\widetilde{X}^{(0)} A_2 - B}_F
\leq
(\ell + 1)^2 \kappa \epsilon \norm{B}_F.
\]
\end{proof}
\subsection{Unbalanced dimensions}
\label{sec:unbalanced-2d}
In the general case, when $n_1 \gg n_2$ and we employ the same
$n_{\min}$ for both cluster trees of $A_1$ and $A_2$, we end up
with $\ell_1 > \ell_2$. We can artificially obtain $\ell_1 = \ell_2$
by adding $\ell_1 - \ell_2$ auxiliary levels on top of the cluster tree
of $A_2$. In all these new levels, we consider the trivial
partitioning $\{ 1, \ldots, n_2 \} = \{ 1, \ldots, n_2 \} \cup \emptyset$.
This choice implies that for the first $\ell_1 - \ell_2$
levels of the recursion only the first dimension is split, so that only $2$
matrix equations are generated by these recursive calls. This allows us to
extend all the results in Section~\ref{sec:error-analysis-2d} by setting
$\ell = \max\{ \ell_1, \ell_2 \}$.
In the practical implementation, for $n_1 \geq n_2$,
this approach is encoded in the following steps:
\begin{itemize}
\item As long as $n_1$ is significantly larger than $n_2$, e.g.\ $n_1\geq 2n_2$,
apply the splitting on the first mode only.
\item When $n_1$ and $n_2$ are almost equal,
apply Algorithm~\ref{alg:dac2d-balanced}.
\end{itemize}
The pseudocode describing this approach is given in Algorithm~\ref{alg:dac2d}.
\begin{algorithm}[h]
\small
\caption{}\label{alg:dac2d}
\begin{algorithmic}[1]
\Procedure{lyap2d\_d\&c}{$A_1,A_2, B, \epsilon$}
\If{$\max_i n_i\leq n_{\min}$}
\State\Return\Call{lyapnd\_diag}{$A_1,A_2, B$}
\ElsIf{$n_1 \leq 2n_2$ and $n_2 \leq 2n_1$}
\State \Return \Call{lyap2d\_d\&c\_balanced}{$A_1$, $A_2$, $B$, $\epsilon$}
\Comment{Algorithm~\ref{alg:dac2d-balanced}}
\Else
\If{$n_1 > 2n_2$}
\State Partition $B$ as $\left[ \begin{smallmatrix}
B_1 \\ B_2
\end{smallmatrix} \right]$, according to the
partitioning in $A_1^{(1)}$
\State $X_1 \gets$ \Call{lyap2d\_d\&c}{$A_{1,11}^{(1)}, A_2, B_1, \epsilon$},
\ $X_2 \gets$ \Call{lyap2d\_d\&c}{$A_{1,22}^{(1)}, A_2, B_2, \epsilon$}
\State $X^{(1)} \gets \left[
\begin{smallmatrix}
X_1 \\ X_2
\end{smallmatrix}
\right]$
\State Retrieve a low-rank factorization of
$A_1^{\mathrm{off}} X^{(1)} = UV^T$ \label{lin:mult2d}
\ElsIf{$n_2 > 2n_1$}
\State Partition $B$ as $\left[ \begin{smallmatrix}
B_1 & B_2
\end{smallmatrix} \right]$, according to the
partitioning in $A_2^{(1)}$
\State $X_1 \gets$ \Call{lyap2d\_d\&c}{$A_{1}, A_{2,11}^{(1)}, B_1, \epsilon$}, \
$X_2 \gets$ \Call{lyap2d\_d\&c}{$A_1, A_{2,22}^{(1)}, B_2, \epsilon$}
\State $X^{(1)} \gets \left[
\begin{smallmatrix}
X_1 & X_2
\end{smallmatrix}
\right]$
\State Retrieve a low-rank factorization of
$X^{(1)} A_2^{\mathrm{off}} = UV^T$
\EndIf
\State $\delta X\gets \Call{low\_rank\_sylv}{A_1, A_2, U, V, \epsilon}$
\State\Return $X^{(1)}+\delta X$
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Solving the update equation}\label{sec:low-rank}
The update equation \eqref{eq:2dcase} is
of the form
\begin{equation}\label{eq:lowrank-sylv}
A_1 \delta X+\delta XA_2 =UV^*
\end{equation}
where $A_1, A_2$ are positive definite matrices with spectra
contained in $[\alpha_1,\beta_1]$ and $[\alpha_2, \beta_2]$, respectively,
$U\in\mathbb R^{m\times k}$, $V\in\mathbb R^{n\times k}$ with
$k\ll \min\{m,n\}$. Under these assumptions, the singular values
$\sigma_j(\delta X)$ of the solution $\delta X$ of
\eqref{eq:lowrank-sylv} decay rapidly to zero \cite{beckermann2017,penzl2000eigenvalue}.
More specifically, it holds
$$
\sigma_{1+jk}(\delta X)\leq \sigma_1(\delta X)Z_j([\alpha_1,\beta_1], [-\beta_2, -\alpha_2]),\qquad Z_j(E, F):=\min_{r(z)\in\mathcal R_{j,j}}\frac{\max_E |r(z)|}{\min_F|r(z)|}
$$
where $\mathcal R_{j,j}$ is the set of rational functions of the form
$r(z)=p(z)/q(z)$ having both numerator and denominator of degree at most $j$.
The optimization problem associated with $Z_j(E,F)$ is known in the literature as
\emph{third Zolotarev problem} and explicit estimates for the decay rate of $Z_j(E,F)$, as $j$ increases,
are available when $E$ and $F$ are disjoint real intervals \cite{beckermann2017}.
In particular,
we will make use of the following result.
\begin{lemma}[\protect{\cite[Corollary 4.2]{beckermann2017}}] \label{lem:zol}
Let $E= [\alpha_1,\beta_1]\subset \mathbb R^+$, $F=[-\beta_2, -\alpha_2]\subset\mathbb R^-$ be non-empty
real intervals, then
\begin{equation}\label{eq:zol}
Z_j(E,F)\leq 4\exp\left(\frac{\pi^2}{2\log(16\gamma)}
\right)^{-2j},\qquad \gamma:=\frac{
(\alpha_1+\beta_2)(\alpha_2+\beta_1)
}{
(\alpha_1+\alpha_2)(\beta_1+\beta_2)
}.
\end{equation}
\end{lemma}
Lemma~\ref{lem:zol} guarantees the existence of accurate low-rank approximations
of $\delta X$. In this setting, the extremal rational function for
$Z_j(E,F)$ is explicitly known, and closed-form expressions for its zeros and
poles are available\footnote{Given $E= [\alpha_1,\beta_1]\subset \mathbb R^+$,
$F=[-\beta_2, -\alpha_2]\subset\mathbb R^-$
an expression in terms of elliptic functions
for the zeros and poles
of the extremal rational function is given
in \cite[Eq. (12)]{beckermann2017}. Our implementation is based on
the latter.}. We will see in the next sections that this enables
us to design approximation methods whose convergence rate matches the one
in \eqref{eq:zol}.
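As a quick numerical illustration, the decay predicted by \eqref{eq:zol} can be evaluated as in the following MATLAB sketch (the spectral intervals are sample values chosen by us):
\begin{verbatim}
% Evaluate the upper bound of Lemma (Zolotarev) on Z_j(E, F),
% with E = [a1, b1] and F = [-b2, -a2].
a1 = 1e-2; b1 = 4; a2 = 1e-2; b2 = 4;    % sample spectral intervals
gamma = (a1 + b2) * (a2 + b1) / ((a1 + a2) * (b1 + b2));
j = (1:20)';
bound = 4 * exp(pi^2 / (2 * log(16 * gamma))).^(-2 * j);
semilogy(j, bound, '-o'); xlabel('j'); ylabel('bound on Z_j(E, F)');
\end{verbatim}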
\subsubsection{A further property of Zolotarev functions}
\label{sec:zolotarev-properties}
We now make an observation that will be relevant for the
error analysis in Section~\ref{sec:tensors}.
Consider the Zolotarev problem associated with
the symmetric configuration $[\alpha, \beta] \cup [-\beta, -\alpha]$,
and let us indicate with $p_1, \ldots, p_s$ and $q_1, \ldots, q_s$ the
zeros and poles of the optimal Zolotarev rational function. The symmetry
of the configuration yields $p_i = -q_i$, and in turn
the bound $|z - p_i| / |z - q_i| \leq 1$ for all
$z \in [\alpha, \beta]$.
The last inequality also holds for nonsymmetric spectral configurations
$[\alpha_1, \beta_1] \cup [-\beta_2, -\alpha_2]$. Indeed, the
minimax problem is invariant under M\"obius transformations, and
the property holds for the evaluations of rational functions on any
transformed domains. Since the optimal rational Zolotarev
function on $[\alpha_1, \beta_1] \cup [-\beta_2, -\alpha_2]$ can be
obtained by remapping the configuration into a symmetric one,
the inequality holds on $[\alpha_1,\beta_1]$.
\subsubsection{Alternating direction implicit method}
The ADI method iteratively approximates the solution
of \eqref{eq:lowrank-sylv} with the following two-step scheme,
for given scalar parameters $p_j, q_j \in \mathbb C$:
\begin{align*}
(A_1-q_{s+1}I)\delta X_{s+\frac 12}&= UV^*-\delta X_s(A_2+q_{s+1}I),\\
\delta X_{s+1}(A_2+p_{s+1}I)&=UV^*-(A_1-p_{s+1}I)\delta X_{s+\frac 12}.
\end{align*}
The initial guess is $\delta X_0=0$, and it is easy to see that
$\mathrm{rank}(\delta X_s)\leq sk$.
This property is exploited in a specialized version of ADI, which is
called \emph{factored ADI} (fADI) \cite{benner2009adi}; the latter
computes a factorized form of the update
$\Delta_s := \delta X_s-\delta X_{s-1} = (q_s - p_s) W_s Y_s^*$
with the following recursion:
\begin{align} \label{eq:adi-def}
\begin{cases}
W_{1} = (A_1 - q_1 I)^{-1} U \\
W_{j+1} = (A_1 - q_{j+1}I)^{-1} (A_1 - p_{j}I) W_j \\
\end{cases} \quad
\begin{cases}
Y_{1} = -(A_2 + p_1 I)^{-1} V \\
Y_{j+1} = (A_2 + p_{j+1}I)^{-1} (A_2 + q_{j}I) Y_j \\
\end{cases}.
\end{align}
The approximate solution $\delta X_s$ after $s$ steps of fADI takes
the form
\[
\delta X_s = \Delta_1 + \ldots + \Delta_s = \sum_{j = 1}^s (q_j - p_j) W_j Y_j^*.
\]
Observe that the most expensive operations when executing $s$ steps of fADI are the solution of
$s$ shifted linear systems with the matrix $A_1$ and the same
amount with the matrix $A_2$. Moreover, the two sequences
can be generated independently.
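For reference, a dense-arithmetic MATLAB sketch of the recursion \eqref{eq:adi-def} is reported below; in our setting the shifted solves are of course performed by exploiting the HSS or Kronecker-sum structure of $A_1$ and $A_2$, and the shift vectors \texttt{p}, \texttt{q} are assumed to be given.
\begin{verbatim}
function [W, Y, D] = fadi(A1, A2, U, V, p, q)
% FADI  s = numel(p) steps of factored ADI for A1*X + X*A2 = U*V'.
% The approximation is X_s = sum_j D(j) * W{j} * Y{j}'.
s = numel(p); n1 = size(A1, 1); n2 = size(A2, 1);
W = cell(s, 1); Y = cell(s, 1); D = zeros(s, 1);
W{1} = (A1 - q(1) * eye(n1)) \ U;
Y{1} = -(A2 + p(1) * eye(n2)) \ V;
D(1) = q(1) - p(1);
for j = 1:s-1
    W{j+1} = (A1 - q(j+1) * eye(n1)) \ ((A1 - p(j) * eye(n1)) * W{j});
    Y{j+1} = (A2 + p(j+1) * eye(n2)) \ ((A2 + q(j) * eye(n2)) * Y{j});
    D(j+1) = q(j+1) - p(j+1);
end
end
\end{verbatim}
Note that the two sequences \texttt{W} and \texttt{Y} are indeed generated independently, and the factors are kept separate rather than assembling $\delta X_s$ explicitly.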
The choice of the parameters $p_j, q_j$ is crucial to control the
convergence of the method, as highlighted by the explicit expressions
of the residual and the approximation error after $s$ steps
\cite{benner2009adi}:
\begin{align}
\label{eq:err-adi-full}
\delta X - \delta X_s &= r_s(A_1) \delta X r_s(-A_2)^{-1} \\
\label{eq:res-adi-full}
A_1\delta X_s + \delta X_s A_2 - UV^* &=
-r_s(A_1) UV^* r_s(-A_2)^{-1},
\end{align}
where $r_s(z) = \prod_{j=1}^s\frac{z-p_j}{z-q_j}$
and the second identity is obtained by applying the operator
$X \mapsto A_1 X + XA_2$ to $\delta X_s - \delta X$.
Taking norms yields the following upper bound for the approximation error:
\begin{align}
\label{eq:err-adi}
\norm{\delta X_s - \delta X}_F &\leq \frac{
\max_{z\in[\alpha_1,\beta_1]}|r_s(z)|
}{
\min_{z\in[-\beta_2,-\alpha_2]}|r_s(z)|
}
\norm{\delta X}_F,
\\
\label{eq:res-adi}
\norm{A_1 \delta X_s + \delta X_s A_2 - UV^*}_F &
\leq \frac{\max_{z\in[\alpha_1,\beta_1]}|r_s(z)|}{\min_{z\in[-\beta_2,-\alpha_2]}|r_s(z)|}\norm{UV^*}_F.
\end{align}
Inequalities \eqref{eq:err-adi} and
\eqref{eq:res-adi} guarantee
that if the shift parameters $p_j,q_j$ are chosen as the zeros and poles
of the extremal rational function for
$Z_s([\alpha_1,\beta_1], [-\beta_2,-\alpha_2])$,
then the approximation error and the residual norm decay
(at least) as prescribed by Lemma~\ref{lem:zol}. This allows us to
give an a priori bound on the number of steps required to
achieve a target accuracy $\epsilon$.
\begin{lemma} \label{lem:adi-res}
Let $\epsilon> 0$, $\delta X$ be the solution of \eqref{eq:lowrank-sylv}
and $\delta X_s$ the solution returned by applying the
fADI method to \eqref{eq:lowrank-sylv} with parameters
chosen as the zeros and poles of the extremal rational
function for $Z_s([\alpha_1,\beta_1],[-\beta_2,-\alpha_2])$. If
\begin{equation}\label{eq:adi-s}
s\geq \frac{1}{\pi^2}\log\left(\frac{4}{\epsilon}\right)\log\left(
16\frac{(\alpha_1+\beta_2)(\alpha_2+\beta_1)}{(\alpha_1+\alpha_2)(\beta_1+\beta_2)}
\right),
\end{equation}
then
\begin{align*}
\norm{A_1\delta X_s+\delta X_s A_2-UV^*}_F&\leq \epsilon \norm{UV^*}_F.
\end{align*}
\end{lemma}
\begin{proof}
Combining Lemma~\ref{lem:zol} and inequality \eqref{eq:res-adi} yields the claim.
\end{proof}
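In practice, given the spectral bounds $\alpha_i$, $\beta_i$ and a target accuracy, the number of fADI steps prescribed by \eqref{eq:adi-s} can be computed directly; a small MATLAB sketch (with sample values chosen by us) follows.
\begin{verbatim}
% Number of fADI steps guaranteeing a relative residual below eps_target,
% according to the a priori bound of the previous lemma.
a1 = 1e-3; b1 = 4; a2 = 1e-3; b2 = 4; eps_target = 1e-8;
gamma = (a1 + b2) * (a2 + b1) / ((a1 + a2) * (b1 + b2));
s_adi = ceil(log(4 / eps_target) * log(16 * gamma) / pi^2);
\end{verbatim}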
\subsubsection{Rational Krylov (RK) method}
Another approach for solving \eqref{eq:lowrank-sylv} is
to look for a solution in a well-chosen tensorization
of low-dimensional subspaces. Common choices are rational Krylov
subspaces of the form
$\mathcal K_{s,A_1}:=\mathrm{span}\{U, (A_1-p_1I)^{-1}U, \dots, (A_1-p_sI)^{-1}U\}$,
$\mathcal K_{s,A_2}:=\mathrm{span}\{V, (A_2^T+q_1I)^{-1}V, \dots, (A_2^T+q_sI)^{-1}V\}$;
more specifically, one considers an approximate solution
$\delta X_s= Q_{s,A_1}\delta YQ_{s,A_2}^*$ where
$Q_{s, A_1},Q_{s,A_2}$ are orthonormal bases of
$\mathcal K_{s,A_1},\mathcal K_{s,A_2}$,
respectively, and $\delta Y $ solves the projected equation
\begin{equation}\label{eq:proj-sylv}
(Q_{s,A_1}^*A_1 Q_{s,A_1}) \delta Y +\delta Y(Q_{s,A_2}^* A_2 Q_{s,A_2}) = Q_{s,A_1}^*UV^*Q_{s,A_2}.
\end{equation}
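A naive dense-arithmetic MATLAB sketch of this projection step follows (the bases are built directly from the definition above, and the projected equation is solved with MATLAB's \texttt{sylvester}); an efficient implementation would instead expand and orthogonalize the bases incrementally.
\begin{verbatim}
% Rational Krylov solution of A1*X + X*A2 = U*V', for given shifts p, q.
n1 = size(A1, 1); n2 = size(A2, 1);
K1 = U; K2 = V;
for j = 1:numel(p)
    K1 = [K1, (A1  - p(j) * eye(n1)) \ U];
    K2 = [K2, (A2' + q(j) * eye(n2)) \ V];
end
Q1 = orth(K1); Q2 = orth(K2);
% Projected Sylvester equation and lifted approximate solution.
dY = sylvester(Q1' * A1 * Q1, Q2' * A2 * Q2, Q1' * U * (V' * Q2));
dX = Q1 * dY * Q2';
\end{verbatim}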
Similarly to fADI, when the parameters $q_j,p_j$ are chosen as the zeros
and poles of the extremal rational function for
$Z_s([\alpha_1,\beta_1],[-\beta_2,-\alpha_2])$
then the residual of the approximation can be related to
\eqref{eq:zol} \cite[Theorem 2.1]{beckermann2011}:
\begin{equation}\label{eq:residual-sylv}
\norm{A_1 \delta X_s+\delta X_sA_2-UV^*}_F
\leq 2
\left(1+\frac{\beta_1+\beta_2}{\alpha_1+\alpha_2}\right)
Z_s([\alpha_1,\beta_1], [-\beta_2, -\alpha_2]) \norm{UV^*}_F.
\end{equation}
Based on \eqref{eq:residual-sylv} we can state
the analogue of Lemma~\ref{lem:adi-res} for
the rational Krylov method.
\begin{lemma} \label{lem:rk-res}
Let $\epsilon> 0$, $\delta X$ be the solution of \eqref{eq:lowrank-sylv}
and $\delta X_s$ the solution returned by applying the
rational Krylov method to \eqref{eq:lowrank-sylv} with shifts
chosen as the zeros and poles of the extremal rational function
for $Z_s([\alpha_1,\beta_1],[-\beta_2 ,-\alpha_2])$. If
\begin{equation}\label{eq:rk-s}
s\geq \frac{1}{\pi^2}\log\left(\frac{
8(\alpha_1+\alpha_2+\beta_1+\beta_2)
}{\epsilon(\alpha_1+\alpha_2)}\right)
\log\left(16\frac{(\alpha_1+\beta_2)(\alpha_2+\beta_1)}{(\alpha_1+\alpha_2)(\beta_1+\beta_2)}\right),
\end{equation}
then
\begin{align*}
\norm{A_1 \delta X_s+\delta X_s A_2-UV^*}_F&\leq \epsilon \norm{UV^*}_F.
\end{align*}
\end{lemma}
\subsection{Error analysis for Algorithm~\ref{alg:dac2d}}
\label{sec:error-analysis-2d}
In Section~\ref{sec:low-rank} we have discussed two possible
implementations of \textsc{low\_rank\_sylv} in Algorithm~\ref{alg:dac2d}
that return an approximate solution of the update equation, with the
residual norm controlled by the choice of the parameter $s$.
This allows us to use Lemma~\ref{lem:residual} to bound
the error in Algorithm~\ref{alg:dac2d}.
\begin{theorem}\label{thm:2d-accuracy}
Let $A_1, A_2$ be symmetric positive definite HSS matrices
with cluster trees of depth at most $\ell$,
and spectra
contained in $[\alpha_1, \beta_1]$ and $[\alpha_2, \beta_2]$,
respectively.
Let $\tilde X$ be the approximate solution
of \eqref{eq:sylv2d} returned by Algorithm~\ref{alg:dac2d}
where \textsc{low\_rank\_sylv} is either fADI or RK
and uses the $s$ zeros and poles of the extremal rational function
for $Z_s([\alpha_{1},\beta_{1}],[-\beta_{2}, -\alpha_{2}])$ as input
parameters.
If $\epsilon > 0$ and $s$ is
such that $s \geq s_{\mathrm{ADI}}$ (when
\textsc{low\_rank\_sylv} is fADI) or
$s \geq s_{\mathrm{RK}}$ (when \textsc{low\_rank\_sylv} is RK)
where \\
\resizebox{.99\textwidth}{!}{
\vbox{
\begin{align*}
s_{\mathrm{ADI}} &=
\frac{1}{\pi^2}\log\left(4\frac{(\beta_{1}+\beta_{2})}{\epsilon(\alpha_{1}+\alpha_{2})}\right)\log\left(16\frac{(\alpha_{1}+\beta_{2})(\alpha_{2}+\beta_{1})}{(\alpha_{1}+\alpha_{2})(\beta_{1}+\beta_{2})}\right), \\
s_{\mathrm{RK}} &= \frac{1}{\pi^2}\log\left(8\frac{
(\beta_{1}+ \beta_{2})(\alpha_{1}+\alpha_{2}+\beta_{1}+\beta_{2})
}{\epsilon(\alpha_{1}+\alpha_{2})^2}\right)\log\left(16\frac{(\alpha_{1}+\beta_{2})(\alpha_{{2}}+\beta_{1})}{(\alpha_{1}+\alpha_{2})(\beta_{1}+\beta_{2})}\right),
\end{align*}}} \\
and
Algorithm~\ref{alg:diag} solves the Sylvester equations
at the base of the recursion with a residual norm bounded
by $\epsilon$ times the norm of the right-hand side, then
the computed solution satisfies:
\begin{align}\label{eq:2d-res}
\norm{A_1 \widetilde{X} + \widetilde{X} A_2- B}_F&\leq
(\ell + 1)^2 \frac{\beta_1+\beta_2}{\alpha_1+\alpha_2} \epsilon\norm{B}_F.
\end{align}
\end{theorem}
\begin{proof}
In view of Lemma~\ref{lem:adi-res} (for fADI)
and Lemma~\ref{lem:rk-res} (for RK), the assumption
on $s$ guarantees that
the norm of the residual of each update
equation is bounded by $\epsilon$ times the norm of its right-hand side.
In addition, the equations at the base of the recursion are assumed to be solved
at least as accurately as the update equations, and the claim
follows by applying Lemma~\ref{lem:residual}.
\end{proof}
We remark that, usually, the cluster trees for $A_1$ and $A_2$
are chosen by splitting the indices in a balanced way; this implies
that their depths satisfy $\ell_i = \mathcal O(\log(n_i / n_{\min}))$.
\begin{remark}
We note that $s_{\mathrm{RK}}$ in Theorem~\ref{thm:2d-accuracy}
is larger than $s_{\mathrm{ADI}}$; under the assumption
$\alpha_{1}+\alpha_{2}+\beta_{1}+\beta_{2}\approx \beta_{1}+\beta_{2}$,
we have that
\[
s_{\mathrm{RK}} - s_{\mathrm{ADI}} \approx \frac{2}{\pi^2}
\log \left(2 \frac{\beta_{1} + \beta_{2}}{\alpha_{1} + \alpha_{2}}\right)
\log\left(16\frac{(\alpha_{1}+\beta_{2})(\alpha_{2}+\beta_{1})}{(\alpha_{1}+\alpha_{2})(\beta_{1}+\beta_{2})}\right).
\]
In practice, it is often observed that the rational Krylov method requires fewer shifts than fADI
to attain a certain accuracy, because its convergence is linked
to a rational function which is automatically optimized over a larger
space \cite{beckermann2011}.
\end{remark}
\subsubsection{The case of M-matrices}
The term $\kappa$ in the bound of Theorem~\ref{thm:2d-accuracy}
arises from controlling the norms of $\Xi^{(h)}$
and $\tilde \Xi^{(h)} - \Xi^{(h)}$. In the special
case where $A_j$ are positive definite M-matrices this can be reduced
to $\sqrt{\kappa}$ as shown in the following result.
\begin{lemma} \label{lem:m-matrices}
Let $A_1, A_2$ be symmetric positive definite $M$-matrices. Then,
the right-hand sides $\Xi^{(h)}$ of the intermediate Sylvester
equations solved in exact arithmetic in Algorithm~\ref{alg:dac2d}
satisfy
\[
\norm{\Xi^{(h)}}_F \leq \sqrt{\kappa} \norm{B}_F,
\qquad
\norm{\tilde \Xi^{(h)} - \Xi^{(h)}}_F \leq \sqrt{\kappa}
\norm{R^{(\ell)} + \ldots + R^{(h+1)}}_F,
\]
where $\kappa = (\beta_1 + \beta_2) / (\alpha_1 + \alpha_2)$.
\end{lemma}
\begin{proof}
We note that $\Xi_{ij}^{(h)}$ can be written
as $\Xi_{ij}^{(h)} = \mathcal N \mathcal M^{-1} B_{ij}^{(h)}$,
where $\mathcal N$ and $\mathcal M$ are the operators
\begin{align*}
\mathcal M Y &:= \begin{bmatrix}
A_{1, 2i-1, 2i-1}^{(h + 1)} \\
& A_{1, 2i, 2i}^{(h + 1)} \\
\end{bmatrix} Y + Y \begin{bmatrix}
A_{2, 2j-1, 2j-1}^{(h + 1)} \\
& A_{2, 2j, 2j}^{(h + 1)} \\
\end{bmatrix} \\
\mathcal N Y &:= -\begin{bmatrix}
& A_{1, 2i-1, 2i}^{(h + 1)} \\
A_{1, 2i, 2i-1}^{(h + 1)} \\
\end{bmatrix} Y - Y \begin{bmatrix}
& A_{2, 2i-1, 2i}^{(h + 1)} \\
A_{2, 2i, 2i-1}^{(h + 1)} \\
\end{bmatrix}.
\end{align*}
In addition, $\mathcal M - \mathcal N =
I \otimes A^{(h)}_{1,ii} + A^{(h)}_{2,jj} \otimes I$ is a
positive definite M-matrix, and so is $\mathcal M$. In particular,
the splitting $\mathcal M - \mathcal N$ is regular, and therefore
$\rho(\mathcal M^{-1} \mathcal N) < 1$. Hence,
\begin{align*}
\norm{\Xi_{ij}^{(h)}}_F &\leq
\norm{\mathcal N \mathcal M^{-1}}_2 \norm{B_{ij}^{(h)}}_F \\
&\leq
\norm{\mathcal M^{\frac 12}}_2
\norm{\mathcal M^{-\frac 12} \mathcal N \mathcal M^{-\frac 12}}_2
\norm{\mathcal M^{-\frac 12}}_2
\norm{B_{ij}^{(h)}}_F
\leq \sqrt{\kappa} \norm{B_{ij}^{(h)}}_F,
\end{align*}
where we have used that the matrix $\mathcal M^{-\frac 12} \mathcal N \mathcal M^{-\frac 12}$
is symmetric and similar to $\mathcal N \mathcal M^{-1}$, and therefore has both
spectral radius and spectral norm bounded by $1$.
For the second relation we have
\[
\tilde \Xi^{(h)} - \Xi^{(h)} =
-(A_1^{(h)} - A_1^{(h+1)}) (\tilde X^{(h+1)} - X^{(h+1)}) -
(\tilde X^{(h+1)} - X^{(h+1)}) (A_2^{(h)} - A_2^{(h+1)}),
\]
and $\tilde X^{(h+1)} - X^{(h+1)}$ solves the Sylvester equation
(see Lemma~\ref{lem:residuals-rh})
\[
A_1^{(h+1)} (\tilde X^{(h+1)} - X^{(h+1)})
+ (\tilde X^{(h+1)} - X^{(h+1)}) A_2^{(h+1)} =
R^{(\ell)} + \ldots + R^{(h+1)}.
\]
Following the same argument used for the first point we obtain
the claim.
\end{proof}
\begin{corollary}\label{cor:accuracy-mmatrices}
Under the same hypotheses and settings of Theorem~\ref{thm:2d-accuracy},
with the additional assumption that $A_1, A_2$ are M-matrices,
Algorithm~\ref{alg:dac2d} computes an approximate solution
$\tilde X$ that satisfies
\begin{align}\label{eq:2d-res-m}
\norm{A_1 \widetilde{X} + \widetilde{X} A_2- B}_F&\leq
(\ell + 1)^2 \sqrt{ \frac{\beta_1+\beta_2}{\alpha_1+\alpha_2} } \epsilon\norm{B}_F.
\end{align}
\end{corollary}
\subsubsection{Validation of the bounds}
We now verify the dependency of the final residual norm on the condition number of the problem, to inspect the tightness of the behavior predicted by Theorem~\ref{thm:2d-accuracy} and Corollary~\ref{cor:accuracy-mmatrices}. We consider two numerical tests concerning the solution of Lyapunov equations of the form $A_{(i)}X+XA_{(i)}=C$, where the matrices $A_{(i)}$ have size $n=256$ and condition numbers increasing with $i$; the minimal block size is set to $n_{\min}=32$.
In the first example we generate $A_{(i)}= QD_{(i)}Q^*$ where
\begin{itemize}
\item $D_{(i)}= D^{p_i}$ is the $p_i$th power of the diagonal matrix
$D$ containing the eigenvalues of the 1D discrete Laplacian, i.e.
$2+2\cos(\pi j /(n+1))$, $j=1,\dots, n$. The $p_i$ are chosen with
a uniform sampling of $[1, 2.15]$, so that the condition numbers range
approximately between $10^4$ and $10^9$.
\item $Q$ is the Q factor of the QR factorization of a randomly generated matrix with lower bandwidth equal
to $8$, obtained with the MATLAB command \texttt{[Q, \textasciitilde ] = qr(triu(randn(n), -8))}. This choice
ensures that $A_{(i)}$ is SPD and HSS with HSS rank bounded by $8$~\cite{vandebril2007matrix}.
\item $C=QSQ^*$, where $Q$ is the matrix defined in the previous point, and $S$ is diagonal with entries
$S_{ii}= \left(\frac{i-1}{n-1}\right)^{10}$. The latter choice gives more weight to the components
of the right-hand side aligned with the eigenvectors associated with the smallest eigenvalues of the Sylvester operator. We note that this
is helpful to trigger the worst-case behavior of the residual norm.
\end{itemize}
For each value of $i$ we have performed $100$ runs of Algorithm~\ref{alg:dac2d}, using fADI as
\textsc{low\_rank\_sylv} with threshold $\epsilon=10^{-6}$. The measured residual norms
$\norm{A_{(i)}\widetilde X+\widetilde XA_{(i)}-C}_F/\norm{C}_F$ are reported in the left part of
Figure~\ref{fig:accuracy}. We remark that the growth of the residual norm appears to be proportional
to $\sqrt{\kappa}$, suggesting that the bound from Theorem~\ref{thm:2d-accuracy} is pessimistic.
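For reproducibility, one instance of the first test family, following the recipe above, can be sketched in MATLAB as:
\begin{verbatim}
% One instance of the first test family: A = Q*D^p*Q', C = Q*S*Q'.
n = 256; p = 1.5;                       % p is sampled uniformly in [1, 2.15]
[Q, ~] = qr(triu(randn(n), -8));        % orthogonal factor with lower bandwidth 8
d = 2 + 2 * cos(pi * (1:n)' / (n + 1)); % eigenvalues of the 1D discrete Laplacian
A = Q * diag(d.^p) * Q';                % SPD, HSS rank bounded by 8
S = ((0:n-1)' / (n - 1)).^10;           % weights emphasizing the smallest eigenvalues
C = Q * diag(S) * Q';                   % right-hand side
\end{verbatim}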
\begin{figure}
\centering
\begin{tikzpicture}
\begin{loglogaxis}[legend pos = north west, xlabel = $\kappa$, width=.5\linewidth, ymax=1e-1, ymin=2e-8, title={General SPD}]
\addplot[only marks, mark=x, red] table {test_accuracy.dat};
\addplot[domain = 1e4:4e9, dashed, blue]{1e-7 * sqrt(x)};
\legend{Res.\ norm, $\mathcal O(\sqrt{\kappa})$}
\end{loglogaxis}
\end{tikzpicture}~\begin{tikzpicture}
\begin{loglogaxis}[legend pos = north west, xlabel = $\kappa$, width=.5\linewidth, ymax=1e-1, ymin=2e-8, title={SPD M-matrix}]
\addplot[only marks, mark=x, red] table {test_accuracy_mmatrix.dat};
\legend{Res.\ norm}
\end{loglogaxis}
\end{tikzpicture}
\caption{Residual norm behavior for Algorithm~\ref{alg:dac2d}, with respect to the conditioning of the problem. On the left, the coefficients of the matrix equation are generic HSS SPD matrices; on the right, the coefficients have the additional property of being M-matrices.}\label{fig:accuracy}
\end{figure}
In the second test, the matrices $A_{(i)}$ are chosen as shifted 1D discrete Laplacians, where the shift is selected so as to match the range of condition numbers of the first experiment; note that the matrices $A_{(i)}$ are SPD M-matrices with HSS rank 1 (they are tridiagonal). The right-hand side $C$ is chosen as before, by replacing $Q$ with the eigenvector matrix of the 1D discrete Laplacian. Corollary~\ref{cor:accuracy-mmatrices} would predict an $\mathcal O(\sqrt{\kappa})$ growth for the residual norms; however, as shown in the right part of Figure~\ref{fig:accuracy}, the residual norms seem unaffected by the condition number of the problem.
\begin{remark}
The examples reported in this section have been chosen to display the worst-case behavior of the residual norms; in the typical case, for instance by choosing \texttt{C = randn(n)}, the influence of the condition number on the residual norms is hardly visible even in the case of non-M-matrix coefficients.
\end{remark}
\subsection{Complexity of Algorithm~\ref{alg:dac2d}}
In order to simplify the complexity analysis of Algorithm~\ref{alg:dac2d}
we assume that $n_1=2^{\ell_1} n_{\min}$
and $n_2=2^{\ell_2} n_{\min}$ and
that $A_1,A_2$ are HSS matrices of HSS rank $k$, with a partitioning
of depth $\ell_1, \ell_2$ obtained by halving the dimension at every level.
We start by considering the simpler case $n := n_1 = n_2$ and therefore
$\ell = \ell_1 = \ell_2$.
We remark that given $s\in\mathbb N$, executing $s$ steps
of fADI or RK for solving \eqref{eq:reshape-update}
with size $n \times n$, requires
\begin{align}\label{eq:sylv-cost}
\mathcal C_{ADI}(n, s, k) &= \mathcal O(k^2sn),&
\mathcal C_{RK}(n, s, k) &=\mathcal O(k^2s^2n),
\end{align}
flops, respectively. Indeed, each iteration of fADI involves the solution
of two block linear systems with $2k$ columns each and an HSS matrix
coefficient; this is performed by computing two ULV factorizations
($\mathcal O(k^2n)$, see \cite{xia2010fast}) and solving linear
systems with triangular and unitary HSS matrices ($\mathcal O(kn)$
for each of the $2k$ columns). The cost analysis of rational Krylov
is analogous, but it is dominated by reorthogonalizing
the basis at each iteration; this requires $\mathcal O(k^2jn)$ flops at
iteration $j$, for $j=1,\dots,s$.
Combining these ingredients yields the following.
\begin{theorem}\label{thm:2d-complexity}
Let $A_1,A_2\in\mathbb R^{n\times n}$, $n=2^\ell n_{\min}$, and assume
that $A_1,A_2$ are HSS matrices of HSS rank $k$, with a partitioning
of depth $\ell$ obtained by halving the dimension at every level.
Then, Algorithm~\ref{alg:dac2d-balanced} requires:
\begin{itemize}
\item[$(i)$] $\mathcal O((k\log(n)+n_{\min}+ sk^2)n^2)$ flops
if \textsc{low\_rank\_sylv} implements $s$ steps of the fADI method,
\item[$(ii)$] $\mathcal O((k\log(n)+n_{\min}+ s^2k^2)n^2)$ flops if \textsc{low\_rank\_sylv} implements $s$ steps of the rational Krylov method.
\end{itemize}
\end{theorem}
\begin{proof}
We only prove $(i)$ since $(ii)$ is obtained with an analogous argument.
Let us analyze the cost of Algorithm~\ref{alg:dac2d-balanced} at level $j$ of the
recursion. For $j=\ell$ it solves $2^\ell \cdot 2^\ell$ Sylvester equations
of size $n_{\min}\times n_{\min}$ by means of Algorithm~\ref{alg:diag};
this requires $\mathcal O(4^\ell\cdot n_{\min}^3)=\mathcal O(n_{\min}n^2)$
flops. At level $j<\ell$, $4^j$ Sylvester equations of size
$\frac{n}{2^j}\times \frac{n}{2^j}$ and right-hand side of rank
(at most) $2k$ are solved via fADI. According to \eqref{eq:sylv-cost},
the overall cost of these is $\mathcal O(2^jk^2sn)$. Finally, we need
to evaluate $4^j$ multiplications at line~\ref{lin:mult2d} which yields
a cost $\mathcal O(4^j k(n/2^j)^2)= \mathcal O(kn^2)$. Summing over
all levels $j=0,\dots,\ell-1$ we get
$\mathcal O(\sum_{j=0}^{\ell-1} (kn^2+2^jk^2sn))= \mathcal O(\ell kn^2 +2^{\ell}k^2s n)$. Noting that $\ell=\mathcal O(\log(n))$ provides the claim.
\end{proof}
We now consider the cost of Algorithm~\ref{alg:dac2d} in the more general case $n_1 \neq n_2$. Without loss of generality, we assume
$n_1=2^{\ell_1}n_{\min}> 2^{\ell_2}n_{\min}=n_2$;
Algorithm~\ref{alg:dac2d} begins with $\ell_1-\ell_2$ splittings
on the first mode. This generates $2^{\ell_1-\ell_2}$ subproblems
of size $n_2\times n_2$, $2^{\ell_1-\ell_2}-1$ calls to
\textsc{low\_rank\_sylv} and $2^{\ell_1-\ell_2}-1$ multiplications
with a block vector at line~\ref{lin:mult2d}. In particular, we have
$2^j$ calls to \textsc{low\_rank\_sylv} for problems of size
$n_1/2^j\times n_2$, $j=0,\dots, \ell_1-\ell_2-1$; this yields
an overall cost of $\mathcal O((\ell_1-\ell_2)sk^2n_1)$
and $\mathcal O((\ell_1-\ell_2)s^2k^2n_1)$ when fADI and rational Krylov
are employed, respectively. Analogously, the multiplications
at line~\ref{lin:mult2d} of this initial phase require
$\mathcal O((\ell_1-\ell_2)kn_1n_2)$ flops.
Summing these costs to the estimate provided by Theorem~\ref{thm:2d-complexity}, multiplied by $2^{\ell_1-\ell_2}$, yields the following corollary.
\begin{corollary}\label{cor:2d-complexity-diff}
Under the assumptions of Theorem~\ref{thm:2d-complexity}, except that
$n_1=2^{\ell_1}n_{\min}\geq n_2=2^{\ell_2}n_{\min}$ with
$n_2\geq \log(n_1/n_2)$, Algorithm~\ref{alg:dac2d} costs:
\begin{itemize}
\item[$(i)$] $\mathcal O((k\log(n_1)+n_{\min}+ sk^2)n_1n_2)$ if \textsc{low\_rank\_sylv} implements $s$ steps of the fADI method,
\item[$(ii)$] $\mathcal O((k\log(n_1)+n_{\min}+ s^2k^2)n_1n_2)$ if \textsc{low\_rank\_sylv} implements $s$ steps of the rational Krylov method.
\end{itemize}
\end{corollary}
\section{Tensor Sylvester equations} \label{sec:tensors}
We now proceed to describe the procedure sketched in the
introduction for $d > 2$. Initially, we assume that all
dimensions
are equal,
so that the splitting is applied to all modes at every
step of the recursion.
We identify two major differences with respect to the
case $d = 2$, both concerning the update equation \eqref{eq:update-all}:
\begin{equation*} \label{eq:update-all2}
\delta \T X\times_1 A_1+\delta\T X\times_2 A_2+\dots+\delta\T X\times_d A_d
=- \sum_{t = 1}^d \T X^{(1)} \times_t A_t^{\mathrm{off}}.
\end{equation*}
First, the latter equation cannot be
solved directly with a single call to
\textsc{low\_rank\_sylv}, since the right-hand side
does not have a low-rank matricization. However,
the $t$-mode matricization of
the term $\T X^{(1)} \times_t A_t^{\mathrm{off}}$
has rank bounded by $k$ for $t = 1, \ldots, d$.
Therefore,
the update equation can be addressed by
$d$ separate calls to the low-rank solver,
by splitting the right-hand side, and matricizing the $t$th term
in a way that separates the mode $t$ from the rest, as follows:
\begin{equation} \label{eq:matr-t}
A_t \delta X_t +
\delta X_t \sum_{j \neq t}
I \otimes \cdots \otimes I \otimes A_j \otimes I \otimes \cdots \otimes I
= - A_t^{\mathrm{off}} X^{(1)},
\end{equation}
where, in this case, $X^{(1)}$ denotes
the $t$-mode matricization of $\T X^{(1)}$.
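The $t$-mode matricization and its inverse used here can be realized in MATLAB with \texttt{permute} and \texttt{reshape}; a minimal sketch (helper names are ours):
\begin{verbatim}
function Xt = matricize(X, t)
% MATRICIZE  Mode-t matricization of a d-dimensional array X.
n = size(X); d = ndims(X);
Xt = reshape(permute(X, [t, setdiff(1:d, t)]), n(t), []);
end

function X = tensorize(Xt, n, t)
% TENSORIZE  Invert the mode-t matricization, given the size vector n.
d = numel(n); perm = [t, setdiff(1:d, t)];
X = ipermute(reshape(Xt, n(perm)), perm);
end
\end{verbatim}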
Second, the solution of \eqref{eq:matr-t}
by means of \textsc{low\_rank\_sylv} requires
solving shifted linear systems with a Kronecker sum of
$d - 1$ matrices, that is performed by recursively
calling the divide-and-conquer scheme. This generates
an additional layer of inexactness, that we have to take
into account in the error analysis. This makes the analysis
non-trivial, and requires results on the error propagation
in \textsc{low\_rank\_sylv}; in the next section we address
this point in detail for fADI, that will be the method
of choice in our implementation.
In the case where the dimensions $n_i$ are unbalanced,
we follow a similar approach to the case $d = 2$. More precisely,
at each level
we only split on the $r$ dominant modes which
satisfy $2n_i \geq \max_j n_j $, and which are
larger than $n_{\min}$. This generates $2^r$ recursive
calls and $r$ update equations.
We summarize the procedure in Algorithm~\ref{alg:dac}.
\begin{algorithm}[H]
\small
\caption{}\label{alg:dac}
\begin{algorithmic}[1]
\Procedure{lyapnd\_d\&c}{$A_1,A_2,\dots, A_d,\T B, \epsilon$}
\If{$\max_i n_i\leq n_{\min}$}
\State\Return\Call{lyapnd\_diag}{$A_1,A_2,\dots, A_d,\T B$}
\Else
\State Permute the modes to have
$2n_i \geq \max_j \{ n_j \}$
and $n_i > n_{\min}$, for $i \leq r$.
\For{$j_1, \ldots, j_r = 1,2$}
\State $\T Y^{(j_1, \ldots, j_r)} \gets $\Call{lyapnd\_d\&c}{$A_{1,j_1 j_1}^{(1)},\dots, A_{r,j_r j_r}^{(1)},A_{r+1},\dots, A_d,\T B^{(j_1, \ldots, j_r)}, \epsilon$}
\State Copy $\T Y^{(j_1, \ldots, j_r)}$ in the corresponding
entries of $\T X^{(1)}$.
\EndFor
\For{$j = 1, \ldots, r$}
\State Retrieve a rank $k$ factorization $A_j^{\mathrm{off}}=U\widetilde V^T$
\State $V\gets \Call{reshape}{-\T X^{(1)} \times_j \widetilde V, \prod_{i\neq j}n_i, k}$ \label{lin:mult}
\State $\delta X_j \gets \Call{low\_rank\_sylv}{A_j, \sum_{i\neq j} I\otimes\dots \otimes I\otimes A_i\otimes I\otimes\dots\otimes I , U, V, \epsilon}$ \label{lin:lr-sylv}
\State $\delta \T X_j \gets $ tensorize $\delta X_j$, inverting the $j$-mode matricization.
\EndFor
\State \Return $\T X^{(1)} + \delta\T X_1 + \ldots + \delta \T X_r$.
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Error analysis}
The use of an inexact solver for tensor Sylvester equations with
$d - 1$ modes in \textsc{low\_rank\_sylv} has an impact on the
achievable accuracy of the method. Under the assumption that all the
update equations at line \ref{lin:lr-sylv} are solved with a relative
accuracy $\epsilon$ we can easily generalize the analysis
performed in Section~\ref{sec:d=2-exact-equations}.
More specifically, we consider the
additive decomposition of $\tilde{\T X} = \tilde{\T X}^{(0)} =
\tilde{\T X}^{(\ell)} + \delta \tilde{\T X}^{(\ell-1)} + \ldots +
\delta \tilde{\T X}^{(0)}$,
where the equations solved by the algorithm are the following:
\begin{align}
\label{eq:sylvd-inexact-0}
\sum_{t = 1}^d \widetilde{\T X}^{(\ell)} \times_{t} A_t^{(\ell)} &= \T B
+ \T R^{(\ell)} \\
\label{eq:sylvd-inexact-h}
\sum_{t = 1}^d \delta \widetilde{\T X}^{(h,j)} \times_t A_t^{(h)}
&= \tilde{\Xi}^{(h,j)} + \T R^{(h,j)},
\end{align}
where $A_t^{(h)}$ are the block-diagonal matrices defined
in \eqref{eq:hodlr-level-h},
and $\delta \tilde {\T X}^{(h)} =
\delta \tilde {\T X}^{(h,1)} + \ldots + \delta \tilde {\T X}^{(h,d)}$,
and $\tilde \Xi^{(h,j)}$ is defined as
follows:
\[
\tilde \Xi^{(h,j)} :=
- \tilde{\T X}^{(h+1)} \times_j (A_j^{(h)} - A_j^{(h+1)}).
\]
We remark that
the matrix $A_j^{(h)} - A_j^{(h+1)}$ contains the off-diagonal blocks
that are present in $A_j^{(h)}$ but not in $A_j^{(h+1)}$.
When the dimensions $n_i$ are unbalanced we artificially
introduce some levels in the HSS partitioning (see Section~\ref{sec:unbalanced-2d});
for these levels the corresponding off-diagonal blocks may be the zero matrix.
Then, we state the higher-dimensional version of
Lemma~\ref{lem:residual}.
\begin{lemma} \label{lem:residual-d>2}
If the tensor Sylvester equations \eqref{eq:sylvd-inexact-0}
and \eqref{eq:sylvd-inexact-h} are solved
with the relative accuracies
$\norm{\T{R^{(\ell)}}}_F \leq \epsilon \norm{\T B}_F$ and
$\norm{\T{R}^{(h,j)}}_F \leq \epsilon \norm{\tilde \Xi^{(h,j)}}_F$ and
$\kappa \epsilon < 1$,
with $\kappa := \frac{\beta_1 + \ldots + \beta_d}{\alpha_1 + \ldots + \alpha_d}$,
then the approximate solution $\tilde{\T X}$ returned by
Algorithm~\ref{alg:dac}
satisfies
\[
\Big\lVert \sum_{i = 1}^d \tilde{\T X}
\times_i A_i - \T B \Big\rVert_F \leq
(\ell + 1)^2 \kappa \epsilon \norm{\T B}_F.
\]
\end{lemma}
\begin{proof}
Let us consider $\tilde{\T X}^{(h)} = \tilde{\T X}^{(\ell)}
+ \delta \tilde{\T X}^{(\ell-1)} + \ldots + \delta \tilde{\T X}^{(h)}$.
Following the same argument in the proof of Lemma~\ref{lem:residuals-rh},
we obtain
\[
\sum_{i = 1}^d \tilde{\T X}^{(h)}
\times_i A_i = \T B +
\T R^{(\ell)} + \ldots + \T R^{(h)},
\]
where $\T R^{(h)} := \sum_{j = 1}^d \T R^{(h,j)}$. Hence, we
bound
\begin{align*}
\norm{\T R^{(h)}}_F &\leq \sum_{j = 1}^d \norm{\T R^{(h,j)}}_F
\leq \epsilon \sum_{j = 1}^d \norm{\tilde \Xi^{(h,j)}}_F\\
&\leq \epsilon \sum_{j = 1}^d \beta_j (\norm{\T X^{(h+1)}}_F
+ \norm{\T X^{(h+1)} - \tilde {\T X}^{(h+1)}}_F) \\
&\leq \epsilon \kappa (
\norm{\T B}_F + \norm{\T R^{(\ell)}}_F + \ldots + \norm{\T R^{(h+1)}}_F
).
\end{align*}
By induction one can show that
\[
\norm{\T R^{(h)}}_F \leq \kappa \epsilon (1 + \epsilon)
(1 + \kappa \epsilon)^{\ell-h-1} \norm{\T B}_F,
\]
and the claim follows applying the same reasoning as in the proof
of Lemma~\ref{lem:residual}.
\end{proof}
Lemma~\ref{lem:residual-d>2} ensures that, if the update equations are
solved with uniform accuracy, the growth in the residual is controlled
by a moderate factor, as in the matrix case.
We now investigate what can be ensured if we perform a constant number
of fADI steps throughout all levels of the recursion (including the use of
the nested Sylvester solvers). This requires the development of technical
tools for the analysis of factored ADI with inexact solves.
\subsubsection{Factored ADI with inexact solves}
Algorithm~\ref{alg:dac} solves update equations
that can be matricized as $A_1 \delta X + \delta X A_2 = UV^*$, where the factors $U$ and $V$ are
retrieved analogously to the matrix case, see Remark~\ref{rem:rhs}, and
$A_1$ is the Kronecker sum of $d - 1$ matrices. In particular,
linear systems with $A_1$ are solved inexactly by a nested call to
Algorithm~\ref{alg:dac}.
In this section we investigate how the inexactness affects the
residual of the solution computed with $s$ steps of fADI.
We begin by recalling some properties of fADI.
Assume that $j+1$ steps
of fADI have been applied to the equation $A_1 \delta X + \delta X A_2 = UV^*$.
Then, the factors
$W_{j+1}, Y_{j+1}$ can be obtained by running a single
fADI iteration for the equation
\[
A_1 \delta X + \delta X A_2 = -r_j(A_1) UV^* r_j(-A_2)^{-1},
\quad
r_j(z):=\prod_{h=1}^j\frac{z-p_h}{z-q_h},
\]
using parameters $p_{j+1}, q_{j+1}$.
We now consider the sequence $\tilde W_j$ obtained by solving
the linear systems with $A_1$ inexactly:
\begin{equation} \label{eq:Wj}
(A_1 - q_{j+1}I) \widetilde W_{j+1} =
(A_1 - p_{j} I) \widetilde W_j + \eta_{j+1},
\quad
(A_1 - q_1I) \widetilde W_1 = U + \eta_1,
\end{equation}
where $\norm{\eta_{j+1}}_F \leq \epsilon \norm{U}_F$.
The following lemma quantifies the distance between
$\tilde W_j$ and $W_j$. Note that we make
the assumption $|z - p_j| \leq |z - q_j|$ over
$[\alpha_{1}, \beta_{1}]$, that is satisfied by the
parameters of the Zolotarev functions,
as discussed in Section~\ref{sec:zolotarev-properties}.
\begin{lemma} \label{lem:tildeWjsmall}
Let $A_1$ be positive definite, and
$p_j, q_j$ satisfying
$|z - p_j| \leq |z - q_j|$ for any
$z \in [\alpha_{1}, \beta_{1}]$.
Let $\widetilde W_j$ be the sequence defined as in
Equation~\eqref{eq:Wj}.
Then, it holds
\[
(A_1 - p_{j} I) \widetilde W_j = r_j(A_1) U + M_j, \qquad
\norm{M_j}_F \leq j\epsilon \norm{U}_F, \qquad
r_j(z) := \prod_{i = 1}^j \frac{z - p_i}{z - q_i}.
\]
\end{lemma}
\begin{proof}
For $j = 1$, the claim follows directly from the assumptions.
Assuming the claim for a given $j \geq 1$, we have
\begin{align*}
(A_1 - p_{j+1}I) \widetilde W_{j+1} &=
(A_1 - p_{j+1}I) (A_1 - q_{j+1}I)^{-1}
\left[
(A_1 - p_{j} I) \widetilde W_j +
\eta_{j+1} \right] \\
&= \frac{r_{j+1}}{r_{j}}(A_1)
\left[
r_j(A_1) U + M_j +
\eta_{j+1} \right] \\
&= r_{j+1}(A_1) U + \frac{r_{j+1}}{r_{j}}(A_1) \left( M_{j} + \eta_{j+1} \right).
\end{align*}
The claim follows by setting
$M_{j+1} :=
\frac{r_{j+1}}{r_{j}}(A_1) \left(M_{j} + \eta_{j+1}\right)$,
and using the property
$|r_{j+1}(z) / r_{j}(z)| = |z - p_{j+1}| / |z - q_{j+1}| \leq 1$
over the spectrum of $A_1$.
\end{proof}
\begin{remark}
We remark that the level of accuracy required in \eqref{eq:Wj} is guaranteed up to second
order terms in $\epsilon$, whenever the residual norm for the linear systems
$(A_1 - q_{j+1}I) \widetilde W_{j+1}= (A_1 - p_{j} I) \widetilde W_j + \eta_{j+1}$
satisfies $\norm{\eta_{j+1}}\leq \epsilon\norm{(A_1 - p_{j} I) \widetilde W_j}_F$.
Indeed, the argument used in the proof of Lemma~\ref{lem:tildeWjsmall} can be easily modified to get
$$\norm{M_j}_F\leq [(1+\epsilon)^j-1]\norm{U}_F=j\epsilon\norm{U}_F+\mathcal O(\epsilon^2).$$
The slightly more restrictive choice made in \eqref{eq:Wj} allows us to obtain more readable results.
\end{remark}
We can then quantify the impact on the residual norm of carrying out
the fADI iteration with inexactness in one of the two sequences.
\begin{theorem}\label{thm:adi-res-inex}
Let us consider the solution of equation~\eqref{eq:lowrank-sylv}
by means of the fADI method using shifts
satisfying the property $|z - p_j| \leq |z - q_j|$
for any $z \in [\alpha_{1}, \beta_{1}]$,
and let $\epsilon >0$ be such that
the linear systems defining $\widetilde W_j$ are
solved inexactly as in Equation~\eqref{eq:Wj}.
Then, the computed solution
$\delta \widetilde X_s$ verifies:
\[
\norm{A_1 \delta \widetilde X_s + \delta \widetilde X_s A_2 - UV^*}_F \leq
\epsilon_{s,\mathrm{ADI}}
+
2 s \epsilon\norm{U}_F \norm{V}_2,
\]
where
$\epsilon_{s,\mathrm{ADI}} := \norm{A_1 \delta X_s + \delta X_s A_2 - UV^*}_F$
is the norm of the residual
after $s$ steps
of the fADI method in exact arithmetic.
\end{theorem}
\begin{proof}
We indicate with $\delta \widetilde X_j$
the inexact solution computed at step $j$ of fADI.
We note that
$\delta \widetilde X_1$ corresponds to the
outcome of one exact
fADI iteration for the slightly modified right-hand side
$(U + \eta_1)V^*$; hence, by Equation \eqref{eq:res-adi-full}
$\delta \widetilde X_1$ satisfies
the residual equation:
\begin{align*}
A_1 \delta \widetilde X_1 + \delta \widetilde X_1 A_2 - (U + \eta_1) V^*
&=
- r_1(A_1) (U + \eta_1) V^* r_1(-A_2)^{-1},
\end{align*}
which allows us to express the residual of the original equation
as follows:
\begin{align*}
A_1 \delta \widetilde X_1 + \delta \widetilde X_1 A_2 - U V^*
&= -r_1(A_1) (U + \eta_1) V^* r_1(-A_2)^{-1} + \eta_1 V^*.
\end{align*}
We now derive an analogous result for the
update $\Delta_{j+1} := \delta \widetilde X_{j+1} - \delta \widetilde X_j =
(q_{j+1} - p_{j+1}) \widetilde W_{j+1} Y_{j+1}^*$
where $\Delta_1 = \delta \widetilde X_1$ by setting $\delta \widetilde X_0 = 0$.
We prove that for any $j$ the correction $\Delta_{j+1}$ satisfies
\begin{align*}
A_1 \Delta_{j+1} +
\Delta_{j+1} A_2 &=
\left[
(A_1 - p_jI) \widetilde{W}_j + \eta_{j+1}
\right] V^* r_{j}(-A_2)^{-1} \\
& -
\frac{r_{j+1}}{r_j}(A_1) \left[
(A_1 - p_{j} I) \widetilde{W}_{j} + \eta_{j+1}
\right] V^* r_{j+1}(-A_2)^{-1}, \qquad
j \geq 1.
\end{align*}
We verify the claim for $j = 1$; $\widetilde{W}_2$ is defined
by the relation
\[
(A_1 - q_2 I) \widetilde{W}_2 =
(A_1 - p_1 I) \widetilde W_1 + \eta_2
= r_1(A_1) (U + \eta_1) + \eta_2.
\]
Hence, as discussed at the beginning of this proof,
$\widetilde{W}_2$ is the outcome of one
exact iteration of
fADI applied to the equation
$A_1 \delta X + \delta X A_2 - (r_1(A_1) (U + \eta_1) + \eta_2 )V^* r_1(-A_2)^{-1} = 0$.
Therefore, thanks to Equation~\eqref{eq:res-adi-full},
the residual of the computed update
$\Delta_2$ verifies
\begin{align*}
A_1 \Delta_2 +
\Delta_2 A_2 &- (r_1(A_1) (U + \eta_1) + \eta_2 )V^*r_1(-A_2)^{-1}\\
&= -\frac{r_2}{r_1}(A_1)
\left[
r_1(A_1) (U + \eta_1) + \eta_2
\right] V^*
r_2(-A_2)^{-1}.
\end{align*}
Then,
the claim follows for $j = 1$ by noting that
$(A_1 - p_1 I) \widetilde W_1 = r_1(A_1) (U + \eta_1)$,
using the first relation in
Equation~\eqref{eq:Wj}.
For $j > 1$ we have
$(A_1 - q_{j+1} I) \widetilde{W}_{j+1} =
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}$
and, with a similar argument, $\Delta_{j+1}$
is obtained as a single fADI iteration in exact arithmetic
for the equation
$A_1 \delta X + \delta X A_2 - ((A_1 - p_j I) \widetilde W_j + \eta_{j+1}) V^* r_j(-A_2)^{-1} = 0$,
and in view of the residual equation~\eqref{eq:res-adi-full}
\begin{align*}
A_1 \Delta_{j+1} + \Delta_{j+1} A_2 &-
\left[
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}
\right] V^* r_j(-A_2)^{-1} \\
&= -
\frac{r_{j+1}}{r_j}(A_1) \left[
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}
\right] V^* r_{j+1}(-A_2)^{-1}.
\end{align*}
We now write $\delta \widetilde X_s = \sum_{j = 1}^{s} \Delta_j$,
so that by linearity
the residual associated with $\delta \widetilde X_s$ satisfies
\begin{align*}
A_1 \delta \widetilde X_s + \delta \widetilde X_s A_2 - UV^* &=
A_1 \Delta_1 + \Delta_1 A_2 - UV^* +
\sum_{j = 1}^{s-1} (A_1 \Delta_{j+1} + \Delta_{j+1} A_2) \\
&= -r_1(A_1) (U + \eta_1) V^* r_1(-A_2)^{-1} + \eta_1 V^* \\
&+ \sum_{j = 1}^{s-1} \Bigg\{ -\frac{r_{j+1}}{r_j}(A_1) \left[
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}
\right] V^* r_{j+1}(-A_2)^{-1} \\
&+ \left[
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}
\right] V^* r_j(-A_2)^{-1} \Bigg\}.
\end{align*}
We now observe that, thanks to Equation~\eqref{eq:Wj},
\[
\frac{r_{j+1}}{r_j}(A_1)
\left[
(A_1 - p_j I) \widetilde W_j +
\eta_{j+1} \right] =
(A_1 - p_{j+1} I) \widetilde W_{j+1}.
\]
Plugging this identity in the summation yields
\begin{align*}
& \sum_{j = 1}^{s-1} \resizebox{.95\linewidth}{!}{$\Bigg\{
\left[
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}
\right] V^* r_j(-A_2)^{-1} -\frac{r_{j+1}}{r_j}(A_1) \left[
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}
\right] V^* r_{j+1}(-A_2)^{-1}
\Bigg\}$} \\
&= \sum_{j = 1}^{s-1} \Bigg\{\left[
(A_1 - p_j I) \widetilde W_j + \eta_{j+1}
\right] V^* r_j(-A_2)^{-1} -(A_1 - p_{j+1} I) \widetilde W_{j+1} V^* r_{j+1}(-A_2)^{-1}
\Bigg\} \\
&=(A_1 - p_1 I) \widetilde W_1 V^* r_1(-A_2)^{-1} -(A_1 - p_sI) \widetilde W_s V^* r_s(-A_2)^{-1}
+ \sum_{j=1}^{s-1} \eta_{j+1} V^* r_{j}(-A_2)^{-1}.
\end{align*}
Summing this with the residual equation for $\Delta_1$ yields
\[
A_1 \delta \widetilde X_s + \delta \widetilde X_s A_2 - UV^* =
-(A_1 - p_sI) \widetilde W_s V^* r_s(-A_2)^{-1} +
\sum_{j = 0}^{s-1} \eta_{j+1} V^* r_{j}(-A_2)^{-1},
\]
with the convention $r_0 \equiv 1$,
where we have used the relation
$r_1(A_1)(U + \eta_1) = (A_1 - p_1 I) \widetilde W_1$
from Equation~\eqref{eq:Wj}. In view of Lemma~\ref{lem:tildeWjsmall}
we can write $(A_1 - p_sI) \widetilde W_s = r_s(A_1) U + M_s$,
where $\norm{M_s}_F \leq s\epsilon\norm{U}_F$. Therefore,
\begin{align*}
\norm{ A_1 \delta \widetilde X_s + \delta \widetilde X_s A_2 - UV^* }_F &\leq
\norm{r_s(A_1) UV^* r_s(-A_2)^{-1}}_F + s\epsilon \norm{U}_F \norm{V}_2 \\
&+
\sum_{j = 0}^{s-1} \norm{\eta_{j+1} V^* r_{j}(-A_2)^{-1}}_F \\
&\leq
\epsilon_{s,\mathrm{ADI}} + 2 s \epsilon \norm{U}_F \norm{V}_2.
\end{align*}
\end{proof}
\begin{remark} \label{rem:normUV}
We note that the error in Theorem~\ref{thm:adi-res-inex} depends on
$\norm{U}_F \norm{V}_2$; this may be larger than $\norm{UV^*}_F$,
which is what we need to ensure the relative accuracy of the
algorithm. However, under the additional assumption
that $V$ has orthonormal columns, we have
$\norm{UV^*}_F = \norm{U}_F \norm{V}_2$. We can always
ensure that this condition is satisfied by computing a thin
QR factorization of $V$, and right-multiplying $U$ by $R^*$. This
does not increase the complexity of the algorithm.
\end{remark}
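A minimal NumPy sketch of the reorthogonalization just described (the function name is ours):
\begin{verbatim}
import numpy as np

def reorthogonalize_rhs(U, V):
    """Rewrite the factorization U V^* as (U R^*) Q^*, where V = Q R is a
    thin QR factorization; the new right factor Q has orthonormal columns,
    so that the Frobenius norm of the product equals ||U R^*||_F * ||Q||_2."""
    Q, R = np.linalg.qr(V, mode='reduced')
    return U @ R.conj().T, Q
\end{verbatim}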
\subsubsection{Residual bounds with inexactness}
We can now exploit the results on inexact fADI to control
the residual norm of the approximate solution returned by
Algorithm~\ref{alg:dac}, assuming that all the update equations
are solved with a fixed number $s$ of fADI steps with optimal shifts.
\begin{theorem} \label{thm:accuracy-d}
Let $A_1,\dots, A_d\in\mathbb R^{n\times n}$, symmetric positive
definite with spectrum contained in $[\alpha, \beta]$, and
$\kappa := \frac{\beta}{\alpha}$. Moreover, assume
that the $A_j$s are HSS matrices of HSS rank $k$, with a partitioning of
depth $\ell$.
Let $\epsilon>0$ and suppose that \textsc{low\_rank\_sylv} uses fADI with
the $s$ zeros and poles of the extremal rational function for
$Z_s([\alpha,\beta],[-(d-1)\beta, -\alpha])$ as input parameters,
with right-hand side reorthogonalized as described in Remark~\ref{rem:normUV}.
If
$$
s\geq \frac{1}{\pi^2}\log\left(2\frac{d\kappa}{\epsilon}\right)\log\left(8\frac{(\alpha+(d-1)\beta)(\alpha+\beta)}{d\alpha\beta}\right)
$$ and Algorithm~\ref{alg:diag} solves the Sylvester equations
at the base of the recursion with residual bounded
by $\epsilon$ times the norm of the right-hand side,
then the solution $\widetilde{\mathcal X}$ computed by
Algorithm~\ref{alg:dac} satisfies:
\begin{align*}
\norm{\widetilde{\mathcal X}\times_1 A_1+\dots+ \widetilde{\mathcal X}\times_d A_d-\T B}_F\leq
\left(
\left[
\kappa (\ell + 1)^{2} \right]^{d-1} (1 + 2s)^{d-2}\epsilon\right) \norm{\T B}_F.
\end{align*}
\end{theorem}
\begin{proof}
Let $\epsilon_{\mathrm{lr}, d}$ be the relative residual at which the
low-rank update equations with $d$ modes
are solved in the recursion and let $\epsilon_d:= (\ell+1)^2 \kappa \epsilon_{\mathrm{lr}, d}$.
Note that, thanks to Lemma~\ref{lem:residual-d>2}, we have
\[
\norm{\widetilde{\mathcal X}\times_1 A_1+\dots+
\widetilde{\mathcal X}\times_d A_d-\T B}_F
\leq
\epsilon_d \norm{\T B}_F ,
\]
so that $\epsilon_d$ is an upper bound for the relative
residual of the target equation. Moreover, using the error bound for inexact fADI
of Theorem~\ref{thm:adi-res-inex}, we can write $\epsilon_{\mathrm{lr}, d} \leq (1 + 2s) \epsilon_{d-1}$,
which implies
\[
\epsilon_d \leq \begin{cases}
(\ell+1)^2 \kappa \epsilon
& d = 2 \ \ (\text{Theorem~\ref{thm:2d-accuracy}})
\\
(\ell+1)^2 \kappa (1 + 2s) \epsilon_{d-1}, &
d \geq 3.
\end{cases},
\]
where $\epsilon$ bounds $Z_s([ \alpha, \beta ], [-(d-1)\beta, -\alpha])$, in view of the choice of $s$.
Expanding the recursion yields the sought bound.
\end{proof}
Theorem~\ref{thm:accuracy-d} bounds the residual error with a constant
depending on $\kappa^{d-1}$, which can often be pessimistic. This term
arises when bounding $\norm{\Xi^{(h,j)}}_F$ with $\norm{A_j^{(h)}}_2$
multiplied by $\norm{\T X^{(h+1)}}_F$. When the $A_j$s are M-matrices,
this can be improved by replacing $\kappa$ with $\sqrt{\kappa}$.
\begin{corollary}
Under the same hypotheses of Theorem~\ref{thm:accuracy-d} and the
additional assumption that the $A_t$s are symmetric positive definite $M$-matrices, we have
\[
\norm{\widetilde{\mathcal X}\times_1 A_1+\dots+ \widetilde{\mathcal X}\times_d A_d-\T B}_F\leq
\left(
\left[
d \sqrt{\kappa} (\ell + 1)^{2} \right]^{d-1} (1 + 2s)^{d-2}\epsilon\right) \norm{\T B}_F.
\]
\end{corollary}
\begin{proof}
By means of the same argument used in Lemma~\ref{lem:m-matrices}
we have
that $\norm{\Xi^{(h,j)}}_F \leq \sqrt{\kappa} \norm{\T B}_F$
and $\norm{\Xi^{(h,j)} - \tilde \Xi^{(h,j)}}_F \leq \sqrt{\kappa}
\norm{\T R^{(\ell)} + \ldots + \T R^{(h+1)}}_F$. Plugging these
bounds in the proof of Lemma~\ref{lem:residual-d>2} yields
the inequality
\begin{align*}
\norm{\T R^{(h)}}_F &\leq
\epsilon \sum_{j=1}^d
\left[
\norm{\Xi^{(h,j)}}_F + \norm{\tilde \Xi^{(h,j)} - \Xi^{(h,j)}}_F
\right] \\
&\leq
d \epsilon \sqrt{\kappa} \left[
\norm{\T B}_F + \norm{\T R^{(\ell)}}_F + \ldots + \norm{\T R^{(h+1)}}_F
\right].
\end{align*}
Then, following the same steps as in Lemma~\ref{lem:residual-d>2} yields
\[
\norm{\T R^{(h)}}_F \leq
d \sqrt{\kappa} \epsilon (1 + \epsilon) (1 + d\sqrt{\kappa} \epsilon)^{\ell-h-1}
\norm{\T B}_F.
\]
Using this bound in Theorem~\ref{thm:accuracy-d} yields the claim.
\end{proof}
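For reference, the lower bound on the number of fADI steps $s$ prescribed by Theorem~\ref{thm:accuracy-d} (and inherited by the corollary above) is easily evaluated; the following Python snippet is a direct transcription of that formula (the function name is ours):
\begin{verbatim}
import math

def zolotarev_shift_count(alpha, beta, d, eps):
    """Smallest integer s satisfying the bound of Theorem (accuracy-d),
    for SPD coefficients with spectra in [alpha, beta], d modes, tolerance eps."""
    kappa = beta / alpha
    s = (1.0 / math.pi**2) \
        * math.log(2.0 * d * kappa / eps) \
        * math.log(8.0 * (alpha + (d - 1) * beta) * (alpha + beta)
                   / (d * alpha * beta))
    return math.ceil(s)
\end{verbatim}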
\subsection{Complexity analysis}
Theorem~\ref{thm:2d-complexity} can be generalized to the $d$-dimensional
case, providing a complexity analysis when nested solves are used.
\begin{theorem} \label{thm:3d-complexity}
Let $A_i \in\mathbb R^{n_i\times n_i}$ with
$n_i = 2^{\ell_i} n_{\min}$, $n_1\geq n_2\geq \dots \geq n_d$, $n_d\geq skd$, $n_d\geq \log(n_1/n_d)$, and assume that the $A_i$ are
HSS matrices of HSS rank $k$,
with a partitioning of depth $\ell_i$ obtained by halving
the dimension at every level, for $i=1,\dots,d$. Then, Algorithm~\ref{alg:dac} costs:
\begin{itemize}
\item[$(i)$] $\mathcal O((k(d + \log(n_1))+n_{\min}+ sk^2)n_1\dots n_d)$ if \textsc{low\_rank\_sylv} implements $s$ steps of the fADI method,
\item[$(ii)$] $\mathcal O((k(d + \log(n_1))+n_{\min}+ s^2k^2)n_1\dots n_d)$ if \textsc{low\_rank\_sylv} implements $s$ steps of the rational Krylov method.
\end{itemize}
\end{theorem}
\begin{proof}
We only prove $(i)$ because $(ii)$ is completely analogous.
Let us assume that we have $r$ different mode sizes and each of those occurs $d_h$ times, i.e.:
\begin{align*}
\underbrace{n_1=n_2=\dots=n_{i_1}}_{d_1}> \underbrace{n_{i_1+1}=\dots=n_{i_2}}_{d_2}>\dots >\underbrace{n_{i_{r-1}+1}=\dots=n_{i_r}}_{d_r}.
\end{align*} We proceed by (bivariate) induction over
$d\geq 2$ and $\ell_1\geq 0$; the cases $d=2$ and $\ell_1\geq 0$
are given by Theorem~\ref{thm:2d-complexity} and
Corollary~\ref{cor:2d-complexity-diff}. For $d>2$ and $\ell_1=0$,
we have that $n:=n_1=\dots=n_d=n_{\min}$ and
Algorithm~\ref{alg:dac} is equivalent to Algorithm~\ref{alg:diag}
whose cost is $\mathcal O(n^{d+1})= \mathcal O(n_{\min}n^d)$; hence the claim
also holds in this case. For $d>2$ and $\ell_1>0$, the algorithm
begins by halving the first $d_1$ modes generating $2^{d_1}$
subproblems
having dominant size $n_1/2=2^{\ell_1-1}n_{\min}$. By induction these
subproblems cost
\begin{align*}
\mathcal O&\left(2^{d_1}(k(d + \log(n_{1}/2))+n_{\min}+sk^2)(n_{i_1}/2)^{d_1}n_{i_2}^{d_2}n_{i_3}^{d_3}\dots n_{i_r}^{d_r}\right)\\
&=O\left((k(d + \log(n_{1}))+n_{\min}+sk^2)\prod_{i=1}^dn_i\right).
\end{align*}
Then, we focus on the cost of the update equations and of the tensor times (block) vector multiplications,
in this first phase of Algorithm~\ref{alg:dac}.
The procedure generates $d_1$ update equations of size $n_1\times\dots\times n_d$.
The cost of each call to \textsc{low\_rank\_sylv} is dominated by the complexity of solving $sk$ (shifted)
linear systems with a Kronecker sum structured matrix with $d-1$ modes. By induction,
the cost of all the calls to \textsc{low\_rank\_sylv} is bounded by
\begin{align*}
&\mathcal O\left( skd_1(k(d + \log(n_1))+n_{\min}+sk^2)\prod_{j=1}^{d-1}n_j\right)\\
&=\mathcal O\left( (k(d+\log(n_1))+n_{\min}+sk^2)\prod_{j=1}^{d}n_j\right).
\end{align*}
Finally, since the tensor times (block) vector multiplications are in one-to-one correspondence
with the calls to \textsc{low\_rank\_sylv}, we have that the algorithm generates $d_1$ products of
complexity $\mathcal O(kn_1\dots n_d)$. Adding the contribution $\mathcal O(dkn_1\dots n_d)$ to the
cost of the subproblems provides the claim.
\end{proof}
\section{Numerical experiments} \label{sec:numerical-experiments}
We now test the proposed algorithm against some implementations of Algorithm~\ref{alg:diag}
where the explicit diagonalization of the matrix coefficients
$A_t$ is either done in dense arithmetic or via the algorithm proposed in \cite{ou2022superdc}.
Note that the dense solver delivers accuracy close to machine precision, while the other approaches
aim at trading some accuracy for a speedup in
computational time.
We assess this behavior on 2D and 3D examples.
\subsection{Details of the implementation}
An efficient implementation of Algorithm~\ref{alg:dac2d} and
Algorithm~\ref{alg:dac} takes some care. In particular:
\begin{itemize}
\item In contrast to the numerical tests of Section~\ref{sec:error-analysis-2d},
the number of Zolotarev shifts for fADI and RK is adaptively chosen on each level of
the recursion to ensure the accuracy described in \eqref{eq:sylvester-relative-accuracy}.
More precisely, this requires estimates of the spectra of the matrix coefficients $A_t^{(h)}$ at all
levels of recursion $h$; this is done via the MATLAB built-in function \texttt{eigs}.
However, since $A^{(h)}_i$ appears in $2^{(d-1)h}$ equations,
estimating the spectra in each recursive call would incur redundant
computations. Therefore, we precompute estimates of the spectra for each
block before starting Algorithm~\ref{alg:dac2d} and \ref{alg:dac}, by walking
the cluster tree; a sketch of this precomputation is given after this list.
\item Since the $A_t$s are SPD, we remark that the
correction equation can be slightly modified to obtain a right-hand side
with half of the rank. This is obtained by replacing $A_j^{\mathrm{off}}$
with a low-rank matrix having suitable non-zero diagonal blocks. See
\cite[Section~4.4.2]{kressner2019low} for the details on this idea. This
is crucial for problems with higher off-diagonal ranks, and is used in the
2D Fractional cases described below.
\item A few operations in Algorithm~\ref{alg:dac} are well suited for parallelism:
the solutions of the Sylvester equations at the base of the recursion
are all independent, and the same holds for the
body of the for loop at lines 11-14. We exploit this fact by computing all
the solutions in parallel using multiple cores in a shared memory
environment.
\item When the matrices $A_t$ are both HSS and banded, they are represented in the sparse format and the sparse direct solver of MATLAB (the backslash operator) is used for the corresponding system solves. Note that, in this case, the peculiar location of the nonzero entries makes it easy to construct the low-rank factorizations of the off-diagonal blocks.
\end{itemize}
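The spectral precomputation mentioned in the first item above can be sketched as follows. The actual implementation relies on MATLAB's \texttt{eigs}; the snippet below is an illustrative Python stand-in based on \texttt{scipy.sparse.linalg.eigsh}, with the cluster tree simplified to recursive halving of a sparse SPD matrix.
\begin{verbatim}
import scipy.sparse.linalg as spla

def precompute_spectra(A, n_min, offset=0, table=None):
    """Estimate the extreme eigenvalues of every diagonal block visited by
    the divide-and-conquer recursion (recursive halving of a sparse SPD A).
    Purely illustrative stand-in for the eigs-based precomputation."""
    if table is None:
        table = {}
    n = A.shape[0]
    lam_max = spla.eigsh(A, k=1, which='LA', return_eigenvectors=False)[0]
    lam_min = spla.eigsh(A, k=1, which='SA', return_eigenvectors=False)[0]
    table[(offset, n)] = (lam_min, lam_max)
    if n > n_min:
        h = n // 2
        precompute_spectra(A[:h, :h], n_min, offset, table)
        precompute_spectra(A[h:, h:], n_min, offset + h, table)
    return table
\end{verbatim}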
In addition to fADI and rational Krylov, we consider another popular
low-rank solver for Sylvester equations: the \emph{extended Krylov}
method~\cite{simoncini2007new} (EK). The latter corresponds to the rational
Krylov method where the shift parameters alternate between the values
$0$ and $\infty$.
In particular, EK's iteration leverages the precomputation of the Cholesky
factorization of the coefficient matrices, as the shift parameters do not vary.
A slight downside is that we do not have a priori bounds on the error, and
we have to monitor the residual norm throughout the iterations to detect
convergence.
An implementation of the proposed algorithms is freely available at
\url{https://github.com/numpi/teq_solver},
and requires \texttt{rktoolbox}\footnote{\url{https://rktoolbox.org}}
\cite{berljafa2015generalized}
and \texttt{hm-toolbox}\footnote{\url{https://github.com/numpi/hm-toolbox}}
\cite{massei2020hm} as external
dependencies. The repository contains the numerical experiments
included in this document, and includes a copy of the SuperDC
solver\footnote{\url{https://github.com/fastsolvers/SuperDC}} \cite{ou2022superdc}.
The experiments have
been run on a server with two Intel(R) Xeon(R) E5-2650v4 CPU with 12 cores and 24 threads
each, running at 2.20 GHz, using MATLAB R2021a with the Intel(R) Math Kernel Library
Version 11.3.1. The examples have been run using the SLURM scheduler,
allocating $8$ cores and $240$ GB of RAM.
\subsection{Laplace and fractional Laplace equations}
\label{sec:2d-laplace-test}
In this first experiment we validate the asymptotic complexity of
Algorithm~\ref{alg:dac2d} and compare the performance of various
low-rank solvers for the update equation. As case studies we select
two instances of the matrix equation $AX + XA= B$. In one
case $A\in\mathbb R^{n\times n}$ is chosen as the usual central
finite difference discretization of the 1D Laplacian. In the other case
$A$ is the Gr\"unwald-Letnikov finite difference discretization of the 1D
fractional Laplacian with order $1.5$, see \cite[Section 2.2.1]{massei2019fast}. In both cases the
reference solution $X$ is randomly generated by means of the MATLAB command
\texttt{randn(n)}, and $B$ is computed as $B := AX + XA$.
We remark that for both equations the matrix $A$ is
SPD and HSS; in particular for the Laplace equation $A$ is tridiagonal and
stored in the sparse format, while for the fractional Laplace equation it
has the rank structure depicted in Figure~\ref{fig:hss-fract}. For the Laplace
equation we set $n_{\min}=512$. In view of the higher off-diagonal ranks of the
fractional case, we consider the larger minimal block size $n_{\min}=2048$.
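For reference, the 2D Laplace test problem just described can be generated with a few lines of code; the following is a NumPy/SciPy sketch (the helper name is ours, and the $1/h^2$ scaling of the discretization is omitted):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def laplace_test_problem(n, seed=0):
    """2D Laplace test case: A is the standard second-order finite difference
    discretization of the 1D Laplacian (1/h^2 scaling omitted here),
    X is a random reference solution and B := A X + X A."""
    rng = np.random.default_rng(seed)
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
    X = rng.standard_normal((n, n))
    B = A @ X + X @ A
    return A, X, B
\end{verbatim}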
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{fractional-structure}
\caption{Hierarchical low-rank structure of the 1D fractional
Laplace operator discretized through Gr\"unwald-Letnikov finite
differences. The blue blocks are dense, while the gray blocks
are stored as low-rank matrices whose rank is indicated by the
number in the center.}
\label{fig:hss-fract}
\end{figure}
In the next section we will investigate how varying this parameter affects the performance.
We consider increasing sizes $n=2^j$, $j=10,\dots, 15$ and the following solvers:
\begin{description}
\item[diag] Algorithm~\ref{alg:diag}
with explicit diagonalization of the matrix $A$ performed in dense arithmetic.
\item[dst] Algorithm~\ref{alg:diag} incorporating the fast diagonalization
by means of the Discrete Sine Transform (DST). This approach is only considered
for the 2D Laplace equation.
\item[superdc] solver implementing Algorithm~\ref{alg:diag} using the fast diagonalization
for SPD HSS matrices described in \cite{ou2022superdc}.
\item[dc\_adi] Algorithm~\ref{alg:dac2d} where the fADI iteration is used as \textsc{low\_rank\_sylv}.
\item[dc\_rk] Algorithm~\ref{alg:dac2d} where the rational Krylov method is used as \textsc{low\_rank\_sylv}.
\item[dc\_ek] Algorithm~\ref{alg:dac2d} where the extended Krylov method is used as \textsc{low\_rank\_sylv}.
\end{description}
The shifts used in \textbf{dc\_adi} and \textbf{dc\_rk} are the optimal
Zolotarev zeros and poles. The number of shifts is
chosen to obtain similar residual norms
$\mathrm{Res}:=\norm{A\widetilde X + \widetilde X A -B}_F/\norm{B}_F$
of about $10^{-10}$.
We start by comparing the different implementations of Algorithm~\ref{alg:dac2d}
with \textbf{diag}. A detailed comparison with \textbf{dst} and \textbf{superdc}
is postponed to Section~\ref{sec:superdc}.
The running times and residuals are reported in Table~\ref{tab:2d-laplace}
for the Laplace equation. The fractional case is reported
in Table~\ref{tab:2d-fractional}, for which we do not report the timings for $n=1024,2048$
since our choice of $n_{\min}$ makes Algorithm~\ref{alg:dac2d} equivalent to Algorithm~\ref{alg:diag}.
For both experiments, using fADI as low-rank solver yields the cheapest method. We remark that,
in the fractional case, \textbf{dc\_ek} outperforms \textbf{dc\_rk} since the precomputation of the
Cholesky factorization of $A$ (and of its sub-blocks) makes the iteration of extended Krylov significantly
cheaper than that of rational Krylov.
In Figure~\ref{fig:hist_2D_Laplace} and \ref{fig:hist_2D_Fractional_Laplace}
we display how the time is distributed among the various subtasks of \textbf{dc\_adi}, i.e.,
the time spent on solving dense matrix equations, computing the low-rank updates,
forming the RHS of the update equation and updating the solution, and
estimating the spectra.
\begin{filecontents}[overwrite]{test1.dat}
512 0.099554 1.2005e-13 0.18573 1.2005e-13 0.055668 1.2005e-13 0.050743 1.2005e-13
1024 0.2207 2.9074e-13 0.57544 2.4926e-10 0.5524 1.4229e-10 0.54659 3.0559e-10
2048 0.94992 3.8668e-13 1.0375 3.1349e-10 1.9686 1.4744e-10 2.4169 3.987e-10
4096 5.8621 1.2454e-12 3.5119 3.4297e-10 8.4206 1.4802e-10 10.535 4.3992e-10
8192 41.883 3.3708e-12 14.924 3.6532e-10 35.467 1.4917e-10 43.186 4.6517e-10
16384 320.76 7.8448e-12 61.409 3.7013e-10 149.63 1.4913e-10 181.48 4.8035e-10
32768 2511.3 8.8846e-12 258.06 3.7227e-10 625.83 1.4902e-10 759.95 4.9323e-10
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 2D
Laplace equation of size $n \times n$ by means of Algorithm~\ref{alg:dac2d}
using different low rank solvers, and $n_{\min} = 512$.}
\resizebox{\textwidth}{!}{
\pgfplotstabletypeset[skip rows between index={0}{1},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{diag}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c|}{\textbf{dc\_rk}}&
\multicolumn{2}{c}{\textbf{dc\_ek}}
\\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{}&
\multicolumn{2}{c|}{$n_{\min} = 512$}&
\multicolumn{2}{c|}{$n_{\min} = 512$}&
\multicolumn{2}{c}{$n_{\min} = 512$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/5/.style = {column name = Time,precision=1,zerofill, fixed},
columns/6/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/7/.style = {column name = Time,precision=1,zerofill, fixed},
columns/8/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test1.dat}}
\label{tab:2d-laplace}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
title = \text{2D Laplace, $n_{\min} = 512$},
x tick label style={
/pgf/number format/1000 sep=},
ylabel=\small{Percentage of time},
width = .75\linewidth, height = .25\textheight,
symbolic x coords={$n=4096$, $n=8192$, $n=16384$},
xtick = data,
tick label style={font=\scriptsize},
enlargelimits=0.3,
legend style={anchor=north},
ybar,
legend style={
font=\scriptsize,
cells={anchor=west}
},
legend pos = outer north east,
]
\addplot coordinates { ($n=4096$,61.25) ($n=8192$,56.38) ($n=16384$,58.30) };
\addplot coordinates { ($n=4096$,11.19) ($n=8192$,12.10) ($n=16384$,11.03) };
\addplot coordinates { ($n=4096$,17.73) ($n=8192$,24.50) ($n=16384$,25.26) };
\addplot coordinates { ($n=4096$,6.25) ($n=8192$,3.38) ($n=16384$,2.08) };
\legend{Dense,Low-rank,RHS+Sol,Spectra}
\end{axis}
\end{tikzpicture}
\caption{Distribution of the time spent in the different tasks in
Algorithm~\ref{alg:dac2d}. The results are for some instances
of the 2D Laplace equation considered in
Section~\ref{sec:2d-laplace-test} with fADI as low-rank solver
and $n_{\min} = 512$.}
\label{fig:hist_2D_Laplace}
\end{figure}
\begin{filecontents}[overwrite]{test1_bis.dat}
512 0.088435 4.1337e-15 0.22348 4.1337e-15 0.083585 4.1337e-15 0.08796 4.1337e-15
1024 0.35876 5.1055e-15 0.36213 5.1055e-15 0.37264 5.1055e-15 0.31666 5.1055e-15
2048 2.0805 6.2894e-15 2.3068 6.2894e-15 2.0386 6.2894e-15 2.397 6.2894e-15
4096 15.974 8.2916e-15 20.927 4.5479e-11 24.723 1.6174e-11 19.112 1.2235e-10
8192 137.15 1.7874e-14 99.444 5.8513e-11 120.69 2.4795e-10 97.786 1.2255e-10
16384 1061.1 2.303e-14 439.02 6.8474e-11 593.23 2.1602e-10 471.29 1.5583e-10
32768 8297 3.1923e-14 1998 3.0864e-10 2948.1 3.557e-10 2467.1 2.9557e-10
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 2D
fractional Laplace equation of size $n \times n$ by means of Algorithm~\ref{alg:dac2d}
using different low rank solvers, and $n_{\min} = 2048$.}
\resizebox{\textwidth}{!}{
\pgfplotstabletypeset[skip rows between index={0}{3},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{diag}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c|}{\textbf{dc\_rk}}&
\multicolumn{2}{c}{\textbf{dc\_ek}}
\\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{}&
\multicolumn{2}{c|}{$n_{\min} = 2048$}&
\multicolumn{2}{c|}{$n_{\min} = 2048$}&
\multicolumn{2}{c}{$n_{\min} = 2048$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/5/.style = {column name = Time,precision=1,zerofill, fixed},
columns/6/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/7/.style = {column name = Time,precision=1,zerofill, fixed},
columns/8/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test1_bis.dat}}
\label{tab:2d-fractional}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
title = \text{Fractional 2D Laplace, $n_{\min} = 2048$},
x tick label style={
/pgf/number format/1000 sep=},
ylabel=\small{Percentage of time},
width = .75\linewidth, height = .25\textheight,
symbolic x coords={$n=4096$, $n=8192$, $n=16384$},
xtick = data,
tick label style={font=\scriptsize},
enlargelimits=0.3,
legend style={anchor=north},
ybar,
legend style={
font=\scriptsize,
cells={anchor=west}
},
legend pos = outer north east,
]
\addplot coordinates { ($n=4096$,38.42) ($n=8192$,32.50) ($n=16384$,28.14) };
\addplot coordinates { ($n=4096$,42.81) ($n=8192$,54.37) ($n=16384$,59.57) };
\addplot coordinates { ($n=4096$,2.27) ($n=8192$,4.04) ($n=16384$,5.60) };
\addplot coordinates { ($n=4096$,13.44) ($n=8192$,5.53) ($n=16384$,3.05) };
\legend{Dense,Low-rank,RHS+Sol,Spectra}
\end{axis}
\end{tikzpicture}
\caption{Distribution of the time spent in the different tasks in
Algorithm~\ref{alg:dac2d}. The results are for some instances
of the 2D Fractional Laplace equation considered in
Section~\ref{sec:2d-laplace-test} with fADI as low-rank solver
and $n_{\min} = 2048$.}
\label{fig:hist_2D_Fractional_Laplace}
\end{figure}
\subsection{Varying the block size}
We perform numerical tests similar to the ones of the
previous section, but we only consider the low-rank solver
fADI for the update equations and, instead, we vary the minimal
block size, aiming at determining the best block-size for each
test problem. Table~\ref{tab:2d-laplace-nmin}
and Table~\ref{tab:2d-fractional-nmin} report the results concerning
$n_{\min}\in\{256, 512, 1024\}$ for the 2D Laplace equation, and
$n_{\min}\in\{2048, 4096, 8192\}$ for the fractional
Gr\"unwald-Letnikov finite differences case.
The results indicate that the choice $n_{\min} = 512$ is ideal for the
2D Laplace case, and $n_{\min} = 4096$ for the fractional case.
We expect that problems involving larger off-diagonal ranks will need
a larger choice of $n_{\min}$.
\begin{filecontents}[overwrite]{test2_2D.dat}
2048 1.0882 5.2158e-13 1.9669 3.6599e-10 0.98486 3.1171e-10 0.9854 1.789e-10
4096 5.8803 1.3431e-12 4.9768 3.8945e-10 3.5436 3.3681e-10 3.6807 2.0215e-10
8192 42.044 3.2897e-12 19.642 4.1506e-10 14.74 3.6683e-10 16.504 2.4277e-10
16384 319.9 7.8397e-12 80.409 4.1732e-10 59.833 3.7037e-10 67.058 2.5e-10
32768 2512.5 8.8771e-12 340.49 4.1909e-10 259.14 3.7272e-10 283.31 2.5534e-10
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 2D
Laplace equation of size $n \times n$ by means of Algorithm~\ref{alg:dac2d}
using fADI as low rank solver, with different choices of
$n_{\min}$.}
\resizebox{\textwidth}{!}{
\pgfplotstabletypeset[skip rows between index={0}{0},
empty cells with={\ensuremath{-}},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{diag}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c}{\textbf{dc\_adi}} \\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{}&
\multicolumn{2}{c|}{$n_{\min} = 256$}&
\multicolumn{2}{c|}{$n_{\min} = 512$}&
\multicolumn{2}{c}{$n_{\min} = 1024$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/5/.style = {column name = Time,precision=1,zerofill, fixed},
columns/6/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/7/.style = {column name = Time,precision=1,zerofill, fixed},
columns/8/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test2_2D.dat}}
\label{tab:2d-laplace-nmin}
\end{table}
\begin{filecontents}[overwrite]{test3_2D.dat}
2048 2.1748 6.3687e-15 NaN NaN NaN NaN NaN NaN
4096 16.802 8.1799e-15 21.826 5.353e-11 NaN NaN NaN NaN
8192 139.29 1.7944e-14 108.87 1.2156e-10 100.64 5.509e-11 NaN NaN
16384 1036.1 2.3063e-14 524.6 1.4613e-10 478.93 8.5513e-11 652.59 5.4519e-11
32768 7934.2 3.1935e-14 2337.5 1.7103e-10 2034.1 1.1976e-10 2741.5 8.1899e-11
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 2D
fractional Laplace equation of size $n \times n$ by means of Algorithm~\ref{alg:dac2d}
using fADI as low rank solver, with different choices of
$n_{\min}$.}
\resizebox{\textwidth}{!}{
\pgfplotstabletypeset[skip rows between index={0}{0},
empty cells with={\ensuremath{-}},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{diag}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c}{\textbf{dc\_adi}} \\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{}&
\multicolumn{2}{c|}{$n_{\min} = 2048$}&
\multicolumn{2}{c|}{$n_{\min} = 4096$}&
\multicolumn{2}{c}{$n_{\min} = 8192$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/5/.style = {column name = Time,precision=1,zerofill, fixed},
columns/6/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/7/.style = {column name = Time,precision=1,zerofill, fixed},
columns/8/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test3_2D.dat}}
\label{tab:2d-fractional-nmin}
\end{table}
\subsection{Comparison with 2D state-of-the-art solvers}
\label{sec:superdc}
In this section we compare Algorithm~\ref{alg:dac2d} with solvers based
on fast diagonalization strategies (\textbf{dst} and \textbf{superdc}). The fast diagonalization
procedure of \textbf{superdc} requires setting a
minimal block size. We have chosen $n_{\min} = 2048$, which yields the best
results on the considered case studies.
The results are shown in Table~\ref{tab:2d-laplace-superdc} and \ref{tab:2d-lapfrac-superdc}.
Both \textbf{dst} and \textbf{superdc} have the quasi-optimal complexity $\mathcal O(n^2 \log n)$,
as does Algorithm~\ref{alg:dac2d}. Algorithm~\ref{alg:dac2d} significantly
outperforms \textbf{superdc} in all examples, with an acceleration of more than
10x on the largest examples. The performance in the 2D Laplace case is
comparable with that of \textbf{dst}. On the other hand, \textbf{dst} only
applies to the specific case of the 2D Laplace equation with constant
coefficients.
\begin{filecontents}[overwrite]{test7.dat}
1024 0.57544 2.4926e-10 0.225640 6.345830e-14 nan nan
2048 1.0375 3.1349e-10 0.853807 1.144327e-13 nan nan
4096 3.5119 3.4297e-10 3.270838 3.271964e-13 15.51 2.9332e-12
8192 14.924 3.6532e-10 13.497730 1.110817e-12 95.311 4.5167e-11
16384 61.409 3.7013e-10 55.122466 2.684789e-12 593.36 9.5937e-11
32768 258.06 3.7227e-10 240.604803 2.353214e-12 3342.6 1.5423e-10
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 2D
Laplace equation of size $n \times n$ by means of Algorithm~\ref{alg:dac2d}
using fADI and $n_{\min} = 512$, \textbf{superdc} and \textbf{dst}.}
\centering
\pgfplotstabletypeset[skip rows between index={0}{0},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c|}{\textbf{dst}}&
\multicolumn{2}{c}{\textbf{superdc}}
\\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{$n_{\min} = 512$}&
\multicolumn{2}{c|}{}&
\multicolumn{2}{c}{$n_{\min} = 2048$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/5/.style = {column name = Time,precision=1,zerofill, fixed},
columns/6/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test7.dat}
\label{tab:2d-laplace-superdc}
\end{table}
\begin{filecontents}[overwrite]{test7-1.dat}
4096 21.826 5.353e-11 NaN NaN 91.405 4.0442e-09
8192 108.87 1.2156e-10 100.64 5.509e-11 685.72 5.3405e-09
16384 524.6 1.4613e-10 478.93 8.5513e-11 4715 5.7253e-09
32768 2337.5 1.7103e-10 2034.1 1.1976e-10 27546 5.8792e-09
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 2D
Fractional Laplace equation of size $n \times n$ by means of Algorithm~\ref{alg:dac2d}
using fADI and $n_{\min} = 2048, 4096$, and \textbf{superdc}.}
\centering
\pgfplotstabletypeset[skip rows between index={0}{0},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c}{\textbf{superdc}}
\\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{$n_{\min} = 2048$}&
\multicolumn{2}{c|}{$n_{\min} = 4096$}&
\multicolumn{2}{c}{$n_{\min} = 2048$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/5/.style = {column name = Time,precision=1,zerofill, fixed},
columns/6/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test7-1.dat}
\label{tab:2d-lapfrac-superdc}
\end{table}
\subsection{3D Laplace equation}
We now test the 3D version of the Laplace solver described in
Section~\ref{sec:2d-laplace-test}. More precisely, we solve
the tensor equation
\begin{equation} \label{eq:3d-test}
\T X \times_1 A_1 + \T X \times_2 A_2 + \T X \times_3 A_3 = \T B,
\end{equation}
where the $A_t$ are finite difference discretizations of the 1D Laplacian
with zero Dirichlet boundary conditions, of size $n_t \times n_t$.
The reference solution $\T X$ is randomly generated by means
of \texttt{randn(n1,n2,n3)}, and $\T B$ is obtained by evaluating the left-hand side of \eqref{eq:3d-test}.
We remark that in the 3D case we can choose two different block sizes: one
for the recursion in the tensor equation, and one for the recursive
calls to the 2D solver. In this section we indicate the former
with $n_{\min}$ and the latter is set to $1024$ in all the examples,
with the only exception of the scaling test in Section~\ref{sec:3d-scaling},
where all block sizes (2D and 3D) are set to $32$.
The low-rank solver for the update equations is fADI,
and the tolerance $\epsilon$ in Algorithm~\ref{alg:dac}
is set to $\epsilon = 10^{-6}$.
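For clarity, the left-hand side of \eqref{eq:3d-test} can be evaluated via mode products as in the following NumPy sketch (the helper name is ours):
\begin{verbatim}
import numpy as np

def kron_sum_apply(X, A1, A2, A3):
    """Evaluate X x_1 A1 + X x_2 A2 + X x_3 A3 (the left-hand side of the
    3D test equation) for a third-order tensor X via mode products."""
    return (np.einsum('ia,ajk->ijk', A1, X)
            + np.einsum('ja,iak->ijk', A2, X)
            + np.einsum('ka,ija->ijk', A3, X))

# B is then obtained from a random reference solution, e.g.
# X = np.random.randn(n1, n2, n3); B = kron_sum_apply(X, A1, A2, A3)
\end{verbatim}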
We consider two test cases.
\paragraph{Test 1}
We choose $n = n_1 = n_2 = n_3$ ranging in
$\{ 256, 512, 1024 \}$ and the considered block sizes
are $n_{\min} \in \{ 128, 256, 512 \}$. Note that, in this
test case all the 2D problems in the recursion are solved with
the dense method, in view of our choice of the minimal block size.
The results are reported in Table~\ref{tab:3d-laplace-nmin}.
The results show that the dense method is faster for all choices
of $n$ and $n_{\min}$, although the scaling
suggests that a breakeven point should be reached around
$n=2048$ and $n_{\min} = 1024$. However, this is not achievable with
the computational resources at our disposal, since in the case
$n = 2048$ the solution cannot be stored in the system memory.
\begin{filecontents}[overwrite]{test4_3D.dat}
256 1.0351 9.857e-15 4.8024 1.6281e-08 NaN NaN NaN NaN
512 11.52 1.1286e-14 25.356 2.1031e-08 18.123 1.3272e-08 NaN NaN
1024 144.12 1.8032e-14 258.01 2.3509e-08 242.47 1.6936e-08 193.43 1.052e-08
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 3D
Laplace equation of dimension $n \times n \times n$
by means of Algorithm~\ref{alg:dac}
using fADI as low rank solver, with different choices of
$n_{\min}$ for the 3D splitting. The $n_{\min}$ used in the recursive
2D solver is fixed to $n_{\min} = 1024$.}
\resizebox{\textwidth}{!}{
\pgfplotstabletypeset[skip rows between index={0}{0},
empty cells with={\ensuremath{-}},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{diag}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c|}{\textbf{dc\_adi}}&
\multicolumn{2}{c}{\textbf{dc\_adi}} \\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{}&
\multicolumn{2}{c|}{$n_{\min} = 128$}&
\multicolumn{2}{c|}{$n_{\min} = 256$}&
\multicolumn{2}{c}{$n_{\min} = 512$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/5/.style = {column name = Time,precision=1,zerofill, fixed},
columns/6/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/7/.style = {column name = Time,precision=1,zerofill, fixed},
columns/8/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test4_3D.dat}}
\label{tab:3d-laplace-nmin}
\end{table}
\paragraph{Test 2}
We choose the unbalanced dimensions $n_1 \times 512 \times 512$ with
$n_1 = 2^j$ for $j = 10, \ldots, 14$. We choose $n_{\min} = 256$
for the 3D splitting. Since the recursion is structured to split
the larger dimensions first, in this case too the recursive 2D problems
are solved with the dense solver. The results in Table~\ref{tab:3d-laplace-n1}
confirm the expected linear scaling with respect to $n_1$, and
the approach is faster than the dense solver from dimension
$n_1 = 4096$.
\begin{filecontents}[overwrite]{test5_3D.dat}
1024 25.674 1.2733e-14 40.951 1.3719e-08
2048 68.523 1.4607e-14 87.925 1.4077e-08
4096 201.95 1.436e-14 187.54 1.422e-08
8192 680.83 1.523e-14 404.03 1.4287e-08
16384 2417.9 1.6778e-14 975.12 1.4319e-08
\end{filecontents}
\begin{table}
\caption{Timings and residuals for the solution of the 3D
Laplace equation of dimension $n_1 \times 512 \times 512$
by means of Algorithm~\ref{alg:dac}
using fADI as low rank solver, with
$n_{\min} = 256$ for the 3D splitting, and $n_{\min} = 1024$ for the
recursive 2D solver.}
\centering
\pgfplotstabletypeset[skip rows between index={0}{0},
empty cells with={\ensuremath{-}},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{\textbf{diag}}&
\multicolumn{2}{c}{\textbf{dc\_adi}} \\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c|}{}&
\multicolumn{2}{c}{$n_{\min} = 256$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n_1$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c|},
columns/3/.style = {column name = Time,precision=1,zerofill, fixed},
columns/4/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test5_3D.dat}
\label{tab:3d-laplace-n1}
\end{table}
\subsection{Asymptotic complexity in the 3D case}
\label{sec:3d-scaling}
The previous experiment
provides too few data points to assess the expected cubic complexity.
In addition, the use of the dense solver does not allow us to validate the error
analysis that we have performed, which
guarantees that the inexact solution of the subproblems does not
destroy the final accuracy.
To validate
the scaling and accuracy of Algorithm~\ref{alg:dac} as $n$ grows,
we set the minimal block size
to the small value $n_{\min} = 32$, and we measure the timings
for problems of size between $n = 64$ and $n = 1024$.
The results are reported in Figure~\ref{fig:3d-scaling} and
confirm the predicted accuracy and cubic scaling.
In addition, in the right part of Figure~\ref{fig:3d-scaling} we
display the time spent in the various parts of Algorithm~\ref{alg:dac}.
This highlights that the solution of the update equations dominates the
other costs, which is expected in view of the small $n_{\min}$. We
remark that the cost of the update equations also includes
the recursive calls to the 2D solver described in Algorithm~\ref{alg:dac2d}.
\begin{filecontents}[overwrite]{test6_3D.dat}
64 0.55549 2.2607e-08
128 1.7255 2.9292e-08
256 11.526 3.3761e-08
512 89.08 3.6305e-08
1024 839.79 7.7682e-08
\end{filecontents}
\begin{figure}
\centering
\begin{minipage}{.35\linewidth}
\pgfplotstabletypeset[skip rows between index={0}{0},
empty cells with={\ensuremath{-}},
every head row/.style={
before row={
\toprule
\multicolumn{1}{c|}{}&
\multicolumn{2}{c}{\textbf{dc\_adi}} \\
\multicolumn{1}{c|}{}&
\multicolumn{2}{c}{$n_{\min} = 32$}
\\
},
after row = \midrule,
},
columns/0/.style = {column name = $n$, column type=c|},
columns/1/.style = {column name = Time,precision=1,zerofill, fixed},
columns/2/.style = {column name = Res,precision=1,zerofill, column type=c},
]{test6_3D.dat}
\end{minipage}~\begin{minipage}{.64\linewidth}
\begin{tikzpicture}
\begin{axis}[
title = \text{3D Laplace, $n_{\min} = 32$},
x tick label style={
/pgf/number format/1000 sep=},
ylabel=\small{Percentage of time},
width = .82\linewidth, height = .2\textheight,
symbolic x coords={$n=256$, $n=512$, $n=1024$},
xtick = data,
tick label style={font=\scriptsize},
enlargelimits=0.3,
ybar,
bar width = .25cm,
legend style={
font=\scriptsize,
at={(0.99,-0.25)},
legend columns = -1,
cells={anchor=west}
},
]
\addplot coordinates { ($n=256$,6.93) ($n=512$,6.79) ($n=1024$,5.27) };
\addplot coordinates { ($n=256$,74.80) ($n=512$,72.68) ($n=1024$,73.52) };
\addplot coordinates { ($n=256$,12.13) ($n=512$,14.00) ($n=1024$,14.15) };
\addplot coordinates { ($n=256$,1.16) ($n=512$,0.20) ($n=1024$,0.04) };
\legend{Dense,Low-rank,RHS+Sol,Spectra}
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{On the left, timings and residuals for the 3D Laplace example
in Section~\ref{sec:3d-scaling}, with $n_{\min} = 32$ and fADI
as a low-rank solver for the update equations. On the right, the
distribution of the time spent in the different subtasks of
Algorithm~\ref{alg:dac}. }
\label{fig:3d-scaling}
\end{figure}
\section{Conclusions}
We have proposed a new solver for positive definite tensor Sylvester equations with hierarchically low-rank
coefficients that attains the quasi-optimal complexity $\mathcal O(n^d\log(n))$. Our procedure is based on a
nested divide-and-conquer paradigm. We have developed an error analysis that reveals the relation between the level
of inexactness in the solution of the nested subproblems and the final accuracy.
The numerical results demonstrate that the proposed solver can significantly speed up the solution of
matrix Sylvester equations of medium size. In the 3D-tensor case with equal mode sizes,
the method is slower than the dense solver based on diagonalization when addressing sizes
up to $1024\times1024\times1024$. On the other hand, the performances are quite close and we expect
that running the simulations in a distributed memory environment or on a more powerful machine
would uncover a breakeven point around $2048\times2048\times2048$.
A further speedup might be obtained by employing a relaxation strategy for the inexactness
of the linear system solves in fADI or RK \cite{kurschner2020inexact}. This may provide significant
advantages for $d>2$, where the time spent on solving the equations with low-rank right-hand side is above
$70\%$ of the total.
Another promising direction is to adapt the method to exploit block low-rank structures in the
right-hand side; this will be the subject of future investigation.
\bibliographystyle{siam}
\section{Introduction}\vspace{-.3cm}\label{se:introduction}
\label{Introduction}
Cataloguing and organizing science often involves taxonomies, ontologies, and knowledge graphs, but most often research topics are categorized in hierarchical trees~\cite{amsClassification,Effendy17}; see Fig.~\ref{fig:topicsnetwork}. For example, ``Hardware" and ``computer systems organization" are subfields of ``computer science."
Knowledge graphs make it possible to see more of the connections between topics than can be embedded in a tree; however, the ability to clearly show the underlying hierarchical structures is compromised.
\begin{figure}[t]
\hfil \includegraphics[width=1\textwidth]{topicsnetwork.pdf} \hfil
\vspace{-.3cm}\caption{Part of the ACM classification (left) and a more realistic network showing different types of connections between topics (right).\vspace{-.5cm}}
\label{fig:topicsnetwork}
\end{figure}
Maps have guided human exploration for many centuries and recently there have been several efforts to visualize scholarly knowledge and research expertise using topic maps. Basemaps of science can be generated, for example, by analyzing citation links between publications and placing similar records next to each other.
Such maps can be used to compare expertise profiles, to understand career trajectories, and to communicate emerging areas, as illustrated in the special {\it PNAS} issue on ``Mapping Knowledge Domains"~\cite{mapkd}, and B\"orner's ``Atlas of Science"~\cite{a1} and ``Atlas of Knowledge"~\cite{a2}.
Most maps of science are really node-link diagrams with one level of detail; a few support two or more levels (e.g., the UCSD map of science, which is the current standard,
has two levels of detail)~\cite{borner_design_2012}. However, people have difficulty reading large-scale networks~\cite{is-sensem} and few can derive knowledge from multi-level representations of networks.
Given encouraging results about the effectiveness of map-like visualization of large graphs~\cite{nlg,saket2015map}, we adopt the Graph-to-Map (GMap) framework~\cite{Gansner10} to visualize and explore our research topics map.
The GMap visualization of relational data was introduced in the context of visualizing recommendations,
where the underlying data consists of TV shows and the similarities between
them~\cite{Gansner09}, and it has already been used to visualize research topics in computer science publications~\cite{Fried14}.
Our research topics map system, {\tt rtopmap}, covers all research topics indexed by google scholar and provides the ability to show
human resource investments (e.g., number of researchers in different areas) and scholarly output (e.g., citation counts in different areas) of different universities.
The system supports zooming/panning/searching and other google-maps-based interactive features, including several map overlays showing what parts of the map are associated with different academic departments, or with specific documents; see
Fig.~\ref{fig:overview} for an overview of the system.
We gather data from google scholar and then
clean, split, merge, and correct the research topics (which become the nodes in the graph). We next compute a similarity matrix based on co-occurrence of topics in scholar profiles, which is used to place edges between topics that are frequently listed together. This gives us the topic network. We reduce the size of the graph by removing rarely occurring topics and weak connections. We then use a multi-level force-directed placement, node overlap removal, and clustering algorithms to represent the graph as a map.
Nodes, node labels, polygon colors, and edges are transformed into google map objects, which are then
drawn in the browser using the google maps API. Eight different levels of detail (zoom levels) are precomputed, determining which nodes are present at a given level, computing label font sizes, and ensuring that no labels overlap.
Different overlays are added on demand.
\begin{figure}
\hfil \vspace{-.3cm}\includegraphics[width=1\textwidth]{diagram.pdf} \hfil
\vspace{-.35cm}\caption{Overview of the {\tt rtopmap} system.}\vspace{-.6cm}
\label{fig:overview}
\end{figure}
\section{Related work}
\vspace{-.3cm}
Today, the most comprehensive map of science and classification uses ten years of paper-level data from Thomson Reuters' Web of Science and Elsevier's Scopus to group about 25,000 journals into 554 subdisciplines that are further aggregated into 13 disciplines; see data and detailed procedure in~\cite{BKP12}. However, the two-level map of 13 disciplines and 554 subdisciplines is too coarse for organizing, navigating, managing, and making sense of millions of publications.
Microsoft's Academic Graph database has 50,000 fields of study (FOS)~\cite{sinha2015overview}. Three levels of relationships are present among the fields, although field importance is not measured or quantified. A FOS score based on researcher and citation counts has been proposed for computer science~\cite{Effendy17}.
Hug {\it et al.} analyze FOS and report that they tend to be dynamic and too specific, while field hierarchies are incoherent~\cite{HugOB16}.
Liu {\em et al.}~\cite{liu2014hierarchical} use a hierarchical latent tree model (HLTM) to extract a set of hierarchical topics to summarize a corpus at different levels of abstraction. In HLTM, a topic is determined by identifying words that appear with high frequency in the
topic and with low frequency in other topics. Yang {\em et al.}~\cite{Yang2017Qu} use a HLTM in their visual analytics system, VISTopic.
Mane and B\"orner~\cite{Mane04} visualize 50 frequent and bursty words in their analysis of publication of the Proceeding of the National Academy of Sciences
Words from paper titles have also been used as indicators
for the content of a research topic, and visualizations based on this approach have been studied~\cite{van2006mapping,Fried14,Zhou2016}.
Many earlier approaches focus on analyzing specific journals, conferences, or research areas, e.g., analyzing computer science conferences and journals~\cite{Fried14}, trends in computer science research~\cite{Effendy17}, the International Conference on Data Mining (ICDM)~\cite{Misue14}, publications in data visualization~\cite{Heimerl16}.
Domenico {\it et al.}~\cite{DeDomenico2016} quantify attractive topics (i.e., topics that attract researchers from different areas).
Sun {\it et al.}~\cite{Sun16} build a network, with computer science conferences as nodes and edges between two conferences with common authors.
Map-based visualization has been used for document visualization~\cite{Wise95,skupin2002cartographic}.
Citations are considered an important contribution measurement~\cite{zhao2015analysis} and are used in visualizations of scholar profiles~\cite{Portenoy2017} and paper recommendation systems~\cite{West16}.
Citation analysis with data from the Web of Science~\cite{perianes2015university} and from Microsoft's Academic Graph~\cite{HugOB16} have been considered.
CiteRivers~\cite{Heimerl16} and CiteVIS~\cite{stasko2013citevis} analyze and visualize IEEE VIS conference citations, as do Ke {\em et al.}~\cite{ke2004major}.
Also related to our work are many of the graph visualization techniques and tools.
Graph layout algorithms are also provided in several libraries, such as GraphViz~\cite{graphviz}, OGDF~\cite{chimani2011}, MSAGL~\cite{nachmanson2008}, and VTK~\cite{schroeder2000}, which, however, do not support interaction, navigation, and data manipulation. Visualization toolkits such as Prefuse~\cite{heer2005}, Tulip~\cite{auber2012}, Gephi~\cite{bastian2009}, and yEd~\cite{yed} support visual graph manipulation, and while they can handle large graphs,
their rendering does not: even for graphs with a few thousand vertices, the amount of information rendered statically on the screen makes the visualization difficult to use.
There are research papers that describe interactive multi-level interfaces for exploring large graphs such as ASK-GraphView~\cite{abello2006}, topological fisheye views~\cite{GKN05}, and Grokker~\cite{rivadeneira21}. Software applications such as Pajek~\cite{de2011} for social networks, and Cytoscape~\cite{shannon2003} for biological data provide limited support for multi-level network visualization. These approaches rely on meta-graphs, made out of meta-vertices and meta-edges, which make interactions such as semantic zooming, searching, and navigation counter-intuitive. Not many of the tools and systems above provide browser-level navigation and interaction for large graphs.
Our work differs from earlier related work in several important ways: (1) we collect the underlying data using a bottom-up approach, based on self-reported data from the actual researchers, rather than using top-down taxonomies and ontologies, (2) our visualization provides map functionality (multiple zoom levels, searching, overlays), and (3) the ability to customize both the underlying base map and the overlays.
\vspace{-.2cm}
\section{Network Generation}\vspace{-.3cm}
The set of research topics is not fixed or even well defined, as new topics are continuously created while old ones fade away. Automatically extracting keyword based topics from the research literature is a popular approach~\cite{Yang2017Qu,Fried14}, but has many limitations, such as identifying general topics (e.g., mathematics, physics) and specific sub-topics (e.g., graph drawing, network visualization).
We use self-reported research topics from google scholar.
Before we can build the topic graph, we scrape the data and then
clean, split, merge, and correct the research topics.
Next we build a similarity matrix $M$ with topics as rows and columns. The value $M(i,j)$ represents the similarity between the pair of topics $(i,j)$, based on co-occurrence of the two topics in scholar profiles.
The complete network is quite large, containing about 35,000 topics and 646,000 edges.
We reduce the size of the network by removing nodes and edges with low weights. Node weight is directly proportional to the number of scholar profiles that contain that topic, and edge weights are directly proportional to the number of scholar profiles that contain both topics.
We remove a large number of infrequent topics, topics that contain typos, and topics listed in languages other than English.
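A minimal Python sketch of the network construction and pruning just described (the function name and thresholds are illustrative):
\begin{verbatim}
from collections import Counter
from itertools import combinations

def build_topic_network(profiles, min_node_weight=10, min_edge_weight=2):
    """Build the topic co-occurrence network from a list of profiles, each
    given as a list of (already cleaned) topic strings.  Node weight counts
    the profiles listing a topic, edge weight the profiles listing both
    endpoints; low-weight nodes and edges are pruned."""
    node_w, edge_w = Counter(), Counter()
    for topics in profiles:
        topics = sorted(set(topics))
        node_w.update(topics)
        edge_w.update(combinations(topics, 2))
    nodes = {t: w for t, w in node_w.items() if w >= min_node_weight}
    edges = {(u, v): w for (u, v), w in edge_w.items()
             if w >= min_edge_weight and u in nodes and v in nodes}
    return nodes, edges
\end{verbatim}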
\medskip \noindent{\bf Data Scraping:}
While some analysis of google scholar data exists~\cite{jacso2005google,falagas2008comparison,bar2007h}, there is not much work based on data extracted from google scholar.
Data retrieval is laborious due to the lack of API and metadata scarcity~\cite{bornmann2016application}.
We start with a list of 1,000 universities~\cite{top1000} and then request google scholar IDs for each university (e.g., MIT's ID is 16345133980181568013). We next collect research profiles from each university by scraping the URL associated with that university (e.g., https://scholar.google.com/citations?view\_op=view\_org\&org=16345133980181568013\&hl=en\&oi=io). Finally, we extract the name, affiliation, citations, and research topics of each individual researcher at that university, using a regular expression to match the relevant fields in the html file.
\medskip\noindent{\bf Data Cleaning:}
In the early days, researchers creating google scholar profiles manually created their own research topics. This might account for the large number of typos and acronyms in the dataset. These days google auto-suggests relevant topics and allows up to five research topics per profile.
We use the comma as the primary topic separator and a regular expression to replace other separators (e.g., ... / ; . \#) with a comma. To remove html tags we use beautifulsoup~\cite{bsoup}, a python package for parsing html.
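A minimal sketch of this cleaning step, assuming the raw topic field has already been extracted as an html snippet; the function name and the separator set are illustrative.
\begin{verbatim}
import re
from bs4 import BeautifulSoup   # beautifulsoup4

def clean_topic_field(raw_html):
    """Strip html tags, normalize alternative separators to commas,
    and return a list of lower-cased topic strings."""
    text = BeautifulSoup(raw_html, "html.parser").get_text()
    text = re.sub(r"[/;.#]+", ",", text)   # other separators -> comma
    return [t.strip().lower() for t in text.split(",") if t.strip()]

print(clean_topic_field("Machine Learning; Computer Vision / Data Mining"))
# ['machine learning', 'computer vision', 'data mining']
\end{verbatim}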
\medskip\noindent{\bf Topic Splitting and Merging: }
Once the data above is collected and analyzed, it is easy to see that many topics should be split; see Table~\ref{split-label}.
We split topics by pattern-matching conjunctions (i.e., or, and).
\begin{table}[]
\centering
\begin{tabularx}{1\textwidth}{|X|}
\hline
\dots methods for longitudinal \textbf{or} clustered data, statistics \textbf{for} neuroscience, \dots\\ \hline
\dots data \textbf{and} model management, data mining, bioinformatics, algorithms \dots \\ \hline
\dots new energy materials, Supercapacitor, photo \textbf{and} electro-catalysis of water \dots \\ \hline
\dots Thyroid, Nuclear Cardiology \textbf{and} Neurology, Gluconeogenesis, \dots \\ \hline
\end{tabularx}
\medskip\caption{Examples of records listing multiple topics that should be split. \label{split-label}
}\vspace{-.6cm}
\end{table}
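A possible sketch of the conjunction-based splitting described above; the regular expression is a simplification and does not attempt to decide when a conjunction is part of a single topic name.
\begin{verbatim}
import re

def split_on_conjunctions(topic):
    """Split a topic string at ' and ' / ' or ' and return the trimmed parts."""
    parts = re.split(r"\s+(?:and|or)\s+", topic)
    return [p.strip() for p in parts if p.strip()]

print(split_on_conjunctions("Nuclear Cardiology and Neurology"))
# ['Nuclear Cardiology', 'Neurology']
\end{verbatim}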
Merging is appropriate for topics that are similar but listed slightly differently. For example, out of half a million researchers in our dataset, 100 list $algorithm$, 20 list $algorithmics$, and 1,087 list $algorithms$. To handle this problem we need to determine the main topic with which the other topics should be merged. We use snowball~\cite{porter2001snowball}
to find the root word by applying stemming (which removes endings such as $-s$, $-ed$, $-ing$).
In the example above, snowball converts $algorithm$, $algorithmics$, and $algorithms$ to the stemmed word $algorithm$; however, applying snowball may result in stems that are difficult to understand (e.g., $applied$ and $applications$ are converted to $appli$). With this in mind, we set the main topic to be the one with the highest frequency among all the topics with the same stem.
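A small sketch of this merging step, using the Snowball stemmer from NLTK; the counts are the ones quoted above, and the helper names are made up for illustration.
\begin{verbatim}
from collections import Counter, defaultdict
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
counts = Counter({"algorithms": 1087, "algorithm": 100, "algorithmics": 20})

def stem_topic(topic):
    return " ".join(stemmer.stem(w) for w in topic.split())

# group variants by stem, then pick the most frequent variant as the main topic
groups = defaultdict(list)
for topic, freq in counts.items():
    groups[stem_topic(topic)].append((freq, topic))
main_topic = {stem: max(variants)[1] for stem, variants in groups.items()}

merge_map = {t: main_topic[stem_topic(t)] for t in counts}
print(merge_map)   # all three variants map to 'algorithms'
\end{verbatim}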
\medskip\noindent{\bf Topic Correction:}
Topic splitting and merging does not resolve all topic issues, as further modifications might be needed due to leading and trailing spaces, lower and upper case letters, punctuation and control characters, and duplicate words. Other issues of this type include ``Human Computer Interaction" and ``Computer Human Interaction," which are really the same topic. We try to address such issues with Google's Openrefine~\cite{OpenRefine} fingerprint key collision method, which attempts to find alternate representations of the same topic~\cite{cavnar1994n,hjaltason2003index,elmagarmid2007duplicate}.
\medskip\noindent{\bf Network Statistics:}
After the steps above, our topic graph contains 34,774 topics and 646,582 edges.
There are 17 components, but just one giant connected component (34,741 nodes and 646,565 edges).
The average shortest path length is 3.141, which shows that the topic network is highly connected. The graph has a low global clustering coefficient of 0.09 (defined as the ratio of the number of triangles to the total number of node triples); see the degree distribution in Fig.~\ref{fig:nodegraph}.
The node ``machine learning" has the highest degree and more researchers are reporting working on this topic than any other. Figure~\ref{fig:gstats} shows the top ten topics by degree, by number of researchers, by citations per person.
\begin{figure}[b]
\vspace{-.2cm}\hfil \includegraphics[width=1\textwidth]{graphstats.pdf} \hfil
\vspace{-.3cm}\caption{Top ten topics: highest degree, number of researchers, number of citations.\vspace{-.5cm}}
\label{fig:gstats}
\end{figure}
Figure~\ref{fig:profiles} shows which institutions contributed the most scholar profiles.
Interestingly, some universities seem to have more profiles than academic staff (likely due to doctoral and postdoctoral students with university affiliations), although the majority of the universities are associated with fewer profiles than the size of their academic staff.
\vspace{-.2cm}
\section{Map Generation}\vspace{-.3cm}
We use the GMap framework~\cite{Gansner10} to generate map layouts of the research topic graph and extend it to support semantic zooming.
There are three high-level steps: (1) embedding the topic graph in the plane, (2) grouping vertices into clusters, and (3) creating the geographic map representation.
We embed the graph using a scalable force-directed algorithm ({\tt sfdp} from graphviz) and then group the vertices using $k$-means clustering.
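The first two steps can be prototyped with standard Python tooling; the following sketch uses networkx's GraphViz bindings and scikit-learn, which are assumptions about tooling rather than the actual implementation (the pipeline itself uses GMap directly).
\begin{verbatim}
import networkx as nx
from sklearn.cluster import KMeans

def embed_and_cluster(G, n_clusters=20):
    """Step (1): force-directed embedding with sfdp (needs pygraphviz);
    step (2): group the embedded vertices with k-means."""
    pos = nx.nx_agraph.graphviz_layout(G, prog="sfdp")   # {node: (x, y)}
    nodes = list(G.nodes())
    coords = [pos[v] for v in nodes]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(coords)
    return pos, dict(zip(nodes, labels))
\end{verbatim}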
To create the geographic map look, we use a modified Voronoi diagram based on the obtained embedding and clustering. The geographic regions are colored such that no two adjacent countries have colors that are too similar, using the spectral vertex labeling method~\cite{Gansner10}.
We use the GraphViz implementation of node-overlap removal provided by {\tt prism}, but it produces non-overlapping labels only for the complete basemap, and not for the other 7 level-of-detail views needed for semantic zooming. Semantic zoom requires modifications to nodes, edges, clusters, and heatmaps. The google maps API handles all of these issues except for node overlap (and hence node-label overlap), which is a natural side effect of zooming in. To ensure that neither nodes nor labels overlap on any zoom level, we compute different {\it node visibilities} for different zoom levels. For each level, we sort the nodes by their weight (recall that node weight is proportional to the number of researchers working on the topic associated with the node). We make the $i$-th node visible on the $j$-th level
if the bounding box of the $i$-th node does not overlap with the bounding boxes of nodes $1,2,\cdots, (i-1)$. This algorithm takes $O(n^2)$ time, but it can be improved using the approach of~\cite{dwyer2005fast}.
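A sketch of the greedy visibility computation for one zoom level follows; it implements one plausible reading of the rule above (a label is shown only if its box misses the boxes of all heavier nodes accepted so far), and the names and box format are illustrative.
\begin{verbatim}
def boxes_overlap(a, b):
    """a, b: axis-aligned label boxes (xmin, ymin, xmax, ymax)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def visible_on_level(label_boxes):
    """label_boxes: boxes for one zoom level, sorted by decreasing node weight.
    Returns indices of nodes whose boxes avoid all previously accepted boxes."""
    visible = []
    for i, box in enumerate(label_boxes):
        if all(not boxes_overlap(box, label_boxes[j]) for j in visible):
            visible.append(i)
    return visible   # O(n^2) worst case, as noted above
\end{verbatim}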
Figure~\ref{fig:zoom} shows how the local neighborhood of ``computer vision" changes across different zoom levels.
\begin{figure}[t]
\hfil \includegraphics[width=1\textwidth]{zoom.pdf} \hfil
\vspace{-.4cm}\caption{Three zoom-level views near the ``computer vision" topic.}\vspace{-.4cm}
\label{fig:zoom}
\end{figure}
The font size of the label for topic $t$ is directly proportional to the number of researchers working on that topic, denoted by the weight $w_t$.
We assign font sizes from the range 80\% to 200\% of the default browser font size as follows:
\begin{equation*}
\mathcal{F}_t =
\begin{cases}
80 & \text{if $w_t/10 \le 80 $} \\
200 & \text{if $w_t/10 \ge 200 $} \\
w_t/10 & \text{otherwise}
\end{cases}
\end{equation*}
\paragraph{\bf Web Interface and User Interaction:}
GMap produces a ``basemap" from the given graph which is a static image that is not ideal for user interaction, such as zooming, panning, and searching. We enable interactions with the basemap with the help of the google maps API~\cite{svennerberg2010beginning}. Specifically, we take the output from GMap and convert it into google map objects, i.e., $google.maps.SymbolPath$, $google.maps.Polygon$, $google.maps.Polyline$, etc.
For the web interface we provide 8 levels of detail, showing different subgraphs depending on the zoom level.
We provide basic search functionality, which finds topics containing the query words. Clicking on a node shows the number of people who work on that topic in the underlying dataset and highlights edges to adjacent nodes (other topics that are frequently co-listed with that topic); see Fig.~\ref{fig:nodeselection}.
\paragraph{\bf Basemaps:}
We compute several different basemaps, each covering a different set of universities. The full map is determined by researchers in 1,000 universities around the world~\cite{top1000}, but we also provide basemaps for universities in the United States and universities in Europe.
Changing the basemap results in different node-weights and hence different label font-sizes. This is a useful feature when comparing a specific US university with universities around the world, with universities in the US, or with universities in Europe.
\paragraph{\bf Overlays:}
After creating the basemap and all level-of-detail maps, we use overlays to show additional information, such as the human resource investments and citations associated with a specific university. Overlays can also be used to highlight topics associated with different departments (e.g., Computer Science, History) and even individual text documents (e.g., a research paper or a call for proposals). The overlay requests are collected in the browser (client) and processed on the server, which then returns the necessary data to produce the overlay in the client. This is discussed in more detail in the next section.
\section{Knowledge Strengths and Weaknesses}\vspace{-.3cm}
Quantifying knowledge strengths and weaknesses is a non-trivial challenge. We provide a university-level search feature that allows a specific university to be selected. We then attempt to visualize the strengths and weaknesses of that university by computing the number of people working on different research topics and the number of citations associated with different topics for researchers from that university. Figure~\ref{fig:strengthandweakness} illustrates this using the University of Arizona (UofA), Arizona State University (ASU), and the California Institute of Technology (CalTech). It is easy to see that UofA has a significant human resource investment in ecology/evolution (large green circles around these topics), and this translates to many citations in these topics. Such visualizations also make it possible to see that UofA has not invested human resources in computer science (purple circles around CS topics), but CS is still associated with a large number of citations.
\begin{figure}[t]
\hfil \includegraphics[width=1\textwidth]{strength-weakness.pdf} \hfil
\vspace{-.3cm}\caption{
University of Arizona (left column), Arizona State University (middle column), and California Institute of Technology (right column). (a-c) Heatmap overlays, based on citations. (e-g) Showing the number of people associated with different topics: green (purple) circles represent higher (lower) than average number of people working in this area. (h-j) Normalized citation heatmaps. Higher resolution images can be found in the image gallery at \url{rtopmap.arl.arizona.edu}.\vspace{-.2cm} }
\label{fig:strengthandweakness}
\end{figure}
\vspace{-.2cm}
\subsubsection{Citation Overlays: }
Let $r$ be a researcher and let $topics(r)$ be the set of topics associated with researcher $r$. We denote the number of citations received by $r$ by $cite(r)$.
Then we can define $T$, the set of topics associated with university $X$, as follows:
$T=\bigcup\limits_{r \in X} topics(r)$. The citations associated with university $X$ for each topic $t \in T$ are determined by the sum of citations of researchers at university $X$ who work on topic $t$:
$c_X(t)=\sum\limits_{r\in X,\, t\in topics(r)} cite(r)$.
The above formulas produce raw citation counts, although not all research fields cite at the same rate; e.g., ``particle physics" is associated with more citations than average (due to the high number of co-authors and citations per paper); see also the citations-per-person table in Fig.~\ref{fig:gstats}.
With this in mind, we provide the option to normalize citation counts by the total number of citations associated with a specific field $t$: $normalized\ citation\ of\ t= c_X(t) \frac{c(t)}{C}$, where $c(t)=\sum\limits_{t\in topics(r)} cite(r)$ is the total number of citations over all researchers who list topic $t$ (not only those at university $X$) and $C$ is the total number of citations for all topics.
When a researcher lists multiple topics, it is not easy to determine which citation contributes to which topic. In this case each citation contributes equally to each topic. A more careful analysis of the citation meta-data might allow us to distribute the contribution of each citation to different topics.
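In code, the per-topic citation count $c_X(t)$ amounts to the following sketch; the researcher records and the equal-contribution convention are as described above, and the names are illustrative.
\begin{verbatim}
from collections import defaultdict

def topic_citations(researchers):
    """researchers: iterable of (citations, topics) pairs for one university.
    Each researcher's full citation count is credited to every listed topic."""
    c_X = defaultdict(int)
    for cite, topics in researchers:
        for t in set(topics):
            c_X[t] += cite
    return c_X

uofa = [(1200, ["ecology", "evolution"]), (300, ["computer vision"])]
print(dict(topic_citations(uofa)))
\end{verbatim}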
\subsubsection{Human Resources Overlays:}
We calculate the human resource investment of a particular university by simply counting researchers and comparing the results to averages. That is, to determine the human resource investment in topic $t$ at university $X$, given a base set (top 1,000 universities, US universities, European universities), we calculate the difference between the percentage of researchers at university $X$ who work on topic $t$ and the percentage of researchers in the base set who work on topic $t$. If this difference is positive (negative), then we consider this a human resource strength (weakness) of university $X$. This is illustrated with circles of different colors: green for strength and purple for weakness. The size of the circles is proportional to the magnitude
|
3} we get
\begin{equation} \label{e14}
|n|=16\, ,
\end{equation}
as is required for getting duality invariant spectrum of massless states.
\subsection{Type IIA on $(K3 \times
(T^4)')/(-1)^{F_L}\cdot\sigma_{II}\cdot{\cal I}_4'$
and heterotic theory on $(T^4 \times (T^4)')/{\cal I}_{20,4}\cdot\sigma_H
\cdot{\cal I}_4'$}
We consider type IIA theory compactified on a special class of $K3$
surfaces which have a $Z_2$ isometry generated by $\sigma_{II}$ with the
following properties\cite{NIKULIN,SCHSEN,CHLO}:
\begin{enumerate}
\item{It exchanges the two $E_8$ factors in the lattice of second cohomology
elements of $K3$.}
\item{It has eight fixed points on $K3$.}
\item{Modding out by this symmetry gives us back an orbifold of SU(2)
holonomy.}
\end{enumerate}
The corresponding transformation $\sigma_H$
in the dual heterotic string theory on
$T^4$ simply exchanges the two $E_8$ gauge groups in the theory.
We now further compactify both theories on a four torus $(T^4)'$ and
mod out the type IIA theory by $(-1)^{F_L}\cdot\sigma_{II}\cdot{\cal I}_4'$
and the heterotic theory by its image ${\cal I}_{20,4}\cdot\sigma_H\cdot{\cal I}_4'$.
This leads us to the dual pair described above.
We shall now compare the spectrum of massless states in the twisted
sector in the two theories.
By making an $R\to(1/R)$
duality transformation on one of the circles of $(T^4)'$ we can map the
type IIA
theory to type IIB on $(K3\times(T^4)')/\sigma_{II}\cdot{\cal I}_4'$.
In this theory there are $8\times16$
fixed points, since $\sigma_{II}$ has eight fixed points on $K3$ and
${\cal I}_4'$ has sixteen fixed points on $(T^4)'$.
The twisted sector
at each of these fixed points is characterized by eight anti-periodic
bosons on the left and eight anti-periodic bosons on the right.
The eight fermions on either
side are periodic in the NS sector and anti-periodic in the R sector.
As a result, the total ground state energy on either side is zero
in the R sector and one in the NS sector. Thus the only massless states
in the twisted sector arise from the ground state of the RR sector. This
state is unique since all the fermions are anti-periodic. Furthermore,
this state, as constructed, is chiral, since in the light cone gauge that
we are using the $k^+$ component of the momentum is constrained to be
non-vanishing. It is easy to verify by working in the covariant formulation
that there are no massless states
in this theory from the twisted sector with non-vanishing $k^-$. Thus
each fixed point gives a massless chiral boson, giving a total of
128 massless chiral bosons in this theory. One can also verify that
there are no $B_{\mu\nu}$ tadpoles in this theory since the type IIB
theory is invariant under a world-sheet parity transformation under
which $B_{\mu\nu}$ changes sign\cite{VWIT}. Thus we do not need to
introduce any background elementary type II string.
Let us now turn to the twisted sector states in the heterotic theory.
There are altogether $16\times 16$ fixed points since ${\cal I}_{20,4}$
has sixteen fixed points on $T^4$ and ${\cal I}_4'$ has sixteen fixed points
on $(T^4)'$. The action of the $Z_2$ transformation
on the sixteen internal left moving bosons is to exchange eight of them
with eight others. Thus by taking appropriate linear combinations of these
bosons we get eight periodic and eight anti-periodic internal bosons in the
twisted sector. On the other hand all the eight bosons associated with the
eight coordinates labelling $T^4\times (T^4)'$ are anti-periodic both
in the left and the right moving side. Thus we have a total of eight
periodic and sixteen anti-periodic bosons on the left, giving a total
vacuum energy of zero. On the right hand side we have eight anti-periodic
bosons. In the NS sector, the eight fermions on the right
are periodic, giving a total vacuum energy one, whereas in the R sector
the eight fermions on the right are anti-periodic, giving a total
vacuum energy zero. Thus we get a unique massless state from the Ramond
ground state on the right. This is a fermionic state, and as before one
can verify by working in the covariant formalism that these are
chiral fermions. Thus we get a total of 256 chiral fermions from the
256 fixed points. Using Bose-Fermi equivalence in two dimensions these
can be shown to be equivalent to
128 chiral bosons. Thus we again get an identical spectrum of massless fields from the twisted sector of the two theories.
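For orientation, the vacuum-energy counting above can be spelled out explicitly. In a normalization where an anti-periodic boson contributes $+1/24$, a periodic boson $-1/12$, a periodic world-sheet fermion $+1/12$ and an anti-periodic fermion $-1/24$ per transverse direction (these per-oscillator values are an assumption chosen here to match the numbers quoted above), the heterotic twisted sector gives
\[
E_L = 8\cdot\left(-\frac{1}{12}\right)+16\cdot\frac{1}{24}=0\, ,\qquad
E_R^{NS} = 8\cdot\frac{1}{24}+8\cdot\frac{1}{12}=1\, ,\qquad
E_R^{R} = 8\cdot\frac{1}{24}-8\cdot\frac{1}{24}=0\, ,
\]
and the same bookkeeping applied to the eight anti-periodic bosons and eight fermions on each side reproduces the type IIB values quoted earlier.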
It remains to verify that there is no $B_{\mu\nu}$ tadpole in this
theory, since any such tadpole will force us to introduce heterotic
string background and hence introduce new massless states from the
collective coordinates of these strings. The calculation proceeds
as in the last subsection. In particular, the untwisted sector
contribution to $A_M(q)$ is now given by
\begin{equation} \label{e15}
-8(q^{-1}-8+O(q))\, .
\end{equation}
Note that the 24 in eq.\refb{e13} has been replaced by 8, since
of the 24 left moving bosonic oscillators 16 are odd and 8 are even under
${\cal I}_{20,4}\cdot\sigma_H\cdot{\cal I}_4'$. Thus 16 of these oscillator
states need to be accompanied by $(-1)^{f_R}=-1$ states from the
right, and 8 of them have to be accompanied by $(-1)^{f_R}=1$ states
from the right. This time there is also twisted sector contribution
to $A_M(q)$. The 256 massless twisted sector states, each with
$(-1)^{f_R}=1$, contributes a factor of $-256$ to $A_M(q)$. (The $-1$
is again due to the fact that these states are all space-time
fermions.) This gives
\begin{equation} \label{e16}
A_M(q)=-8(q^{-1}+24+O(q))\, .
\end{equation}
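Explicitly, combining the untwisted contribution \refb{e15} with the twisted-sector contribution $-256$ gives
\[
-8\left(q^{-1}-8+O(q)\right)-256 = -8q^{-1}+64-256+O(q) = -8\left(q^{-1}+24+O(q)\right)\, .
\]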
Substituting this and \refb{e4} into \refb{e3} we get
\begin{equation} \label{e17}
n=0\, ,
\end{equation}
as desired.
\section{Conclusion}
In this paper we have constructed several examples where the orbifolding
procedure commutes with the duality transformation. We start with a known
dual pair of theories, identify a pair of symmetries in these two theories
that are related by a duality transformation, and mod out both theories by
their respective symmetries to construct a dual pair. In most cases, if
we do not combine the original pair of symmetries with a space-time
symmetry transformation,
we are led to an inconsistent result. On the other hand, if
we combine the original pair of symmetries (which could be called internal
symmetries) with a space-time symmetry transformation (with fixed
points in general), and then construct the orbifold,
we get a consistent dual pair.
In many of these cases, we get an identical spectrum of massless states
in the dual pair of theories constructed this way only after introducing
appropriate background fields that cancel the one loop tadpoles in both
the theories. This puts non-trivial constraints on the coefficients of one
loop tadpoles in various theories, and in every
case that has been studied one finds that the coefficient of the
tadpole is consistent with the predictions of duality.
Many examples of dual pairs of theories, constructed by modding out
another dual pair by appropriate symmetries, have
been discussed before\cite{FHSV,VAFAWIT,SENVAFA,ALOK}.
Our examples differ from most of the previous examples
in that in our models, there are massless states from the `twisted
sector' in both theories {\it at a generic point in the moduli
space.} So far there has been no systematic rule for
determining when orbifolding commutes with duality transformation,
except in cases where
the adiabatic argument of ref.\cite{VAFAWIT} is applicable.
We hope that the results of this paper will provide a step towards
a more systematic understanding of this phenomenon.
Finally, the result of this paper boosts our confidence in the
results of ref.\cite{MORBI} where many of the conjectures involving
orbifolds of $M$-theory were derived using the ansatz that orbifolding
procedure commutes with the duality transformation. The only place
where this procedure failed was in finding the dual of the $Z_2$ orbifold
of $M$ theory on $S^1$, where it gave the dual theory as type IIB string
theory in ten dimensions instead of the $E_8\times E_8$ heterotic
string\cite{HOR}.
According to the classification given in this paper, this example falls
in the class 3, where the argument for duality is the weakest, and fails
even in many string theory examples. On the other hand, every other
example discussed in ref.\cite{MORBI}, where this procedure gave
sensible answer, is of type 2(b)
in our classification. As we saw in this paper, in many string theory
examples of this kind we get sensible answers by assuming that
orbifolding commutes with the duality group. It is satisfying that even
in the $M$-theory examples we got sensible answers precisely for this
class of models.
I wish to thank K. Dasgupta and S. Mukhi for useful discussions.
I would also like to thank the theoretical high energy physics group at
Rutgers University for hospitality during the course of this work.
|
\section{Introduction}
The present paper continues the investigation of analogues of Witt cancellation for stably trivial quadratic forms over smooth affine schemes, which was started in \cite{hyperbolic-dim3}, originally motivated by the MathOverflow question 166249 ``Metabolic vs stably metabolic'' by K.J. Moi. In the previous paper, some low-dimensional results and examples were discussed; this time, the focus is on higher-dimensional schemes. The question investigated in this paper is whether a stably hyperbolic or stably trivial quadratic form splits off a hyperbolic plane. Over a field, stably hyperbolic implies hyperbolic by Witt cancellation. Over schemes, this is no longer true: stably trivial forms don't necessarily split off hyperbolic planes, and it is interesting to know what kind of invariants can detect this failure. For generically trivial forms over smooth affine schemes over perfect fields of characteristic $\neq 2$, such questions can, via the $\mathbb{A}^1$-representability theorems of \cite{gbundles2}, be translated into questions of $\mathbb{A}^1$-obstruction theory. Ultimately, the above question is one about reduction of structure group along the stabilization morphism $\operatorname{SO}(n)\to\operatorname{SO}(n+2)$. Hence the relevant spaces controlling the splitting questions are the orthogonal Stiefel varieties $\operatorname{V}_{2,n+2}:=\operatorname{SO}(n+2)/\operatorname{SO}(n)$. The goal of the present paper is then to compute some $\mathbb{A}^1$-homotopy sheaves of these varieties, investigate the corresponding obstruction groups, and deduce some results on splitting of quadratic forms.
\subsection{Splitting results}
First, let me describe more precisely the results on $\mathbb{A}^1$-homotopy sheaves of orthogonal Stiefel varieties and the consequences for the splitting questions. A first easy result one can prove is that the orthogonal Stiefel varieties $\operatorname{V}_{2,2n}$ and $\operatorname{V}_{2,2n+1}$ are $\mathbb{A}^1$-$(n-2)$-connected, as one would expect from real realization. A direct consequence of this is the following general splitting theorem. This is a quadratic-form version of a similar result of Serre on projective modules; for stably hyperbolic forms, it was proved by Roy \cite{roy} in greater generality. The $\mathbb{A}^1$-topology approach makes it possible to give a proof very close to classical algebraic-topological splitting results, cf.~Theorem~\ref{thm:stablesplitting}.
\begin{theorem}
\label{thm:roy}
Let $F$ be a perfect field of characteristic unequal to $2$, let $X=\operatorname{Spec}A$ be a smooth affine scheme of dimension $d$ over $F$ and let $(\mathscr{P},\phi)$ be a generically trivial quadratic form over $A$ of rank $2n$ or $2n+1$. If $d\leq n-1$, then the form $(\mathscr{P},\phi)$ splits off a hyperbolic plane.
\end{theorem}
The first obstruction that appears when we move toward the unstable range is described in the following. This is a quadratic-form version of Morel's theory of the Euler class, cf.~\cite{MField}. The following main result is proved in Propositions~\ref{prop:stiefel1} and \ref{prop:stiefel2} as well as Theorems~\ref{thm:euler1} and \ref{thm:euler2}.
\begin{theorem}
Let $F$ be a perfect field of characteristic unequal to $2$, let $X=\operatorname{Spec}A$ be a smooth affine scheme of dimension $d$ over $F$.
\begin{enumerate}
\item The first non-vanishing $\mathbb{A}^1$-homotopy sheaf of $\operatorname{V}_{2,2d+1}$ is
\[
\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d+1})\cong \mathbf{K}^{\operatorname{MW}}_d/(1+\langle(-1)^d\rangle).
\]
Consequently, a generically trivial quadratic form $(\mathscr{P},\phi)$ over $A$ of rank $2d+1$ which admits a spin lift splits off a hyperbolic plane if and only if an obstruction class in the following cohomology groups vanishes:
\[
\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_d/(1+\langle(-1)^d\rangle))\cong \left\{
\begin{array}{ll} \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_d/2) & d\equiv 0\bmod 2\\
\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d) & d\equiv 1\bmod 2.\end{array}\right.
\]
\item The first non-vanishing $\mathbb{A}^1$-homotopy sheaf of $\operatorname{V}_{2,2d}$ is
\[
\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d})\cong \mathbf{K}^{\operatorname{MW}}_d\times \mathbf{K}^{\operatorname{MW}}_{d-1}.
\]
Consequently, a generically trivial form $(\mathscr{P},\phi)$ over $A$ of rank $2d$ which admits a spin lift splits off a hyperbolic plane if and only if an obstruction class in the following product of cohomology groups vanishes:
\[
\widetilde{\operatorname{CH}}^d(X)\times \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_{d-1})
\]
where the second factor has a presentation
\[
\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_{d-1})\cong \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{W})\cong \operatorname{coker}\left(\beta\colon\operatorname{CH}^{d-1}(X) \to \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d)\right).
\]
\end{enumerate}
\end{theorem}
\begin{remark}
Note the two alternating patterns: the two items in the above theorem correspond to the parity in homotopy groups of complex Stiefel varieties. The two subcases in the first item repeat the parity in homotopy groups of the real Stiefel varieties. Note also that the parity in the first item of the above theorem arises since $\langle(-1)^d\rangle$ is the motivic degree of the antipodal map on the quadric $\operatorname{Q}_{2d-1}$.
\end{remark}
The computation proceeds along the classical lines, cf. \cite{steenrod}, by writing out an explicit trivialization of the torsor $\operatorname{SO}(n)\to\operatorname{SO}(n+1)\to\operatorname{Q}_{n}$ to obtain an explicit description of the connecting map in the long exact homotopy sequence. The homotopy sheaves of the Stiefel varieties are then given as cokernel of the connecting map whose degree can be computed explicitly. Actually, the computation provides similar information on some higher homotopy sheaves of orthogonal Stiefel varieties: for $\operatorname{V}_{2,2n+1}$, Morel's Freudenthal suspension theorem can be used to show that the homotopy sheaves of orthogonal Stiefel varieties are (in some stable range) built from kernel and cokernel of multiplication by $2$ or $h$ on the corresponding homotopy sheaves of spheres, depending on parity, cf. Theorem~\ref{thm:stiefelhigh}. In the case $\operatorname{V}_{2,2n}$, the splitting as product of homotopy sheaves of spheres is true in all degrees. In particular, some higher obstruction groups can be identified as soon as more information on $\mathbb{A}^1$-homotopy sheaves of spheres becomes available.
From the above computations of the relevant obstruction groups, we can deduce some stronger splitting results such as the following, cf. Propositions~\ref{prop:special1} and \ref{prop:special2} as well as further discussion and examples in Section~\ref{sec:examples1}:
\begin{corollary}
Let $F$ be an algebraically closed field of characteristic unequal to $2$ and let $X=\operatorname{Spec} A$ be a smooth affine variety of dimension $d$ over $F$.
\begin{enumerate}
\item A generically trivial quadratic form $(\mathscr{P},\phi)$ over $A$ of rank $2d+1$ which admits a spin lift splits off a hyperbolic plane.
\item A generically trivial form $(\mathscr{P},\phi)$ over $A$ of rank $2d$ which admits a spin lift splits off a hyperbolic plane if and only if its Edidin--Graham Euler class is trivial.
\end{enumerate}
\end{corollary}
Beyond the Edidin--Graham Euler class statement above, the obstruction classes haven't been identified exactly. Of course, more precise results and examples could be provided if one could identify the obstruction classes explicitly in terms of characteristic classes of the quadratic forms. However, at this point, the Chow--Witt characteristic classes of the orthogonal groups are not known. As the computation of $\widetilde{\operatorname{CH}}^\bullet({\operatorname{B}}_{\operatorname{Nis}}G,\mathscr{L})$ for $G=\operatorname{O}(n)$ resp. $G=\operatorname{SO}(n)$ appears to be significantly more complicated than for $G=\operatorname{Sp}_{2n},\operatorname{GL}_n,\operatorname{SL}_n$, the relevant computations will be subject for future research.
As a final remark, I want to mention that it is also possible to ask if a stably hyperbolic form is the hyperbolic form of a vector bundle. Again, this question reduces to Witt cancellation over fields, but over schemes, stably hyperbolic forms aren't necessarily hyperbolic. In terms of bundle classification, this is the question of reduction of structure group along the hyperbolic homomorphism $H\colon\operatorname{GL}_n\to \operatorname{SO}(2n)$. The relevant obstruction theory is controlled by the space $\Gamma_n=\operatorname{SO}(2n)/\operatorname{GL}_n$. It is easy to show that the stabilization morphism $\Gamma_n\to\Gamma_{n+1}$ induces isomorphisms on $\bm{\pi}^{\mathbb{A}^1}_i$ for $i\leq n-2$, and the homotopy sheaves in this range are given by $\bm{\pi}^{\mathbb{A}^1}_i(\Gamma_n)=\bm{\pi}^{\mathbb{A}^1}_i(\operatorname{O}/\operatorname{GL})\cong \mathbf{GW}^3_i$. This shows that under the conditions of Theorem~\ref{thm:roy}, a stably hyperbolic form is already the hyperbolic form associated to some vector bundle (which is also covered by the results of Roy \cite{roy}). Computations of the first unstable homotopy sheaf could probably be done along the lines of the computations of Harris \cite{harris} and Massey \cite{massey}; note that the topological analogue of the hyperbolicity question above is the classification of almost complex structures on real vector bundles of even rank.
\subsection{Structure of the paper}
We start with some preliminaries on quadratic forms and orthogonal groups in Section~\ref{sec:prelims}. The computation of the $\mathbb{A}^1$-homotopy of the Stiefel varieties requires the discussion of an orthogonal clutching construction, explicit trivializations of $\operatorname{SO}(2n+1)$-torsors and computations of degrees of maps between motivic spheres in Section~\ref{sec:clutching}. This leads to the description of some $\mathbb{A}^1$-homotopy sheaves for orthogonal Stiefel varieties in Section~\ref{sec:stiefel}; consequences for splitting theorems and relevant examples are then given in Section~\ref{sec:splitting}. Appendix~\ref{sec:stabilization} also discusses some structural statements concerning the first unstable $\mathbb{A}^1$-homotopy sheaves of the orthogonal groups $\operatorname{SO}(2n-1)$.
\section{Preliminaries on quadratic forms and orthogonal groups}
\label{sec:prelims}
In this paper, $F$ always denotes a perfect field of characteristic $\neq 2$. We consider smooth affine schemes $X=\operatorname{Spec} A$ over $F$ and are interested in classification results for quadratic forms over the ring $A$. In the present section, we provide a short recollection on quadratic forms, orthogonal groups and their associated quadrics and Stiefel varieties. We also shortly discuss the $\mathbb{A}^1$-representability results for orthogonal groups and the resulting $\mathbb{A}^1$-obstruction theory approach to splitting questions.
\subsection{Quadratic forms as torsors}
Most of the material concerning quadratic forms can be found in standard textbooks on the subject, such as \cite{knus} (or \cite[Sections 3,4]{roy} which also deals with splitting questions similar to the ones in the present paper). A similar recollection was already used in \cite{hyperbolic-dim3}.
\begin{definition}
Let $F$ be a field of characteristic unequal to $2$.
\begin{itemize}
\item A \emph{quadratic form} over a commutative $F$-algebra $A$ is given by a finitely generated projective $A$-module $\mathscr{P}$ together with a map $\phi\colon\mathscr{P}\to A$ such that for each $a\in A$ and $x\in\mathscr{P}$, we have $\phi(ax)=a^2\phi(x)$ and $B_\phi(x,y)=\phi(x+y)-\phi(x)-\phi(y)$ is a symmetric bilinear form $B_\phi\in \operatorname{Sym}^2(\mathscr{P}^\vee)$.
\item
The \emph{rank} of the quadratic form is defined to be rank of the projective module $\mathscr{P}$.
\item A quadratic form $(\mathscr{P},\phi)$ is \emph{non-singular} or \emph{non-degenerate} if the morphism $\mathscr{P}\to \mathscr{P}^\vee\colon x\mapsto B_\phi(x,-)$ is an isomorphism.
\item An element $x\in\mathscr{P}$ is called \emph{isotropic} if $\phi(x)=0$.
\item A morphism $f\colon (\mathscr{P}_1,\phi_1)\to (\mathscr{P}_2,\phi_2)$ of quadratic forms is an $A$-linear map $f\colon \mathscr{P}_1\to\mathscr{P}_2$ such that $\phi_2(f(x))=\phi_1(x)$ for all $x\in\mathscr{P}_1$. An isomorphism of quadratic forms is also called an \emph{isometry}. The automorphism group of a quadratic form is called the \emph{orthogonal group} of the quadratic form.
\item Given two quadratic forms $(\mathscr{P}_1,\phi_1)$ and $(\mathscr{P}_2,\phi_2)$, there is a quadratic form
\[
(\mathscr{P}_1,\phi_1)\perp(\mathscr{P}_2,\phi_2):=(\mathscr{P}_1\oplus\mathscr{P}_2, \phi_1+\phi_2)
\]
which is called the \emph{orthogonal sum}.
\end{itemize}
\end{definition}
\begin{example}
Let $A$ be a commutative ring and let $\mathscr{P}$ be a finitely generated projective module. Then there is a quadratic form whose underlying module is $\mathscr{P}\oplus \mathscr{P}^\vee$, equipped with the evaluation form $\operatorname{ev}\colon(x,f)\mapsto f(x)$. The quadratic form $(\mathscr{P}\oplus\mathscr{P}^\vee,\operatorname{ev})$ is called the \emph{hyperbolic space} associated to the projective module $\mathscr{P}$. In the special case where $\mathscr{P}=A$ is the free module of rank 1, this is called the \emph{hyperbolic plane} $\mathbb{H}$ over $A$.
\end{example}
The following are the standard hyperbolicity notions from quadratic form theory, cf.~\cite[Section VIII.2]{knus}.
\begin{definition}
A quadratic form is called \emph{hyperbolic}, if it is isometric to $\mathbb{H}(\mathscr{P})$ for some projective module $\mathscr{P}$. A quadratic form $(\mathscr{P},\phi)$ is called \emph{stably hyperbolic} if there exists a projective module $\mathscr{Q}$ such that $(\mathscr{P},\phi)\perp\mathbb{H}(\mathscr{Q})$ is hyperbolic. A quadratic form $(\mathscr{P},\phi)$ over an integral domain $A$ is called \emph{rationally hyperbolic} if $(\mathscr{P},\phi)\otimes_A\operatorname{Frac}(A)$ is hyperbolic.
\end{definition}
\begin{remark}
Note that stably hyperbolic forms are then necessarily of even rank, and stably hyperbolic forms are those that become $0$ in the Witt ring.
\end{remark}
For the purposes of the present paper, we will also be interested in stricter notions of stable triviality of quadratic forms:
\begin{definition}
A quadratic form $(\mathscr{P},\phi)$ is called \emph{stably trivial} if it becomes isometric to one of the split forms $\mathbb{H}^{\perp n}$ or $\mathbb{H}^{\perp n}\perp (A,a\mapsto a^2)$ after adding sufficiently many hyperbolic planes.
\end{definition}
The stably trivial forms are those which represent classes in $\mathbb{Z}\cdot [\mathbb{H}]\oplus\mathbb{Z}\cdot [(A,a\mapsto a^2)]\subseteq \operatorname{GW}(A)$ in the Grothendieck--Witt ring of $A$.
The classical Witt cancellation theorem implies in particular that the notions of hyperbolic, stably trivial and stably hyperbolic all agree over fields:
\begin{proposition}[Witt cancellation theorem]
Let $k$ be a field of characteristic $\neq 2$ and let $(V_1,\phi_1)$ and $(V_2,\phi_2)$ be two quadratic forms over $k$. If $(V_1,\phi_1)\oplus\mathbb{H}\cong (V_2,\phi_2)\oplus\mathbb{H}$ then $(V_1,\phi_1)\cong(V_2,\phi_2)$. In particular, a stably hyperbolic form is hyperbolic.
\end{proposition}
\begin{remark}
\label{rem:abuse2}
All quadratic forms over fields considered in this paper are hyperbolic or of the form $\mathbb{H}^{\perp n}\oplus(A,a\mapsto a^2)$. In abuse of notation, the group $\operatorname{SO}(n)$ will always denote the special orthogonal group associated to the \emph{split form} of rank $n$. I apologize to anyone who might be offended by this.
\end{remark}
Now we recall the representability theorem for torsors from \cite{gbundles2}. The identification of quadratic forms with torsors for the orthogonal groups is classical, cf. \cite[Section 5]{RojasVistoli} or \cite[Section 2]{hyperbolic-dim3}.
\begin{theorem}
\label{thm:representability}
Let $k$ be a field, and let $X=\operatorname{Spec} A$ be a smooth affine $k$-scheme. Let $G$ be a reductive group such that each absolutely almost simple component of $G$ is isotropic. Then there is a bijection
\[
\operatorname{H}^1_{\operatorname{Nis}}(X;G)\cong [X, {\operatorname{B}}_{\operatorname{Nis}}G]_{\mathbb{A}^1}
\]
between the pointed set of isomorphism classes of rationally trivial $G$-torsors over $X$ and the pointed set of $\mathbb{A}^1$-homotopy classes of maps $X\to {\operatorname{B}}_{\operatorname{Nis}}G$.
\end{theorem}
As a consequence, for the split group $\operatorname{SO}(n)$ and a smooth affine scheme $X=\operatorname{Spec} A$, we get a natural bijection between the pointed set of generically split quadratic forms over the ring $A$ and the pointed set of homotopy classes of maps $[X,\operatorname{B}_{\operatorname{Nis}}\operatorname{SO}(n)]_{\mathbb{A}^1}$.\footnote{Note that we are talking about \emph{unpointed} maps here.} There is a similar statement for the spin groups which are $\mathbb{A}^1$-connected. In this case, the corresponding classifying spaces are $\mathbb{A}^1$-simply connected and consequently there is a canonical bijection between pointed maps $X_+\to\operatorname{B}_{\operatorname{Nis}}\operatorname{Spin}(n)$ and unpointed maps $X\to\operatorname{B}_{\operatorname{Nis}}\operatorname{Spin}(n)$. This is not true for the special orthogonal groups, where the sheaf of connected components is the Nisnevich sheaf $\mathcal{H}^1_{\mathrm{\acute{e}t}}(\mu_2)$; the unpointed classification is given by taking a quotient of the pointed result by the action of the fundamental group sheaf, cf. \cite{hyperbolic-dim3} for more details.
\subsection{Split orthogonal groups}
\label{sec:orthogonal}
For some of the computations, we will need an explicit description of the split orthogonal groups.
For the orthogonal group $\operatorname{SO}(2n)$ on an even-dimensional vector space, the set of variables is given by $(X_1,\dots,X_n,X_{-n},\dots,X_{-1})$, and the symmetric bilinear form of rank $2n$ is given by $B(u,v)=u^{\operatorname{t}}Jv$ where $J$ is the matrix with $J_{\alpha,\beta}=\delta_{\alpha,-\beta}$. The associated quadratic form is $\sum_{i=1}^nX_iX_{-i}$. The orthogonal group $\operatorname{O}(2n)$ is the subgroup of $\operatorname{GL}_{2n}$ consisting of the matrices $A$ satisfying $AJA^{\operatorname{t}}=J$. The special orthogonal group $\operatorname{SO}(2n)$ is given as the intersection $\operatorname{SO}(2n)=\operatorname{SL}_{2n}\cap\operatorname{O}(2n)$ inside $\operatorname{GL}_{2n}$.
For the orthogonal group $\operatorname{SO}(2n+1)$, we use the following concrete realization of the split symmetric bilinear form of rank $2n+1$, cf. \cite[Section 1]{vavilov}. The set of variables is given by $(X_1,\dots,X_n,X_0,X_{-n},\dots,X_{-1})$. Consider the matrix
\[
J_{\alpha,\beta}=\left\{\begin{array}{ll}
\frac{1}{2} & \alpha=-\beta\neq 0\\
1 & \alpha=\beta=0\\
0 & \textrm{ otherwise}
\end{array}\right.
\]
The orthogonal group $\operatorname{O}(2n+1)$ is the subgroup of $\operatorname{GL}_{2n+1}$ consisting of the matrices $A$ satisfying $AJA^{\operatorname{t}}=J$. The special orthogonal subgroup $\operatorname{SO}(2n+1)$ is given as the intersection $\operatorname{SO}(2n+1)=\operatorname{SL}_{2n+1}\cap \operatorname{O}(2n+1)$ inside $\operatorname{GL}_{2n+1}$.
The orthogonal group $\operatorname{O}(2n+1)$ preserves the bilinear form $B(u,v)=u^{\operatorname{t}}Jv$, resp. the quadratic form $X_0^2+\sum_{i=1}^n X_iX_{-i}$. The inner product of the standard basis vectors $(\operatorname{e}_1,\dots,\operatorname{e}_n,\operatorname{e}_0,\operatorname{e}_{-n},\dots,\operatorname{e}_{-1})$ is given by
\[
(\operatorname{e}_i,\operatorname{e}_j)=\delta_{i,-j}+\delta_{i,0}\delta_{j,0}.
\]
An orthogonal basis (with vectors of length $1$ or $-1$) is given by
\[
v_{\pm i} =(\operatorname{e}_i\pm \operatorname{e}_{-i}), \qquad v_0=\operatorname{e}_0.
\]
A vector $u=(t_1,\dots,t_n,t_0,t_{-n},\dots,t_{-1})$ can be expressed in the orthogonal basis as follows:
\[
u=t_0v_0+\sum_{i=1}^n\left(\left(\frac{t_i+t_{-i}}{2}\right)v_i+ \left(\frac{t_i-t_{-i}}{2}\right)v_{-i}\right).
\]
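As a quick check that this expression indeed recovers $u$, note that for each $i$
\[
\frac{t_i+t_{-i}}{2}(\operatorname{e}_i+\operatorname{e}_{-i})+\frac{t_i-t_{-i}}{2}(\operatorname{e}_i-\operatorname{e}_{-i}) = t_i\operatorname{e}_i+t_{-i}\operatorname{e}_{-i}\, .
\]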
In the above orthogonal groups, we have the natural (Householder) reflections: if $w$ is a vector which is not isotropic, then the reflection at the hyperplane perpendicular to $w$ is given by
\[
\tau_w(x)=x-\frac{2 B(w,x)}{B(w,w)}w.
\]
If $u,v$ are vectors in $V$ with $B(u,u)=B(v,v)$ and such that $w:=u-v$ is not isotropic, the above reflection for $w$ maps $u$ to $v$.
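For completeness, here is the short verification; the hypothesis $B(u,u)=B(v,v)$ gives $B(w,w)=2B(w,u)$, so that
\[
\tau_w(u)=u-\frac{2B(w,u)}{B(w,w)}\,w
=u-\frac{2B(u,u)-2B(u,v)}{B(u,u)-2B(u,v)+B(v,v)}\,w
=u-w=v\, .
\]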
\subsection{Quadrics and Stiefel varieties}
\label{sec:quadrics}
We recall some facts about smooth affine split quadrics and Stiefel varieties. First, the odd-dimensional smooth affine split quadrics are defined as follows:
\[
\operatorname{Q}_{2n-1}:=\operatorname{Spec}k[X_1,\dots,X_n,Y_1,\dots,Y_n]/(\sum X_iY_i-1)
\]
For the even-dimensional quadrics, there are two possible presentations over fields of characteristic $\neq 2$. One definition, the one used e.g. in \cite{AsokDoranFasel} provides a scheme definable over the integers:
\[
\operatorname{Q}_{2n}':=\operatorname{Spec} k[X'_1,\dots,X'_n,Y'_1,\dots,Y'_n,Z']/(\sum X'_iY'_i-Z'(1+Z'))
\]
However, it seems that this quadric is not globally a homogeneous space for the split orthogonal groups over $\mathbb{Z}$. Since the later computations and constructions require the relation to the orthogonal groups, we will use the following presentation of the smooth split affine quadrics, defined by the quadratic form $q$ used to define the explicit model of $\operatorname{SO}(2n+1)$ discussed above:
\[
\operatorname{Q}_{2n}:=\operatorname{Spec} k[X_1,\dots,X_n,Y_1,\dots,Y_n,Z]/(\sum X_iY_i+Z^2-1).
\]
An isomorphism between the quadrics $\operatorname{Q}_{2n}$ and $\operatorname{Q}_{2n}'$ is given as follows, cf. \cite{AsokDoranFasel}\footnote{Note that the explicit isomorphism is only found in some earlier arXiv version, not the published version of the paper.}:
\[
\operatorname{Q}_{2n}'\to \operatorname{Q}_{2n}\colon X'_i\mapsto -X_i/2, \, Y'_i\mapsto Y_i/2, \, Z' \mapsto (Z-1)/2.
\]
This morphism takes the form $\sum_iX'_iY'_i-Z'(1+Z')$ to
\[
\frac{1}{4}\left(-\sum_iX_iY_i-(Z-1)(Z+1)\right)= \frac{1}{4}\left(-\sum_iX_iY_i-Z^2+1\right).
\]
Under this isomorphism, the principal open subsets $\operatorname{D}(Z')$ and $\operatorname{D}(1+Z')$ appearing in the clutching construction of \cite{AsokDoranFasel} are mapped to the principal open subsets $\operatorname{D}(Z-1)$ and $\operatorname{D}(Z+1)$ in the quadric $\operatorname{Q}_{2n}$.
The natural choice of base point $v$ for $\operatorname{Q}_{2n-1}$ is given by $X_1=Y_1=1$ and $X_i=Y_i=0$ for $i\geq 2$. Similarly, the natural choice of base point $v$ for $\operatorname{Q}_{2n}$ is given by $Z=1$ and $X_i=Y_i=0$ for all $i$. With these choices of base point, we have natural projections
\[
\pi\colon \operatorname{SO}(m)\to\operatorname{Q}_{m-1}\colon A\mapsto A\cdot v
\]
which induce isomorphisms $\operatorname{Q}_{m-1}\cong \operatorname{SO}(m)/\operatorname{SO}(m-1)$.
Consider the stabilization homomorphism $\operatorname{SO}(n-2)\hookrightarrow\operatorname{SO}(n)$. The corresponding quotient $\operatorname{V}_{2,n}:=\operatorname{SO}(n)/\operatorname{SO}(n-2)$ is one example of a \emph{Stiefel variety}. Since homogeneous spaces $\operatorname{GL}_n/\operatorname{GL}_{n-k}$ have already been called Stiefel varieties in the motivic homotopy literature, we will use the distinction \emph{orthogonal Stiefel varieties}. There is a fiber bundle
\[
\operatorname{SO}(n-1)/\operatorname{SO}(n-2)\to \operatorname{SO}(n)/\operatorname{SO}(n-2)\to \operatorname{SO}(n)/\operatorname{SO}(n-1).
\]
Via the above identifications of $\operatorname{SO}(n)/\operatorname{SO}(n-1)$ as smooth affine quadrics, this makes it possible to write the orthogonal Stiefel varieties as sphere bundles over a sphere:
\[
\operatorname{Q}_{n-2}\to \operatorname{V}_{2,n}\to \operatorname{Q}_{n-1}.
\]
Via the representability results of \cite{gbundles2}, the above geometric statements provide $\mathbb{A}^1$-fiber sequences which we will use to describe some $\mathbb{A}^1$-homotopy sheaves of orthogonal Stiefel varieties.
First of all, there is an $\mathbb{A}^1$-fiber sequence related to stabilization and splitting off hyperbolic planes. If $X\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)$ classifies a generically trivial quadratic form, the form splits off a hyperbolic plane if and only if the classifying map lifts through ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n-2)\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)$. By the below fiber sequence, the obstructions to splitting off hyperbolic planes will then live in the cohomology with coefficients in $\bm{\pi}^{\mathbb{A}^1}_i(\operatorname{V}_{2,n})$.
\begin{proposition}
\label{prop:stabil}
Let $F$ be a perfect field of characteristic $\neq 2$. Under the correspondence of Theorem~\ref{thm:representability}, the morphism
\[
{\operatorname{B}_{\operatorname{Nis}}}\operatorname{SO}(n-2)\to{\operatorname{B}_{\operatorname{Nis}}}\operatorname{SO}(n)
\]
induced from the standard embedding $\operatorname{SO}(n-2)\to\operatorname{SO}(n)$ corresponds to adding a hyperbolic plane. There is an $\mathbb{A}^1$-homotopy fiber sequence
\[
\operatorname{V}_{2,n}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n-2)\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n).
\]
\end{proposition}
\begin{proof}
The $\operatorname{SO}(n-2)$-torsor $\operatorname{SO}(n)\to \operatorname{V}_{2,n}$ is Zariski-locally trivial: it is the universal $\operatorname{SO}(n-2)$-torsor which becomes trivial after adding a hyperbolic plane. By Witt cancellation, it must already be rationally trivial. The claim then follows directly from the results of \cite[Section 2.4]{gbundles2}, with an argument similar to \cite[Theorem 4.2.2]{gbundles2}.
\end{proof}
\begin{remark}
Note that there are similar $\mathbb{A}^1$-fiber sequences $\operatorname{Q}_n\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n) \to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n+1)$, cf. \cite[Theorems 4.2.1 and 4.2.2]{gbundles2}.
\end{remark}
The easiest way to understand the $\mathbb{A}^1$-homotopy sheaves of the Stiefel varieties $\operatorname{V}_{2,n}$ is the following fiber sequence:
\begin{lemma}
\label{lem:quadfib}
Let $F$ be a perfect field of characteristic $\neq 2$. There is an $\mathbb{A}^1$-fiber sequence $\operatorname{Q}_{n-2}\to \operatorname{V}_{2,n}\to \operatorname{Q}_{n-1}$, and consequently there is a long exact sequence of $\mathbb{A}^1$-homotopy sheaves
\[
\cdots\to \bm{\pi}^{\mathbb{A}^1}_{i+1}(\operatorname{Q}_{n-1})\to \bm{\pi}^{\mathbb{A}^1}_{i}(\operatorname{Q}_{n-2})\to \bm{\pi}^{\mathbb{A}^1}_{i}(\operatorname{V}_{2,n})\to \bm{\pi}^{\mathbb{A}^1}_{i}(\operatorname{Q}_{n-1})\to\cdots
\]
Here the base points are induced from the standard base point on $\operatorname{Q}_{n-2}$.
\end{lemma}
\begin{proof}
By definition, $\operatorname{V}_{2,n}\cong\operatorname{SO}(n)/\operatorname{SO}(n-2)$, and the intermediate group $\operatorname{SO}(n-1)$ gives rise to the maps $\operatorname{Q}_{n-2}\cong\operatorname{SO}(n-1)/\operatorname{SO}(n-2)\hookrightarrow \operatorname{V}_{2,n}$ and $\operatorname{V}_{2,n}\twoheadrightarrow \operatorname{SO}(n)/\operatorname{SO}(n-1)\cong\operatorname{Q}_{n-1}$. Zariski-locally, the $\operatorname{Q}_{n-2}$-bundle $\operatorname{V}_{2,n}\to\operatorname{Q}_{n-1}$ is trivial; and it is the associated bundle for the $\operatorname{SO}(n-1)$-torsor $\operatorname{SO}(n)\to \operatorname{Q}_{n-1}$ and the natural transitive $\operatorname{SO}(n-1)$-action on $\operatorname{Q}_{n-2}$, by definition. From this, using the results on associated bundles from \cite{torsors}, the claimed fiber sequence is obtained via pullback from the fiber sequence for $\operatorname{Q}_{n-2}$ as follows:
\[
\xymatrix{
\operatorname{Q}_{n-2}\ar[r] \ar[d] & \operatorname{Q}_{n-2} \ar[d] \\
\operatorname{V}_{2,n} \ar[r] \ar[d] & {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n-2) \ar[d] \\
\operatorname{Q}_{n-1} \ar[r] & {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n-1)
}
\]
The claim about $\mathbb{A}^1$-homotopy sheaves follows directly.
\end{proof}
\subsection{Obstruction theory}
We recall some basics of relative obstruction theory and Moore--Postnikov factorizations which are used for the obstruction-theoretic approach to torsor classification in $\mathbb{A}^1$-homotopy. Most of the material here is taken from \cite{AsokFaselSplitting}.
Suppose that $p\colon(\mathcal{E},x)\to (\mathcal{B},y)$ is a pointed map of $\mathbb{A}^1$-connected spaces and let $\mathcal{F}$ be the $\mathbb{A}^1$-homotopy fiber of $p$. Assume that $p$ is an $\mathbb{A}^1$-fibration, $\mathcal{B}$ is $\mathbb{A}^1$-local and that $\mathcal{F}$ is $\mathbb{A}^1$-simply connected. Then there are pointed spaces $(\mathcal{E}^{(i)},x_i)$, $i\in\mathbb{N}$, with $\mathcal{E}^{(0)}=\mathcal{B}$, and pointed morphisms
\[
g^{(i)}\colon \mathcal{E}\to\mathcal{E}^{(i)}, \qquad
h^{(i)}\colon \mathcal{E}^{(i)}\to \mathcal{B}, \qquad
p^{(i)}\colon\mathcal{E}^{(i+1)}\to \mathcal{E}^{(i)}
\]
and commutative diagrams
\[
\xymatrix{
& \mathcal{E}^{(i+1)} \ar[rd]^{h^{(i+1)}} \ar[d]_{p^{(i)}} \\
\mathcal{E} \ar[ru]^{g^{(i+1)}} \ar[r]_{g^{(i)}} &\mathcal{E}^{(i)} \ar[r]_{h^{(i)}} & \mathcal{B}
}
\]
such that the following statements are satisfied:
\begin{enumerate}
\item For any $i$, we have $h^{(i)}\circ g^{(i)}=p$.
\item The morphism $\bm{\pi}^{\mathbb{A}^1}_n(\mathcal{E}) \to \bm{\pi}^{\mathbb{A}^1}_n(\mathcal{E}^{(i)})$ induced by $g^{(i)}$ is an isomorphism for $n\leq i$ and an epimorphism for $n=i+1$.
\item The morphism $\bm{\pi}^{\mathbb{A}^1}_n(\mathcal{E}^{(i)})\to \bm{\pi}^{\mathbb{A}^1}_n(\mathcal{B})$ induced by $h^{(i)}$ is an isomorphism for $n>i+1$ and a monomorphism for $n=i+1$.
\item The induced morphism $\mathcal{E}\to \operatorname{holim}_i\mathcal{E}^{(i)}$ is an $\mathbb{A}^1$-weak equivalence.
\end{enumerate}
For any $i$, there is an $\mathbb{A}^1$-fiber sequence
\[
\operatorname{K}(\bm{\pi}_i^{\mathbb{A}^1}(\mathcal{F}),i)\to \mathcal{E}^{(i+1)}\xrightarrow{p^{(i)}} \mathcal{E}^{(i)}
\]
Moreover, the $p^{(i)}$ are twisted principal $\mathbb{A}^1$-fibrations, in the sense that there is a unique morphism (up to $\mathbb{A}^1$-homotopy)
\[
k_{i+1}\colon\mathcal{E}^{(i)}\to \operatorname{K}^{\bm{\pi}_1^{\mathbb{A}^1}(\mathcal{B})}(\bm{\pi}_i^{\mathbb{A}^1}(\mathcal{F}),i+1),
\]
called the $k$-invariant, such that there is an $\mathbb{A}^1$-homotopy pullback square
\[
\xymatrix{
\mathcal{E}^{(i+1)} \ar[r] \ar[d] & \operatorname{B}\bm{\pi}^{\mathbb{A}^1}_1(\mathcal{B}) \ar[d] \\
\mathcal{E}^{(i)} \ar[r]_{k_{i+1}} & \operatorname{K}^{\bm{\pi}^{\mathbb{A}^1}_1(\mathcal{B})}(\bm{\pi}^{\mathbb{A}^1}_i(\mathcal{F}),i+1).
}
\]
If the base space $\mathcal{B}$ is $\mathbb{A}^1$-simply-connected, this reduces to an $\mathbb{A}^1$-fiber sequence
\[
\mathcal{E}^{(i+1)}\to \mathcal{E}^{(i)} \xrightarrow{k_{i+1}} \operatorname{K}(\bm{\pi}^{\mathbb{A}^1}_i(\mathcal{F}),i+1).
\]
For a smooth scheme $X$ with a morphism $f\colon X\to \mathcal{B}$, the above Moore--Postnikov factorization provides a sequence of obstructions to lifting $f$ along $p\colon\mathcal{E}\to\mathcal{B}$: if we have already lifted to $\mathcal{E}^{(i)}$, then a lift to $\mathcal{E}^{(i+1)}$ exists if and only if the composition
\[
X\to \mathcal{E}^{(i)}\xrightarrow{k_{i+1}} \operatorname{K}^{\bm{\pi}^{\mathbb{A}^1}_1(\mathcal{B})}(\bm{\pi}^{\mathbb{A}^1}_i(\mathcal{F}), i+1)
\]
lifts to $X\to\operatorname{B}\bm{\pi}^{\mathbb{A}^1}_1(\mathcal{B})$. If $\mathcal{B}$ is $\mathbb{A}^1$-simply-connected, then the obstruction is simply a cohomology class in $\operatorname{H}^{i+1}_{\operatorname{Nis}}(X,\bm{\pi}^{\mathbb{A}^1}_i(\mathcal{F}))$, and the set of lifts $X\to\mathcal{E}^{(i+1)}$ is then given as a quotient of $\operatorname{H}^i_{\operatorname{Nis}}(X,\bm{\pi}^{\mathbb{A}^1}_i(\mathcal{F}))$ by the image of the looping of the $k$-invariant. In the more general case, the obstructions and liftings are elements of some $\bm{\pi}^{\mathbb{A}^1}_1(\mathcal{B})$-equivariant cohomology groups, cf. \cite[Section 6]{AsokFaselSplitting} for more details. Note for smooth schemes there are only finitely many obstructions since all obstructions above the dimension of the scheme vanish, for Nisnevich cohomological dimension reasons.
We shortly discuss the specific case of quadratic forms. First, we note that there is a sequence of $\mathbb{A}^1$-fiber sequences
\[
\xymatrix{
\operatorname{V}_{2,n+2}\ar[r] \ar[d]_= & {\operatorname{B}}_{\operatorname{Nis}}\operatorname{Spin}(n) \ar[r] \ar[d] & {\operatorname{B}}_{\operatorname{Nis}}\operatorname{Spin}(n+2) \ar[d]\\
\operatorname{V}_{2,n+2} \ar[r] & {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n) \ar[r] & {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n+2)
}
\]
Since the classifying spaces of the spin groups are $\mathbb{A}^1$-simply-connected, the obstruction groups for splitting off a trivial $\operatorname{Spin}(2)$-torsor from a $\operatorname{Spin}(n+2)$-torsor on a smooth scheme $X$ are given by $\operatorname{H}^{i+1}_{\operatorname{Nis}}(X,\bm{\pi}^{\mathbb{A}^1}_i(\operatorname{V}_{2,n+2}))$.
On the other hand, the classifying spaces of the orthogonal groups have a non-trivial $\mathbb{A}^1$-fundamental group, given by $\mathscr{H}^1_{\mathrm{\acute{e}t}}(\mu_2)$. We need to discuss the action of $\mathscr{H}^1_{\mathrm{\acute{e}t}}(\mu_2)$ on the $\mathbb{A}^1$-homotopy sheaves of the classifying spaces ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)$. Note that over a field $K$, $\operatorname{H}^1_{\mathrm{\acute{e}t}}(K,\mu_2)=K^\times/(K^\times)^2$. To understand the action of the square residues on the sections of $\bm{\pi}_i^{\mathbb{A}^1}{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)$ one can follow the arguments in \cite[Section 6.2]{AsokFaselSplitting}. This allows to identify the action of the square residues as the usual action of square classes on strictly $\mathbb{A}^1$-invariant sheaves, given by restricting the Milnor--Witt module structure along the universal morphism $\mathbb{G}_{\operatorname{m}}/2\to \mathbf{K}^{\operatorname{MW}}_0$.
If $X$ is a smooth $F$-scheme and $f:X\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)$ is a map, then the obstruction to lifting $f$ through the universal covering ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{Spin}(n)\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)$ is a torsor over $X$ with structure group $\mathbb{G}_{\operatorname{m}}/2$, the spinor norm torsor, which can be concretely described as follows. Assume we have a Nisnevich covering $U_i\to X$ trivializing the $\operatorname{SO}(n)$-torsor. The transition maps are morphisms $U_i\times_X U_j\to \operatorname{SO}(n)$. We can compose these with the spinor norm $\operatorname{SO}(n)\to \mathbb{G}_{\operatorname{m}}/2$, and this provides the transition function for the $\mathbb{G}_{\operatorname{m}}/2$-torsor over $X$. If this torsor is trivial, then the maps $X\to \operatorname{K}^{\mathscr{H}^1_{\mathrm{\acute{e}t}}(\mu_2)}(\bm{\pi}^{\mathbb{A}^1}_i(\operatorname{V}_{2,n+2}),i+1)$, corresponding to $\mathscr{H}^1_{\mathrm{\acute{e}t}}(\mu_2)$-equivariant cohomology of $X$ with coefficients in $\bm{\pi}^{\mathbb{A}^1}_i(\operatorname{V}_{2,n+2})$ can be identified with ``ordinary'' cohomology groups $\operatorname{H}^{i+1}_{\operatorname{Nis}}(X,\bm{\pi}^{\mathbb{A}^1}_i(\operatorname{V}_{2,n+2}))$.
By the discussion in \cite[Section 6]{hyperbolic-dim3}, stably trivial quadratic forms always have spin lifts; in this paper, we will only focus on rationally trivial quadratic forms which have a spin lift. In particular, there is no need to deal with equivariance for the fundamental group $\mathscr{H}^1_{\mathrm{\acute{e}t}}(\mu_2)$ of the classifying spaces ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)$ for the present investigation. Put differently, to determine if a quadratic form splits off a hyperbolic plane, we can choose a spin lift and then use the above obstruction groups to determine if it splits off a trivial $\operatorname{Spin}(2)$-torsor.
\section{Orthogonal clutching and connecting map}
\label{sec:clutching}
For the later computation of the first non-vanishing $\mathbb{A}^1$-homotopy sheaf of the orthogonal Stiefel varieties, we need to identify the $\operatorname{SO}(2n)$-torsor $\operatorname{SO}(2n+1)\to\operatorname{Q}_{2n}$ in terms of a clutching construction for a morphism $\operatorname{Q}_{2n-1}\to \operatorname{SO}(2n)$. This is obtained by an explicit trivialization of the torsor $\operatorname{SO}(2n+1)\to\operatorname{Q}_{2n}$ similar to the classical results in \cite[Section 23]{steenrod}. We also compute the degree of the composition $\operatorname{Q}_{2n-1}\to\operatorname{SO}(2n)\to\operatorname{Q}_{2n-1}$ of the clutching function and the natural projection. These computations will be used in the description of the connecting map in the long exact $\mathbb{A}^1$-homotopy sequence for the Stiefel fibration $\operatorname{Q}_{2n-1}\to\operatorname{V}_{2,2n+1}\to\operatorname{Q}_{2n}$ of Lemma~\ref{lem:quadfib}.
\subsection{Recollection of the clutching construction}
We briefly review the algebraic version of the clutching construction for vector bundles which was established in \cite{AsokDoranFasel}. The relevant covering for the clutching construction for the quadric $\operatorname{Q}_{2n}'$ (described by the equation $\sum X_i'Y_i'=Z'(1+Z')$) is given by the principal open subschemes $\operatorname{D}(Z')$ and $\operatorname{D}(1+Z')$ whose intersection is $\operatorname{D}(Z'(1+Z'))$. On the intersection, we have the morphism $\operatorname{D}(Z'(1+Z'))\to \operatorname{Q}_{2n-1}$ given by
\[
(X'_1,\dots,X'_n,Y'_1,\dots,Y'_n,Z')\mapsto \left(\frac{X'_1}{Z'},\dots,\frac{X'_n}{Z'},\frac{Y'_1}{1+Z'},\dots, \frac{Y'_n}{1+Z'}\right).
\]
Via the isomorphism $\operatorname{Q}_{2n}'\to\operatorname{Q}_{2n}$, cf. Section~\ref{sec:quadrics}, the covering above corresponds to the covering of $\operatorname{Q}_{2n}$ given by the principal open subschemes $\operatorname{D}(Z-1)$ and $\operatorname{D}(Z+1)$ with intersection $\operatorname{D}(Z^2-1)$. The natural projection $\operatorname{D}(Z^2-1)\to\operatorname{Q}_{2n-1}$ is given by
\[
X_i\mapsto \frac{X_i}{1-Z}, \qquad Y_i\mapsto \frac{Y_i}{1+Z}.
\]
This induces a morphism $\operatorname{Q}_{2n}\to\Sigma_{\operatorname{s}}^1\operatorname{Q}_{2n-1}$, and the map $\operatorname{Q}_{2n}\to{\operatorname{B}}G$ classifying a $G$-torsor over $\operatorname{Q}_{2n}$ corresponds, via adjunction
\[
[\operatorname{Q}_{2n},{\operatorname{B}}G]_{\mathbb{A}^1} \cong [\Sigma_{\operatorname{s}}^1\operatorname{Q}_{2n-1},{\operatorname{B}}G]_{\mathbb{A}^1} \cong [\operatorname{Q}_{2n-1},\Omega_s{\operatorname{B}}G]_{\mathbb{A}^1} \cong [\operatorname{Q}_{2n-1},G]_{\mathbb{A}^1},
\]
to a clutching function $\operatorname{Q}_{2n-1}\to G$. Concretely, we can start with a morphism $\operatorname{Q}_{2n-1}\to G$, and take the $G$-torsor which is trivial on the principal open subschemes $\operatorname{D}(Z\pm 1)$ and whose transition function on $\operatorname{D}(Z^2-1)$ is given by the composition $\operatorname{D}(Z^2-1)\to\operatorname{Q}_{2n-1}\to G$, cf. \cite[Theorem 4.3.6]{AsokDoranFasel}.
For the other direction, we have the following:
\begin{proposition}
\label{prop:clutching}
Let $G$ be an isotropic group, and let $\mathscr{E}$ be a rationally trivial $G$-torsor classified by $\operatorname{Q}_{2n}\to {\operatorname{B}}G$. Assume that we have obtained trivializations over the open subschemes $\operatorname{D}(Z\pm 1)$ and a corresponding transition function $\operatorname{D}(Z^2-1)\to G$. The composition $\operatorname{Q}_{2n-1}\to\operatorname{D}(Z^2-1)\to G$ of the inclusion via $Z=0$ and the transition function is a clutching function for the $G$-torsor $\mathscr{E}$.
\end{proposition}
\begin{proof}
The morphism $\operatorname{Q}_{2n-1} \to \operatorname{D}(Z^2-1) \to \operatorname{Q}_{2n-1}$ which first includes the quadric $\operatorname{Q}_{2n-1}$ via $Z=0$ and then applies the standard projection is the identity. By the representability theorem of \cite{gbundles2}, the isomorphism classes of rationally trivial $G$-torsors over $\operatorname{Q}_{2n}$ are in natural bijection with the naive $\mathbb{A}^1$-homotopy classes of morphisms $\operatorname{Q}_{2n-1}\to G$; this bijection is, by \cite[Theorem 4.3.6]{AsokDoranFasel}, induced by mapping a clutching function $\operatorname{Q}_{2n-1}\to G$ to the $G$-torsor associated to the composition $\operatorname{D}(Z^2-1)\to \operatorname{Q}_{2n-1}\to G$. Composing further with the restriction of the clutching function along $\operatorname{Q}_{2n-1}\to\operatorname{D}(Z^2-1)$ induces a map
\[
[\operatorname{Q}_{2n-1},G]_{\mathbb{A}^1}\to \operatorname{H}^1_{\operatorname{Nis}}(\operatorname{Q}_{2n},G) \to [\operatorname{Q}_{2n-1},G]_{\mathbb{A}^1}.
\]
The first map is the natural bijection between clutching functions and torsors, and the composition is the identity. In particular, if we have a trivialization of a $G$-torsor over the open subschemes $\operatorname{D}(Z\pm 1)$ with the associated transition function $\operatorname{D}(Z^2-1)\to G$, then the composition $\operatorname{Q}_{2n-1}\to\operatorname{D}(Z^2-1)\to G$ is a clutching function for the given $G$-torsor.
\end{proof}
\begin{remark}
Via the natural bijections in the proof of Proposition~\ref{prop:clutching}, we also see that every rationally trivial $G$-torsor (for an isotropic group $G$) has a trivialization in the covering $\operatorname{D}(Z\pm 1)$ of $\operatorname{Q}_{2n}$, with a transition function which is pulled back along $\operatorname{D}(Z^2-1)\to \operatorname{Q}_{2n-1}$.
\end{remark}
\subsection{Trivialization and characteristic map}
To identify the characteristic map $\operatorname{Q}_{2n}\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)$, classifying the $\operatorname{SO}(2n)$-torsor $\pi\colon \operatorname{SO}(2n+1)\to\operatorname{Q}_{2n}$, we proceed as in \cite[Section 23]{steenrod}. The first step is to write down explicit trivializations of the torsor $\pi$ over the two principal open subsets $\operatorname{D}(1+Z)$ and $\operatorname{D}(1-Z)$; this provides an explicit transition map $\operatorname{D}(Z^2-1)\to\operatorname{SO}(2n)$. As a second step, the clutching function is obtained as the composition $\operatorname{Q}_{2n-1}\xrightarrow{\iota}\operatorname{D}(Z^2-1)\to \operatorname{SO}(2n)$. The characteristic map $\operatorname{Q}_{2n}\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)$ then corresponds to the clutching map $\operatorname{Q}_{2n-1}\to\operatorname{SO}(2n)$ via the clutching result \cite[Theorem 4.3.6]{AsokDoranFasel}, cf. also the recollection as above.
To obtain an explicit trivialization of the torsor $\pi$ over the principal open subset $\operatorname{D}(1+Z)$, we follow the construction in \cite[23.3]{steenrod}: in classical topology, one fixes a base point $x_n\in \operatorname{S}^n$ and sends a vector $x$ to the matrix which is the rotation from $x_n$ to $x$ on the plane spanned by $x_n$ and $x$, and is the identity on the orthogonal complement of that plane.
We want to employ the same construction for the split orthogonal groups. For a coordinate-free formula for the above rotation, we can use the composition of two reflections at hyperplanes, one perpendicular to $x$ and the other perpendicular to the bisector of the angle between $x$ and $x_n$ for which we can take $x+x_n$ since both are unit vectors.\footnote{This is discussed in amd's answer to Math.StackExchange question 1909717 ``Rotation matrix in terms of dot products''.} The relevant coordinate-free expression for the rotation is then
\[
a\mapsto a-\frac{B(x_n+x,a)}{1+B(x_n, x)}(x_n+x)+2B(x_n, a)x,
\]
which in the situation of \cite[23.3]{steenrod} reproduces exactly the matrix in loc.cit.
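As a consistency check, assuming only that $B$ is symmetric and that $x_n$ and $x$ are unit vectors in the sense $B(x_n,x_n)=B(x,x)=1$, the formula does rotate $x_n$ into $x$:
\[
x_n-\frac{B(x_n+x,x_n)}{1+B(x_n,x)}(x_n+x)+2B(x_n,x_n)x=x_n-\frac{1+B(x,x_n)}{1+B(x_n,x)}(x_n+x)+2x=x_n-(x_n+x)+2x=x.
\]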
Now we consider this formula in the odd-dimensional split setting, given by the bilinear form $B(u,v)=u^{\operatorname{t}}Jv$. Recall that we denote vectors in $V=F^{2n+1}$ by $(t_1,\dots,t_n,t_0,t_{-n},\dots,t_{-1})$, with natural basis given by $(\operatorname{e}_1,\dots,\operatorname{e}_n,\operatorname{e}_0,\operatorname{e}_{-n},\dots,\operatorname{e}_{-1})$. The corresponding orthogonal basis (with vectors of length $1$ or $-1$) is $v_{\pm i}=\operatorname{e}_i\pm \operatorname{e}_{-i}$ and $v_0=\operatorname{e}_0$. We choose $v_0$ as the relevant base point which should be rotated into a given vector $u=(t_1,\dots,t_n,t_0,t_{-n},\dots,t_{-1})$ on $\operatorname{Q}_{2n}$.
The rotation applied to $v_{\pm i}$ yields
\[
v_{\pm i}- \frac{B(u+v_0,v_{\pm i})}{1+B(u,v_0)}(u+v_0)+2B(v_0,v_{\pm i})u=v_{\pm i}-\frac{\left(t_{-i}\pm t_i\right)}{1+t_0}(u+v_0),
\]
and the rotation applied to $v_0$ yields
\[
v_0-\frac{B(u+v_0,v_0)}{1+B(u,v_0)}(u+v_0)+2B(v_0,v_0)u=u.
\]
Now we express $u=(t_1,\dots,t_n,t_0,t_{-n},\dots,t_{-1})$ in the above orthogonal basis:
\[
u=t_0v_0+\sum_{i=1}^n\left(\left(\frac{t_i+t_{-i}}{2}\right)v_i+ \left(\frac{t_i-t_{-i}}{2}\right)v_{-i}\right)
\]
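This is immediate from $v_{\pm i}=\operatorname{e}_i\pm \operatorname{e}_{-i}$:
\[
\left(\frac{t_i+t_{-i}}{2}\right)v_i+\left(\frac{t_i-t_{-i}}{2}\right)v_{-i}=\left(\frac{t_i+t_{-i}}{2}+\frac{t_i-t_{-i}}{2}\right)\operatorname{e}_i+\left(\frac{t_i+t_{-i}}{2}-\frac{t_i-t_{-i}}{2}\right)\operatorname{e}_{-i}=t_i\operatorname{e}_i+t_{-i}\operatorname{e}_{-i}.
\]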
In particular, the entries of the representing matrix for the rotation above are given as follows: the $v_{\pm j}$-coordinate of the image of $v_{\pm i}$ is given as
\[
\delta_{ij}-\frac{\left(t_{-i}\pm_i t_i\right)\left(t_j\pm_j t_{-j}\right)}{2+2t_0},
\]
the $v_0$-coordinate of the image of $v_{\pm j}$ is given by $-(t_{-j}\pm t_j)$, and the image of $v_0$ is $u$, with coordinates as above.\footnote{The representing matrix for the relevant rotation has a form very similar to the rotation matrices appearing in \cite[Section 23.3]{steenrod}, replacing the coordinates $t_i$ by $t_i\pm t_{-i}$ and the denominator $1+t_n$ by $2+2t_0$.} The fact that the matrix is orthogonal and maps $v_0$ to $u$ (provided that $1+t_0$ is invertible) follows from the construction. We denote by $\phi\colon \operatorname{D}(1+Z)\to \operatorname{SO}(2n+1)$ the corresponding map, taking $u$ to the rotation described above (mapping $v_0$ to $u$).
As a special case, for $u=v_1$, we get the matrix
\[
(\phi(v_1))_{ij}=\left\{\begin{array}{rl}
1 & i=j\not\in\{0,1\} \textrm{ or } i=0,j=1\\
-1 & i=1,j=0\\
0 &\textrm{otherwise}
\end{array}\right.
\]
and $\lambda=\phi(v_1)^2$ is the diagonal matrix whose entries are all $1$ except for $-1$ at the entries corresponding to $v_0$ and $v_1$, representing the rotation by $180^\circ$ in the plane spanned by $v_0$ and $v_1$.
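Indeed, in the ordered basis $(v_0,v_1)$ of the plane spanned by $v_0$ and $v_1$, the matrix $\phi(v_1)$ acts by the block
\[
\left(\begin{array}{rr} 0 & 1\\ -1 & 0\end{array}\right),\qquad \left(\begin{array}{rr} 0 & 1\\ -1 & 0\end{array}\right)^2=\left(\begin{array}{rr} -1 & 0\\ 0 & -1\end{array}\right),
\]
while all other basis vectors are fixed; this recovers the description of $\lambda$.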
Then we can directly follow the definition in \cite[23.3]{steenrod}: the trivializing maps are given by
\begin{eqnarray*}
\phi_1\colon \operatorname{D}(1+Z)\times \operatorname{SO}(2n)&\to& \operatorname{SO}(2n+1)\colon (u,r)\mapsto \phi(u)\cdot r\\
\phi_2\colon \operatorname{D}(1-Z)\times \operatorname{SO}(2n) & \to & \operatorname{SO}(2n+1)\colon (u,r)\mapsto \lambda\phi(\lambda(u))\cdot r
\end{eqnarray*}
The corresponding partial inverses $\operatorname{SO}(2n+1)\dashrightarrow\operatorname{D}(1\pm Z)\times\operatorname{SO}(2n)$ can be obtained as follows: the image in the sphere is simply given by the projection $\pi\colon \operatorname{SO}(2n+1)\to\operatorname{Q}_{2n}\colon A\mapsto Av_0$ (and obviously this is only defined whenever the image lands in the respective coordinate neighbourhood $\operatorname{D}(1\pm Z)$). The appropriate rotation in $\operatorname{SO}(2n)$ can be recovered via
\[
p_1(A)=\phi(\pi(A))^{-1}A \quad \textrm{ and } \quad p_2(A)=\phi(\lambda \pi(A))^{-1}\lambda A.
\]
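One checks directly that these are inverse to the trivializing maps wherever defined: using $\lambda^2=\operatorname{id}$,
\[
\phi_1(\pi(A),p_1(A))=\phi(\pi(A))\,\phi(\pi(A))^{-1}A=A \quad \textrm{ and } \quad \phi_2(\pi(A),p_2(A))=\lambda\phi(\lambda\pi(A))\,\phi(\lambda\pi(A))^{-1}\lambda A=A.
\]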
On the intersection of the coordinate neighbourhoods $\operatorname{D}(Z^2-1)$, the transition function is given as
\[
\tau\colon \operatorname{D}(Z^2-1)\to \operatorname{SO}(2n)\colon x\mapsto \tau(x)=\phi(x)^{-1}\cdot\lambda\phi(\lambda(x)).
\]
In particular, since $\phi(v_1)$ and $\lambda\phi(\lambda(v_1))$ are both the rotation from $v_0$ to $v_1$, we find that $\tau(v_1)$ is the identity, making $\tau$ a pointed morphism.
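Note also that $\tau$ takes values in the stabilizer $\operatorname{SO}(2n)$ of $v_0$: since $\phi(w)$ maps $v_0$ to $w$ (for $w$ in the relevant open subset) and $\lambda^2=\operatorname{id}$, we have
\[
\tau(x)v_0=\phi(x)^{-1}\lambda\phi(\lambda(x))v_0=\phi(x)^{-1}\lambda(\lambda(x))=\phi(x)^{-1}x=v_0.
\]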
As in \cite[§23]{steenrod}, it can be checked that the transition function maps a point $u\in \operatorname{Q}_{2n-1}=\operatorname{V}(Z)\subset \operatorname{D}(Z^2-1)$ to the composition of a reflection for the vector $v_1$ followed by a reflection for the vector $u$. Since $v_1$ is a unit vector, the reflection for $v_1$ is given by $x\mapsto x-B(v_1,x)v_1$. For the orthogonal basis $v_{\pm i}$, every vector except $v_1$ is fixed, and $v_1$ changes sign. Then the reflection for $u$ maps $v_{\pm i}\mapsto v_{\pm i}-B(u,v_{\pm i})u$ (again using that $u$ is a unit vector). If we write, as above,
\[
u=\sum_{i=1}^n\left(\left(\frac{t_i+t_{-i}}{2}\right)v_i+ \left(\frac{t_i-t_{-i}}{2}\right)v_{-i}\right)
\]
we get $v_{1}\mapsto -v_{1}+(t_1+ t_{-1})u$ and $v_{\pm i}\mapsto v_{\pm i}-(t_{-i}\pm t_i)u$ for everything else. Consequently, the matrix for the transition function, expressed in the basis $(v_1,\dots,v_n,v_{-n},\dots,v_{-1})$, is given by
\[
\tau_{ij}=\left(\delta_{ij}-\frac{(t_i\pm_i t_{-i})(t_{-j}\pm_j t_j)}{2}\right)_{ij}\cdot\operatorname{diag}(-1,1,\dots,1).
\]
We can now compose this map $\operatorname{Q}_{2n-1}\to \operatorname{SO}(2n)$ with the natural projection $\operatorname{SO}(2n)\to\operatorname{Q}_{2n-1}\colon A\mapsto A\cdot v_1$. The resulting morphism $\pi\circ \tau\colon \operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ is given by
\[
u=(t_1,\dots,t_n,t_{-n},\dots,t_{-1})\mapsto -v_1+(t_1+t_{-1})\sum_{i=1}^n\left(\left(\frac{t_i+t_{-i}}{2}\right)v_i+\left(\frac{t_i-t_{-i}}{2}\right)v_{-i}\right)
\]
which can equivalently be written as
\[
((t_1+t_{-1})t_1-1,\dots,(t_1+t_{-1})t_n,(t_1+t_{-1})t_{-n},\dots,(t_1+t_{-1})t_{-1}-1).
\]
This is now the morphism $\operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ which induces the connecting map in the long exact $\mathbb{A}^1$-homotopy sequence.
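As a consistency check, the image point indeed lies on $\operatorname{Q}_{2n-1}$, described by $\sum_{i=1}^nX_iX_{-i}=1$ (cf. Section~\ref{sec:degree} below): writing $c=t_1+t_{-1}$ and using $\sum_{i=1}^nt_it_{-i}=1$, we compute
\[
(ct_1-1)(ct_{-1}-1)+\sum_{i=2}^nc^2t_it_{-i}=c^2t_1t_{-1}-c^2+1+c^2(1-t_1t_{-1})=1.
\]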
\subsection{A degree computation}
\label{sec:degree}
Now we need to compute the degree of the morphism $\operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ determined above. We will see later that the attaching map for the long exact $\mathbb{A}^1$-homotopy sequence for $\operatorname{Q}_{2n-1}\to\operatorname{V}_{2,2n+1}\to\operatorname{Q}_{2n}$ is then obtained as the composition $\operatorname{Q}_{2n-1}\to\operatorname{SO}(2n)\to\operatorname{Q}_{2n-1}$ of the clutching map $\operatorname{Q}_{2n-1}\to\operatorname{SO}(2n)$ with the natural projection $\operatorname{SO}(2n)\to\operatorname{Q}_{2n-1}$. Hence the degree of the above morphism $\operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ completely describes the attaching map in the long exact homotopy sequence.
By \cite[Proposition 4.1, Corollary 4.5]{AsokFaselSpheres}, the natural map
\[
[\mathbb{A}^n\setminus\{0\},\mathbb{A}^n\setminus\{0\}]_{\mathbb{A}^1}\to \operatorname{H}^{n-1}(\mathbb{A}^n\setminus\{0\},\mathbf{K}^{\operatorname{MW}}_n)
\]
induced by the identification $\tau_{\leq n-1}\mathbb{A}^n\setminus\{0\}\cong \operatorname{K}(\mathbf{K}^{\operatorname{MW}}_n,n-1)$ is a bijection. Via the natural $\mathbb{A}^1$-equivalence $\operatorname{Q}_{2n-1}\to\mathbb{A}^n\setminus\{0\}$, we also get a natural bijection $[\operatorname{Q}_{2n-1},\operatorname{Q}_{2n-1}]_{\mathbb{A}^1}\cong \operatorname{H}^{n-1}(\operatorname{Q}_{2n-1},\mathbf{K}^{\operatorname{MW}}_n)$.
Therefore, the degree of a morphism $\operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ can be determined via cohomology. First, a generator for $\operatorname{H}^{n-1}(\operatorname{Q}_{2n-1},\mathbf{K}^{\operatorname{MW}}_n)\cong \operatorname{GW}(F)$ is given by the orientation class, cf. \cite[Example 4.4]{AsokFaselSpheres}: writing $\operatorname{Q}_{2n-1}=\operatorname{V}(\sum_{i=1}^nX_iX_{-i}-1)$, the vanishing locus of $X_2,\dots,X_n$ defines a smooth scheme $Y\subset \operatorname{Q}_{2n-1}$ of codimension $n-1$. Then $X_1$ is an invertible function on $Y$, hence gives rise to $[X_1]\in \mathbf{K}^{\operatorname{MW}}_1(F(Y))$. The Koszul complex for the regular sequence $X_2,\dots,X_n$ gives a generator of the $F(Y)$-vector space $\omega_Y=\operatorname{Ext}^{n-1}_{\mathscr{O}_{X,Y}}(F(Y),\mathscr{O}_{X,Y})$ and an isomorphism $\mathbf{K}^{\operatorname{MW}}_1(F(Y))\cong\mathbf{K}^{\operatorname{MW}}_1(F(Y),\omega_Y)$. Then the class $[X_1]\in\mathbf{K}^{\operatorname{MW}}_1(F(Y),\omega_Y)$ defines a cocycle whose class in $\operatorname{H}^{n-1}(\operatorname{Q}_{2n-1},\mathbf{K}^{\operatorname{MW}}_n)$ is the orientation class $\xi$. With this definition, the degree of the morphism $f\colon \operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ can now be determined via $\deg f\cdot\xi=f^\ast(\xi)$.
\begin{proposition}
\label{prop:degree}
For the morphism $\pi\circ\tau\colon \operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ given by mapping $(t_1,\dots,t_n,t_{-n},\dots,t_{-1})$ to
\[
((t_1+t_{-1})t_1-1, \dots, (t_1+t_{-1})t_n, (t_1+t_{-1})t_{-n}, \dots,(t_1+t_{-1})t_{-1}-1),
\]
we have
\[
\deg(\pi\circ \tau)=\left\{\begin{array}{ll}
2&n \equiv 0\bmod 2\\
\mathbb{H}&n\equiv 1\bmod 2
\end{array}\right.
\]
\end{proposition}
\begin{proof}
We compute the pullback of the orientation class $\xi$ along $\pi\circ\tau$. Note that the above map is not flat. This can be seen e.g. by the fact that the preimage of the subscheme $\operatorname{V}(X_2,\dots,X_n)$ under $\pi\circ\tau$ is
\[
\operatorname{V}((X_1+X_{-1})X_2,\dots,(X_1+X_{-1})X_n)= \operatorname{V}(X_1+X_{-1})\cup\operatorname{V}(X_2,\dots,X_n)
\]
and thus has two components of different (co)dimension. To compute the pullback in terms of Gersten complexes, we proceed as in \cite{rost}: factor the morphism $\pi\circ\tau$ as
\[
\operatorname{Q}_{2n-1}\xrightarrow{(\operatorname{id},\pi\circ\tau)}\operatorname{Q}_{2n-1}\times\operatorname{Q}_{2n-1} \xrightarrow{\operatorname{pr}_2}\operatorname{Q}_{2n-1}
\]
of a regular embedding (whose normal bundle is the pullback $(\pi\circ\tau)^\ast\mathscr{T}\operatorname{Q}_{2n-1}$ of the tangent bundle of $\operatorname{Q}_{2n-1}$) and a smooth morphism.
The pullback of the class on $\operatorname{Q}_{2n-1}$ along $\operatorname{pr}_2$ can be described easily: it is given by the class of $[X_1]$ on the subscheme $\operatorname{V}(X_2,\dots,X_n)$, where here the $X_i$ are viewed as functions on $\operatorname{Q}_{2n-1}\times\operatorname{Q}_{2n-1}$ via the projection $\operatorname{pr}_2$.
For the pullback along the regular embedding $f\colon Y\hookrightarrow X$, we can use the description given in \cite{rost} or \cite{fasel:chowwittring}, cf. \cite{AsokFaselEuler} for an identification with pullbacks in sheaf cohomology. This definition uses the deformation to the normal cone: in the deformation space $D(X,Y)=\operatorname{Bl}_{Y\times \{0\}}(X\times\mathbb{A}^1)\setminus \operatorname{Bl}_YX$, we have the normal bundle $\mathscr{N}_YX\hookrightarrow D(X,Y)$ embedded as a divisor, with open complement $X\times(\mathbb{A}^1\setminus\{0\})$. The pullback $\operatorname{H}^i(X,\mathbf{K}^{\operatorname{MW}}_j)\to \operatorname{H}^i(Y,\mathbf{K}^{\operatorname{MW}}_j)$ is then defined by composing the flat pullback to $X\times(\mathbb{A}^1\setminus\{0\})$, multiplication by the uniformizer of $X\times\{0\}\subseteq X\times\mathbb{A}^1$, followed by the boundary map to the divisor $\mathscr{N}_YX$ and finally the homotopy invariance map to $Y$. Tracing through the definition in the specific case of the graph embedding $\operatorname{Q}_{2n-1}\hookrightarrow\operatorname{Q}_{2n-1}\times\operatorname{Q}_{2n-1}$, we see that the component $\operatorname{V}(X_1+X_{-1})$ doesn't contribute to the pullback cycle (essentially for codimension reasons). The underlying scheme of the pullback cycle is the other component of the preimage, $\operatorname{V}(X_2,\dots,X_n)$, and the pullback of the class $[X_1]\in\mathbf{K}^{\operatorname{MW}}_1(F(Y),\omega_Y)$ is given by
\[
[(X_1+X_{-1})X_1-1]\in \mathbf{K}^{\operatorname{MW}}_1(F(Y),\omega_Y).
\]
But on $F(Y)$, we have $X_1X_{-1}=1$, hence $[(X_1+X_{-1})X_1-1]=[X_1^2]$. The degree $\deg(\pi\circ\tau)$ is therefore determined by the equality $[X_1^2]=\deg(\pi\circ\tau)[X_1]$ in $\mathbf{K}^{\operatorname{MW}}_1(F(Y))$.
If $n$ is even, we have $[X_1^2]=2[X_1]$, as discussed in the proof of \cite[Theorem 4.9]{AsokFaselSpheres}. If $n$ is odd, we have $[X_1^2]=\mathbb{H}[X_1]$, as discussed in the proof of \cite[Theorem 4.11]{AsokFaselSpheres}. This proves the claim.
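For the reader's convenience, we recall the Milnor--Witt identity underlying both cases: for a unit $a$ we have
\[
[a^2]=[a]+[a]+\eta[a][a]=(1+\langle a\rangle)[a].
\]
The identification of the resulting class with $2[X_1]$ resp. $\mathbb{H}[X_1]$, depending on the parity of $n$, is carried out in the cited proofs.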
\end{proof}
\begin{remark}
Note that this fits the classical pattern. From \cite[Lemma 1.7]{levine}, we know that the reduced motivic Euler characteristic of the sphere $\operatorname{S}^{p,q}=(\operatorname{S}^1_s)^{\wedge (p-q)}\wedge (\mathbb{G}_{\operatorname{m}})^{\wedge q}$ is $\chi(\operatorname{S}^{p,q})=(-1)^p\langle-1\rangle^q\in \operatorname{GW}(F)$. This is the negative of the $\mathbb{A}^1$-Brouwer degree of the antipodal map $\operatorname{S}^{p,q}\to\operatorname{S}^{p,q}$. In the particular case $\operatorname{Q}_{2n-1}\simeq \operatorname{S}^{2n-1,n}$, the Euler characteristic is $-\langle-1\rangle ^n$ and the degree of the antipodal map is $\langle-1\rangle^n$. The degree of the map $\operatorname{Q}_{2n-1}\to\operatorname{Q}_{2n-1}$ considered above is $1+\langle-1\rangle^n$, i.e. 1 plus the degree of the antipodal map, recovering exactly the classical result in \cite[Section 23]{steenrod}.
\end{remark}
\begin{remark}
For $F=\mathbb{C}$, i.e. under complex realization, the motivic degree of the map in $\operatorname{GW}(\mathbb{C})\cong\mathbb{Z}$ is always $2$, independent of $n$. This fits exactly with the classical situation of the torsor $\operatorname{SO}(2n+1)\to \operatorname{S}^{2n}$, where the degree of the map $\operatorname{S}^{2n-1}\to\operatorname{S}^{2n-1}$ will always be $2$ because the sphere is always odd-dimensional.\footnote{This is one of the few cases in this paper where $\operatorname{SO}(2n+1)$ denotes the compact form, or alternatively the split group $\operatorname{SO}(2n+1,\mathbb{C})$.}
For $F=\mathbb{R}$, the $\mathbb{A}^1$-Brouwer degree of the map is a rank $2$ form in $\operatorname{GW}(\mathbb{R})$ whose signature is $2$ or $0$, depending on whether $n$ is even or odd. This fits exactly with the topological situation: the split form of $\operatorname{O}(2n+1)$ has maximal compact subgroup $\operatorname{O}(n)\times\operatorname{O}(n+1)$ and the relevant $\operatorname{O}(n)$-torsor is $\operatorname{O}(n+1)\to\operatorname{S}^n$. The degree of the attaching map $\operatorname{S}^n\to\operatorname{S}^n$ is $2$ or $0$, depending on whether $n$ is even or odd, exactly as in the motivic situation.
\end{remark}
\section{On \texorpdfstring{$\mathbb{A}^1$}{A1}-homotopy groups of orthogonal Stiefel varieties}
\label{sec:stiefel}
Now we have enough background on the geometry of orthogonal Stiefel varieties $\operatorname{V}_{2,n}$ to determine their $\mathbb{A}^1$-connectivity and the first non-vanishing $\mathbb{A}^1$-homotopy sheaves. The Nisnevich cohomology groups with coefficients in the first non-vanishing $\mathbb{A}^1$-homotopy sheaf will be the natural home for obstructions to splitting off hyperbolic planes.
\begin{proposition}
\label{prop:stiefel1}
The Stiefel variety $\operatorname{V}_{2,2d+1}$ is $\mathbb{A}^1$-$(d-2)$-connected. The first non-vanishing $\mathbb{A}^1$-homotopy sheaf is
\[
\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d+1})\cong \mathbf{K}^{\operatorname{MW}}_d/(1+\langle(-1)^d\rangle)\cong \left\{\begin{array}{ll}
\mathbf{K}^{\operatorname{MW}}_d/2 & d\equiv 0\mod 2\\
\mathbf{K}^{\operatorname{MW}}_d/[\mathbb{H}]\cong\mathbf{I}^d & d\equiv 1\mod 2\end{array}\right.
\]
\end{proposition}
\begin{proof}
The result follows from the long exact sequence in $\mathbb{A}^1$-homotopy associated to the fibration
\[
\operatorname{Q}_{2d-1}\to \operatorname{V}_{2,2d+1}\to \operatorname{Q}_{2d}
\]
from Lemma~\ref{lem:quadfib}. The relevant portion of the long exact $\mathbb{A}^1$-homotopy sequence is the following
\[
\bm{\pi}_{d}^{\mathbb{A}^1}(\operatorname{Q}_{2d})\stackrel{\partial}{\longrightarrow} \bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{Q}_{2d-1})\to\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d+1})\to \bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{Q}_{2d}).
\]
By Morel's computations \cite{MField}, $\operatorname{Q}_{2d}$ is $\mathbb{A}^1$-$(d-1)$-connected and $\operatorname{Q}_{2d-1}$ is $\mathbb{A}^1$-$(d-2)$-connected. In particular, all $\mathbb{A}^1$-homotopy sheaves to the right of $\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d+1})$ will vanish. Moreover, Morel's computations \cite{MField} imply that $\bm{\pi}^{\mathbb{A}^1}_d(\operatorname{Q}_{2d})\cong\mathbf{K}^{\operatorname{MW}}_d$ and $\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{Q}_{2d-1})\cong\mathbf{K}^{\operatorname{MW}}_d$. In particular, we can rewrite the above exact sequence as
\[
\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d+1})\cong\operatorname{coker}\left( \mathbf{K}^{\operatorname{MW}}_d \stackrel{\partial}{\longrightarrow} \mathbf{K}^{\operatorname{MW}}_d\right).
\]
It remains to identify the map $\partial$. In the category of strictly $\mathbb{A}^1$-invariant sheaves of abelian groups, we have an identification
\[
\operatorname{Hom}(\mathbf{K}^{\operatorname{MW}}_d,\mathbf{K}^{\operatorname{MW}}_d)\cong\operatorname{K}^{\operatorname{MW}}_0(F),
\]
so that $\partial$ is given by multiplication by some element $\partial\in\operatorname{GW}(F)$. Moreover, since $[\operatorname{Q}_{2d},\operatorname{Q}_{2d}]_{\mathbb{A}^1}\cong [\operatorname{Q}_{2d-1},\operatorname{Q}_{2d-1}]_{\mathbb{A}^1}\cong\operatorname{GW}(F)$, we can identify $\partial\in \operatorname{GW}(F)$ by tracing through the definition of the connecting map for the special case $\operatorname{id}_{\operatorname{Q}_{2d}}\in \pi^{\mathbb{A}^1}_d(\operatorname{Q}_{2d})(\operatorname{Q}_{2d})$. The result will be a map $\operatorname{Q}_{2d-1}\to\operatorname{Q}_{2d-1}$ which represents $\partial\in\operatorname{GW}(F)$. By Lemma~\ref{lem:quadfib}, the connecting map for the fiber sequence $\operatorname{Q}_{2d-1}\to \operatorname{V}_{2,2d+1}\to\operatorname{Q}_{2d}$ can be identified with the connecting map for the fiber sequence $\operatorname{Q}_{2d-1}\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d-1)\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d)$, applied to the map $\operatorname{Q}_{2d}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d)$ which classifies the $\operatorname{SO}(2d)$-torsor $\operatorname{SO}(2d+1)\to\operatorname{SO}(2d+1)/\operatorname{SO}(2d)\cong \operatorname{Q}_{2d}$.
To identify the connecting map for the fiber sequence $\operatorname{Q}_{2d-1}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d-1)\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d)$, recall that the classifying map $\operatorname{Q}_{2d}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d)$ corresponds, via the clutching construction, to a morphism $\operatorname{Q}_{2d-1}\to\operatorname{SO}(2d)$. Extending the fiber sequence one step to the left, we get a fiber sequence $\operatorname{SO}(2d)\to \operatorname{Q}_{2d-1}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d-1)$. Now the connecting map takes $\operatorname{Q}_{2d}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d)$ to the composition
\[
\operatorname{Q}_{2d-1}\to\operatorname{SO}(2d)\to \operatorname{Q}_{2d-1},
\]
where the first map is the clutching map and the second map is the natural projection. By the previous discussion, the element $\partial\in\operatorname{GW}(F)$ is the degree of this composition. The claim then follows from Proposition~\ref{prop:degree}.
It remains to identify the quotient sheaf $\mathbf{K}^{\operatorname{MW}}_d/[\mathbb{H}]$. This is the Witt K-theory sheaf $\mathbf{K}^{\operatorname{W}}_d$, cf. \cite[p.61]{MField}. The identification $\mathbf{K}^{\operatorname{W}}_d\cong \mathbf{I}^d$ follows from \cite[Theorem 2.1]{morel:puissances}.
\end{proof}
\begin{remark}
In the complex realization, the case of $\operatorname{V}_{2,2d+1}$ corresponds to the fibration $\operatorname{S}^{2d-1}\to\operatorname{V}_{2,2d+1}\to\operatorname{S}^{2d}$, where the boundary map $\pi_{i+1}(\operatorname{S}^{2d})\to\pi_{i}(\operatorname{S}^{2d-1})$ is multiplication by $2$.
Proposition~\ref{prop:stiefel1} implies that for any field $F$ of characteristic $\neq 2$, we have
\[
[\operatorname{Q}_{2d-2},\operatorname{V}_{2,2d+1}]_{\mathbb{A}^1}\cong \left\{\begin{array}{ll}
\operatorname{K}^{\operatorname{MW}}_1(F)/2 & d\equiv 0\mod 2\\
\operatorname{I}(F) & d\equiv 1\mod 2\end{array}\right.
\]
which are both trivial over algebraically closed fields, consistent with the connectivity of the Stiefel varieties over $\mathbb{C}$.
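These identifications can be obtained by evaluating contractions of the first non-vanishing $\mathbb{A}^1$-homotopy sheaf: using $\operatorname{Q}_{2d-2}\simeq (\operatorname{S}^1_s)^{\wedge (d-1)}\wedge\mathbb{G}_{\operatorname{m}}^{\wedge (d-1)}$, we have
\[
[\operatorname{Q}_{2d-2},\operatorname{V}_{2,2d+1}]_{\mathbb{A}^1}\cong \left(\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d+1})\right)_{-(d-1)}(F),
\]
and the contractions $(\mathbf{K}^{\operatorname{MW}}_d)_{-(d-1)}\cong \mathbf{K}^{\operatorname{MW}}_1$ resp. $(\mathbf{I}^d)_{-(d-1)}\cong \mathbf{I}$ produce the displayed groups; the computation for $\operatorname{Q}_{2d-1}$ below is analogous, with one additional contraction.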
On the other hand, we have
\[
[\operatorname{Q}_{2d-1},\operatorname{V}_{2,2d+1}]_{\mathbb{A}^1}\cong \left\{\begin{array}{ll}
\operatorname{GW}(F)/2 & d\equiv 0\mod 2\\
\operatorname{W}(F) & d\equiv 1\mod 2\end{array}\right.
\]
Over algebraically closed fields, both cases are isomorphic to $\mathbb{Z}/2\mathbb{Z}$ recovering the classical isomorphism $\pi_{2d-1}\operatorname{V}_{2,2d+1}(\mathbb{C})\cong\mathbb{Z}/2\mathbb{Z}$.
In real realization, there are two cases for the homotopy groups of $\operatorname{V}_{2,2d+1}(\mathbb{R})\cong\operatorname{SO}(d+1,d)/\operatorname{SO}(d,d-1)$. The result is $\mathbb{Z}/2\mathbb{Z}$ whenever $d$ is even and $\mathbb{Z}$ whenever $d$ is odd. This is exactly what the above formula reproduces for the real realization.
\end{remark}
Combining the above technique with Morel's Freudenthal suspension theorem \cite[Theorem 5.60]{MField}, some more $\mathbb{A}^1$-homotopy sheaves of Stiefel varieties can be described. The description is not as explicit as the one above, due to our limited knowledge of unstable $\mathbb{A}^1$-homotopy sheaves of spheres.
\begin{theorem}
\label{thm:stiefelhigh}
Let $F$ be a perfect field of characteristic $\neq 2$, and let $d\geq 3$. Then for all $i\leq 2(d-2)$, there are exact sequences
\[
0\to \bm{\pi}_i^{\mathbb{A}^1}(\operatorname{Q}_{2d-1})/a \to \bm{\pi}^{\mathbb{A}^1}_i(\operatorname{V}_{2,2d+1})\to \ker\left(a\colon\bm{\pi}^{\mathbb{A}^1}_{i-1}(\operatorname{Q}_{2d-1})\to \bm{\pi}^{\mathbb{A}^1}_{i-1}(\operatorname{Q}_{2d-1})\right)\to 0
\]
where $a=2$ if $d\equiv 0\bmod 2$ and $a=[\mathbb{H}]$ if $d\equiv 1\bmod 2$.
\end{theorem}
\begin{proof}
The connecting map can be reinterpreted as being induced from the morphism $\Omega\operatorname{Q}_{2d}\to\operatorname{Q}_{2d-1}$. Then we can consider the composition
\[
\operatorname{Q}_{2d-1}\to\Omega_{\mathbb{A}^1}\Sigma_{\mathbb{A}^1}\operatorname{Q}_{2d-1}\to\operatorname{Q}_{2d-1}.
\]
The first map is the unit of the adjunction $\Sigma_{\mathbb{A}^1}\dashv\Omega_{\mathbb{A}^1}$, and the second map is the morphism inducing the connecting map. Morel's Freudenthal suspension theorem \cite[Theorem 5.60]{MField} implies that the first map is $\mathbb{A}^1$-$2(d-2)$-connected since the sphere $\operatorname{Q}_{2d-1}$ is $\mathbb{A}^1$-$(d-2)$-connected. In particular, the connecting map for $\bm{\pi}^{\mathbb{A}^1}_i$ with $i\leq 2(d-2)$ can be identified with multiplication by the degree of the composition $\operatorname{Q}_{2d-1}\to\Omega\Sigma\operatorname{Q}_{2d-1}\to\operatorname{Q}_{2d-1}$ above. Note that on the relevant homotopy sheaves the first map is multiplication by a unit, which doesn't affect the computation of the kernel or cokernel. The claim then follows from the long exact homotopy sequence of Lemma~\ref{lem:quadfib}.
\end{proof}
\begin{remark}
At this point, not even the $\mathbb{A}^1$-homotopy sheaves $\bm{\pi}^{\mathbb{A}^1}_n(\operatorname{Q}_{2n-1})$ are known. There is an explicit conjecture, though, formulated in \cite[Conjecture 7]{AsokFaselMetastable}. It is expected that for $n\geq 4$ there is an exact sequence of strictly $\mathbb{A}^1$-invariant sheaves of abelian groups of the form
\[
\mathbf{K}^{\operatorname{M}}_{n+2}/24\to \bm{\pi}^{\mathbb{A}^1}_n(\operatorname{Q}_{2n-1})\to \mathbf{GW}^n_{n+1}
\]
which becomes short exact after $n$-fold contraction.
\end{remark}
\begin{proposition}
\label{prop:stiefel2}
The Stiefel variety $\operatorname{V}_{2,2d}$ is $\mathbb{A}^1$-$(d-2)$-connected. For any $n$, we have
\[
\bm{\pi}^{\mathbb{A}^1}_n(\operatorname{V}_{2,2d})\cong \bm{\pi}^{\mathbb{A}^1}_n(\operatorname{Q}_{2d-2})\times \bm{\pi}^{\mathbb{A}^1}_{n}(\operatorname{Q}_{2d-1}).
\]
In particular, the first non-vanishing $\mathbb{A}^1$-homotopy sheaf is
\[
\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d})\cong \mathbf{K}^{\operatorname{MW}}_{d-1}\times\mathbf{K}^{\operatorname{MW}}_d.
\]
\end{proposition}
\begin{proof}
The result follows from the long exact sequence in $\mathbb{A}^1$-homotopy associated to the fibration
\[
\operatorname{Q}_{2d-2}\to \operatorname{V}_{2,2d}\to \operatorname{Q}_{2d-1}
\]
from Lemma~\ref{lem:quadfib}. The relevant portion of the long exact $\mathbb{A}^1$-homotopy sequence is the following
\[
\bm{\pi}_{d}^{\mathbb{A}^1}(\operatorname{Q}_{2d-1})\stackrel{\partial}{\longrightarrow} \bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{Q}_{2d-2})\to\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d})\to \bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{Q}_{2d-1})\to \bm{\pi}_{d-2}^{\mathbb{A}^1}(\operatorname{Q}_{2d-2}).
\]
Since $\operatorname{Q}_{2d-1}$ and $\operatorname{Q}_{2d-2}$ are both $\mathbb{A}^1$-$(d-2)$-connected, all $\mathbb{A}^1$-homotopy sheaves to the right of $\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{Q}_{2d-1})$ will vanish. Using the information on $\mathbb{A}^1$-homotopy sheaves of spheres, cf. the proof of Proposition~\ref{prop:stiefel1}, we can rewrite the above exact sequence as
\[
\bm{\pi}_{d}^{\mathbb{A}^1}(\mathbb{A}^d\setminus\{0\})\stackrel{\partial}{\longrightarrow} \mathbf{K}^{\operatorname{MW}}_{d-1}\to\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d})\to \mathbf{K}^{\operatorname{MW}}_d\to 0
\]
We claim that the projection $\operatorname{V}_{2,2d}\to\operatorname{Q}_{2d-1}$ has a section. This follows from relative obstruction theory as discussed in Section~\ref{sec:prelims}. There is a sequence of obstructions for lifting the identity on $\operatorname{Q}_{2d-1}$ to a map $\operatorname{Q}_{2d-1}\to\operatorname{V}_{2,2d}$; these obstructions live in the cohomology groups $\operatorname{H}^{i+1}_{\operatorname{Nis}}(\operatorname{Q}_{2d-1},\bm{\pi}^{\mathbb{A}^1}_i(\operatorname{Q}_{2d-2}))$ of the base with coefficients in the $\mathbb{A}^1$-homotopy sheaves of the fiber. The only possibly non-trivial such cohomology group of $\operatorname{Q}_{2d-1}$ is
\[
\operatorname{H}^{d-1}_{\operatorname{Nis}}(\operatorname{Q}_{2d-1},\bm{\pi}^{\mathbb{A}^1}_{d-2}(\operatorname{Q}_{2d-2}))\cong \bm{\pi}^{\mathbb{A}^1}_{d-2}(\operatorname{Q}_{2d-2})_{-d}.
\]
Since the sphere $\operatorname{Q}_{2d-2}$ is $\mathbb{A}^1$-$(d-2)$-connected, this implies that all the obstruction groups for lifting the identity vanish, and we obtain the required section $\operatorname{Q}_{2d-1}\to\operatorname{V}_{2,2d}$. As a consequence, the boundary map $\partial$ in the above $\mathbb{A}^1$-homotopy sequence is trivial, and the extension is split. This proves all the claims.
\end{proof}
\begin{remark}
Under complex realization, the case of $\operatorname{V}_{2,2d}$ corresponds to the fibration $\operatorname{S}^{2d-2}\to\operatorname{V}_{2,2d}\to\operatorname{S}^{2d-1}$. All the boundary maps $\pi_{i+1}(\operatorname{S}^{2d-1})\to\pi_{i}(\operatorname{S}^{2d-2})$ are zero because they are induced by the sum of the identity and the antipodal map on the $(2d-2)$-sphere. Therefore, we get short exact sequences
\[
0\to \pi_n(\operatorname{S}^{2d-2})\to \pi_n(\operatorname{V}_{2,2d})\to\pi_n(\operatorname{S}^{2d-1})\to 0.
\]
The first non-vanishing homotopy group is then $\pi_{2d-2}(\operatorname{V}_{2,2d}(\mathbb{C}))\cong\mathbb{Z}$; its motivic analogue is
\[
[\operatorname{Q}_{2d-2},\operatorname{V}_{2,2d}]_{\mathbb{A}^1}\cong \left(\mathbf{K}^{\operatorname{MW}}_{d-1}\right)_{-(d-1)}\times \left(\mathbf{K}^{\operatorname{MW}}_{d}\right)_{-(d-1)}\cong \operatorname{GW}(k)\times \operatorname{K}^{\operatorname{MW}}_1(k).
\]
Over an algebraically closed field, the first summand corresponds to the classical homotopy group and the second summand is $\operatorname{K}^{\operatorname{M}}_1(k)\cong k^\times$; modulo primes only the first summand is visible. The next homotopy group is $\pi_{2d-1}(\operatorname{V}_{2,2d}(\mathbb{C}))\cong \mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}$ with the following motivic analogue
\[
[\operatorname{Q}_{2d-1},\operatorname{V}_{2,2d}]_{\mathbb{A}^1}\cong \left(\mathbf{K}^{\operatorname{MW}}_{d-1}\right)_{-d}\times \left(\mathbf{K}^{\operatorname{MW}}_{d}\right)_{-d}\cong \operatorname{W}(k)\times\operatorname{GW}(k).
\]
In the real realization, Proposition~\ref{prop:stiefel2} corresponds to the case where
\[
\operatorname{SO}(d,d)/\operatorname{SO}(d-1,d-1)\simeq \operatorname{S}^{d-1}\times\operatorname{S}^{d-1}.
\]
\end{remark}
\begin{remark}
In particular, computations of the $\operatorname{Q}_{2d-1}$-part of the obstruction groups for splitting off hyperbolic planes from quadratic forms will follow from computations of obstruction groups for splitting off trivial lines from projective modules, cf. \cite{AsokFaselSplitting}. The $\mathbb{A}^1$-homotopy sheaves of the $\operatorname{Q}_{2d}$-part can again be related to those of $\operatorname{Q}_{2d-1}$ via Morel's Freudenthal suspension theorem.
\end{remark}
\section{The splitting theorems and examples}
\label{sec:splitting}
Having determined the relevant $\mathbb{A}^1$-homotopy sheaves of the orthogonal Stiefel varieties $\operatorname{V}_{2,n}$, we can now deduce the splitting theorems and discuss examples.
\subsection{Proofs of splitting results}
The first result is a version of the splitting result of Roy, cf. \cite[Theorem 7.2]{roy}.
\begin{theorem}
\label{thm:stablesplitting}
Let $F$ be a perfect field of characteristic unequal to $2$ and let $X=\operatorname{Spec}A$ be a smooth affine scheme over $F$ of dimension $d$. Let $(\mathscr{P},\phi)$ be a generically trivial quadratic form over $A$ of rank $2n$ or $2n+1$. If $d\leq n-1$, then $(\mathscr{P},\phi)$ splits off a hyperbolic plane.
\end{theorem}
\begin{proof}
Denote by $m$ the rank of the quadratic form.
The generically trivial quadratic form $(\mathscr{P},\phi)$ corresponds to a rationally trivial $\operatorname{SO}(m)$-torsor over $X$, i.e., an element of $\operatorname{H}^1_{\operatorname{Nis}}(X;\operatorname{SO}(m))$. By Theorem~\ref{thm:representability}, we can identify the latter pointed set with the set $[X,{\operatorname{B}_{\operatorname{Nis}}}\operatorname{SO}(m)]_{\mathbb{A}^1}$ of $\mathbb{A}^1$-homotopy classes of maps into the classifying space of rationally trivial $\operatorname{SO}(m)$-torsors, pointed by the trivial torsor. By Proposition~\ref{prop:stabil}, we can reformulate the claim as follows: if the rank--dimension inequality is satisfied, then the element $P\in [X,{\operatorname{B}_{\operatorname{Nis}}}\operatorname{SO}(m)]_{\mathbb{A}^1}$ corresponding to $(\mathscr{P},\phi)$ is in the image of the map
\[
[X,{\operatorname{B}_{\operatorname{Nis}}}\operatorname{SO}(m-2)]_{\mathbb{A}^1}\to [X,{\operatorname{B}_{\operatorname{Nis}}}\operatorname{SO}(m)]_{\mathbb{A}^1}
\]
induced by the stabilization morphism.
By the relative $\mathbb{A}^1$-obstruction theory, a lift of the map $X\to{\operatorname{B}_{\operatorname{Nis}}}\operatorname{SO}(m)$ exists if the obstructions in the cohomology groups $\operatorname{H}^{i+1}_{\operatorname{Nis}}(X;\bm{\pi}_i^{\mathbb{A}^1}(\operatorname{V}_{2,m}))$ vanish for all $i$. Since $X$ is a finite-dimensional smooth scheme, the obstruction groups will vanish automatically for $i\geq d$. The claim follows if $\bm{\pi}_i^{\mathbb{A}^1}(\operatorname{V}_{2,m})=0$ for all $i\leq d-1$. For the case $m=2n+1$, Proposition~\ref{prop:stiefel1} states that the Stiefel varieties are $\mathbb{A}^1$-$(n-2)$-connected. Similarly, for $m=2n$, the Stiefel varieties are $\mathbb{A}^1$-$(n-2)$-connected by Proposition~\ref{prop:stiefel2}. By assumption $d\leq n-1$, i.e. $d-1\leq n-2$, which is exactly the required vanishing range.
\end{proof}
\begin{remark}
The splitting result in Theorem~\ref{thm:stablesplitting} is a version of a classical result analogous to Serre's splitting result for projective modules. In \cite[Theorem 7.2]{roy}, A. Roy has proved a splitting result which implies in particular that if $A$ is a commutative ring of Krull dimension $d$ in which $2$ is invertible, a stably hyperbolic quadratic form is hyperbolic if the rank of the underlying projective module is $\geq 2d+2$. For stably hyperbolic forms, Roy's result is much more general than Theorem~\ref{thm:stablesplitting} because the latter requires the commutative ring to be smooth over a perfect field. On the other hand, \cite[Theorem 7.2]{roy} needs something close to the stably hyperbolic hypothesis to obtain the sharp bound, whereas we only need a generically hyperbolic quadratic form. In any case, the alternative $\mathbb{A}^1$-topological proof has the appeal that it is very close to the arguments one would use in classical topology (which inspired the algebraic proofs of Serre and Roy).
\end{remark}
Next, we will discuss what happens at the edge of the stable range. For projective modules, the first obstruction to splitting off a trivial line is given by the Euler class, cf. \cite{MField}. Something similar but slightly different happens for quadratic forms. There are again obstruction classes controlling when a generically trivial quadratic form splits off a hyperbolic plane, and they live in the cohomology of $X$ with coefficients in the first non-vanishing $\mathbb{A}^1$-homotopy sheaf of the appropriate Stiefel variety.
\begin{theorem}
\label{thm:euler1}
Let $F$ be a perfect field of characteristic unequal to $2$ and let $X=\operatorname{Spec}A$ be a smooth affine scheme over $F$ of dimension $d$. Let $(\mathscr{P},\phi)$ be a generically trivial quadratic form over $A$ of rank $2d+1$ admitting a spin lift. Then $(\mathscr{P},\phi)$ splits off a hyperbolic plane if and only if the obstruction class in
\[
\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_d/(1+\langle(-1)^d\rangle))\cong\left\{ \begin{array}{ll}
\widetilde{\operatorname{CH}}^d(X)/2 & d\equiv 0\bmod 2\\
\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d) & d\equiv 1\bmod 2
\end{array}\right.
\]
vanishes.
\end{theorem}
\begin{proof}
The argument is similar to the one for Theorem~\ref{thm:stablesplitting}. The relevant first obstruction lives in $\operatorname{H}^d_{\operatorname{Nis}}(X;\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d+1}))$, cf. Section~\ref{sec:prelims}, and by the dimension assumption all higher obstructions vanish. The relevant homotopy sheaf of $\operatorname{V}_{2,2d+1}$ has been determined in Proposition~\ref{prop:stiefel1}.
\end{proof}
\begin{theorem}
\label{thm:euler2}
Let $F$ be a perfect field of characteristic unequal to $2$ and let $X=\operatorname{Spec}A$ be a smooth affine scheme over $F$ of dimension $d$. Let $(\mathscr{P},\phi)$ be a generically hyperbolic quadratic form over $A$ of rank $2d$ admitting a spin lift. Then $(\mathscr{P},\phi)$ splits off a hyperbolic plane if and only if the obstruction class in
\[
\widetilde{\operatorname{CH}}^d(X)\times \operatorname{coker}\left(\operatorname{CH}^{d-1}(X)\to\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d)\right)
\]
vanishes.
\end{theorem}
\begin{proof}
By the relative $\mathbb{A}^1$-obstruction theory, the relevant obstruction lives in the group $\operatorname{H}^d_{\operatorname{Nis}}(X;\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d}))$ and by the dimension assumption all higher obstructions will vanish. By Proposition~\ref{prop:stiefel2}, we have $\bm{\pi}_{d-1}^{\mathbb{A}^1}(\operatorname{V}_{2,2d})\cong \mathbf{K}^{\operatorname{MW}}_d\times\mathbf{K}^{\operatorname{MW}}_{d-1}$ which induces a product decomposition of the Nisnevich cohomology. For the first factor we get an Euler class group $\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_d)\cong \widetilde{\operatorname{CH}}^d(X)$ (note that we have the trivial duality here because we assumed trivial determinant). For the second factor we get the cohomology group $\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_{d-1})$ which sits in an exact sequence
\[
\operatorname{CH}^{d-1}(X)\cong\operatorname{H}^{d-1}_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{M}}_{d-1})\xrightarrow{\beta} \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^{d})\to \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_{d-1})\to
\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{M}}_{d-1}).
\]
The last term vanishes by the Gersten resolution for Milnor K-theory. This induces the required presentation.
\end{proof}
\begin{remark}
There are natural generalizations of the above results: if the quadratic form doesn't admit a spin lift, we would have to consider cohomology equivariant with respect to the action of $\bm{\pi}^{\mathbb{A}^1}_1{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(n)\cong\mathscr{H}^1_{\mathrm{\acute{e}t}}(\mu_2)$. Of course, understanding or computing these equivariant cohomology groups is another matter.
\end{remark}
\begin{remark}
Note that the splitting for the Stiefel variety implies in particular that the map $\operatorname{Q}_{2d+1}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d+1)$ factors through the stabilization map ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d)\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d+1)$.
\end{remark}
\subsection{Consequences and examples: odd-rank forms}
\label{sec:examples1}
\begin{proposition}
\label{prop:special1}
Let $F$ be an algebraically closed field of characteristic unequal to $2$ and let $X=\operatorname{Spec} A$ be a smooth affine variety of dimension $d$ over $F$. A generically trivial quadratic form $(\mathscr{P},\phi)$ over $A$ of rank $2d+1$ which admits a spin lift splits off a hyperbolic plane.
\end{proposition}
\begin{proof}
There are two cases. By Theorem~\ref{thm:euler1}, if $d\equiv 0\bmod 2$, the relevant obstruction lives in $\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_d/2)$. The natural morphism $\mathbf{K}^{\operatorname{MW}}_d/2\to\mathbf{K}^{\operatorname{M}}_d/2$ induces a morphism of Gersten or Rost--Schmid complexes
\[
\xymatrix{
\bigoplus_{x\in X^{(d-1)}}\operatorname{K}^{\operatorname{MW}}_1(\kappa(x))/2\ar[r] \ar@{>>}[d] &
\bigoplus_{y\in X^{(d)}}\operatorname{GW}(\kappa(y))/2 \ar[r] \ar[d]^\cong & \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_d/2) \ar[r] \ar[d] &0 \\
\bigoplus_{x\in X^{(d-1)}}\operatorname{K}^{\operatorname{M}}_1(\kappa(x))/2\ar[r] &
\bigoplus_{y\in X^{(d)}}\mathbb{Z}/2\mathbb{Z} \ar[r] & \operatorname{Ch}^d(X) \ar[r] &0.
}
\]
Since $F$ is algebraically closed, the residue fields $\kappa(y)$ at the closed points $y\in X^{(d)}$ are quadratically closed, so $\operatorname{GW}(\kappa(y))/2\cong\mathbb{Z}/2\mathbb{Z}$ for all $y\in X^{(d)}$, and the left-hand vertical morphism is surjective; therefore, the right-hand vertical morphism is an isomorphism. By Roitman's theorem, $\operatorname{CH}^d(X)$ is uniquely divisible and therefore $\operatorname{Ch}^d(X)=0$. This implies that the obstruction group $\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{K}^{\operatorname{MW}}_d/2)$ vanishes, and by Theorem~\ref{thm:euler1}, the quadratic form splits off a hyperbolic plane.
In the case $d\equiv 1\bmod 2$, the relevant obstruction group is $\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d)$. An argument similar to the one above implies that $\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d)\cong \operatorname{Ch}^d(X)=0$, so again the obstruction vanishes and the quadratic form splits off a hyperbolic plane.
\end{proof}
\begin{example}
To see that some condition is necessary in Proposition~\ref{prop:special1}, we discuss the case of the real algebraic 2-sphere $S=\mathbb{R}[X,Y,Z]/(X^2+Y^2+Z^2-1)$. By the Serre--Swan theorem, $\operatorname{Pic}(S)=0$ and so the discussion in \cite[Section 6.3]{hyperbolic-dim3} implies that $\operatorname{H}^1_{\operatorname{Nis}}(S,\operatorname{SO}(5))\cong \operatorname{H}^1_{\operatorname{Nis}}(S,\operatorname{Sp}_4)$. By \cite[Propositions 5.5, 5.13 and 5.23]{hyperbolic-dim3}, the morphism $\widetilde{\operatorname{CH}}^2(S)\cong \operatorname{H}^1_{\operatorname{Nis}}(S,\operatorname{Spin}(3))\to \operatorname{H}^1_{\operatorname{Nis}}(S,\operatorname{Spin}(5))\cong \widetilde{\operatorname{CH}}^2(S)$ induced by the stabilization map $\operatorname{Spin}(3)\to\operatorname{Spin}(5)$ is identified with multiplication by $2$. In particular, any class in $\widetilde{\operatorname{CH}}^2(S)$ which is not divisible by $2$ provides an example of a rank 5 quadratic form which doesn't split off a hyperbolic plane. By \cite[Theorem 16.3.8]{fasel:memoir}, we have $\operatorname{H}^2(S,\mathbf{I}^2)\cong \mathbb{Z}$. Using the natural surjection $\widetilde{\operatorname{CH}}^2(S)\to \operatorname{H}^2(S,\mathbf{I}^2)$ we find that there are classes in $\widetilde{\operatorname{CH}}^2(S)$ which are not divisible by $2$. These correspond to quadratic forms of rank $5$ which are not obtained from rank $3$ forms by adding a hyperbolic plane.
\end{example}
\begin{example}
To give an example for $d\equiv 1\bmod 2$, recall from \cite{hyperbolic-dim3} that for $X$ a smooth affine 3-fold over a perfect field of characteristic $\neq 2$, we have $\operatorname{H}^1_{\operatorname{Nis}}(X,\operatorname{Spin}(7))\cong \operatorname{CH}^2(X)$ with the bijection given by mapping a quadratic form or spin torsor to its second Chern class. On the other hand, we have a surjection $\operatorname{H}^1_{\operatorname{Nis}}(X,\operatorname{Spin}(5))\twoheadrightarrow\widetilde{\operatorname{CH}}^2(X)$ with the invariant given by the first Pontryagin class of the corresponding $\operatorname{Sp}_4$-bundle. The stabilization morphism $\operatorname{H}^1_{\operatorname{Nis}}(X,\operatorname{Spin}(5))\to \operatorname{H}^1_{\operatorname{Nis}}(X,\operatorname{Spin}(7))$ is identified with the natural morphism $\widetilde{\operatorname{CH}}^2(X)\to\operatorname{CH}^2(X)$. This morphism isn't necessarily surjective, and the obstruction to surjectivity is exactly given by the Bockstein morphism $\operatorname{CH}^2(X)\to\operatorname{Ch}^2(X)\to \operatorname{H}^3(X,\mathbf{I}^3)$. Incidentally, in this case, the second $\mathbb{A}^1$-homotopy sheaf of ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(5)$ is given by $\mathbf{K}^{\operatorname{MW}}_2$, which sits in the extension
\[
0\to\mathbf{I}^3\to \bm{\pi}^{\mathbb{A}^1}_2({\operatorname{B}}_{\operatorname{Nis}}\operatorname{Spin}(5))\to \mathbf{K}^{\operatorname{M}}_2\to 0
\]
and the Bockstein morphism is exactly the boundary morphism for this extension.
Consequently, the obstruction to splitting off a hyperbolic plane is exactly the third integral Stiefel--Whitney class $\beta(\overline{\operatorname{c}}_2)$. Under the natural reduction morphism $\operatorname{H}^3(X,\mathbf{I}^3)\to \operatorname{Ch}^3(X)$, this class maps to $\operatorname{Sq}^2(\overline{\operatorname{c}}_2)=\overline{\operatorname{c}}_3$. From the classification above, we have $\operatorname{H}^1(X,\operatorname{Spin}(7))\cong \operatorname{CH}^2(X)$, and therefore any class in $\operatorname{CH}^2(X)$ is realizable as the second Chern class of a quadratic form. In particular, over a smooth affine 3-fold $X$, we can obtain an oriented quadratic form of rank $7$ which doesn't split off a hyperbolic plane whenever $\operatorname{Sq}^2\colon \operatorname{Ch}^2(X)\to\operatorname{Ch}^3(X)$ is nontrivial.
\end{example}
\begin{example}
The obstructions in Theorem~\ref{thm:euler1} are trivial for smooth affine quadrics of even dimension $\geq 8$ over quadratically closed fields. Any morphism $\operatorname{Q}_{2d}\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d+1)$ necessarily factors through the morphism ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d-1)\to {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d+1)$. Viewing the morphism as an element of $[\operatorname{Q}_{2d},{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2d+1)]_{\mathbb{A}^1}\cong \left(\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{SO}(2d+1))\right)_{-d}$, the obstruction to lifting is simply the image under the morphism
\[
\left(\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{SO}(2d+1))\right)_{-d}\to \left(\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d+1})\right)_{-d}
\]
induced by the projection $\operatorname{SO}(2d+1)\to \operatorname{V}_{2,2d+1}$. It can be checked on topological realization that this morphism is zero, hence the primary obstruction to splitting off a hyperbolic plane from a rank $2d+1$ form vanishes for the sphere $\operatorname{Q}_{2d}$.
\end{example}
\begin{example}
A specific example of a stably trivial quadratic form which doesn't split off a hyperbolic plane can also be obtained (but that's more a low-dimensional accident). Consider $\operatorname{Q}_4$ over $\mathbb{R}$. By the results of \cite{hyperbolic-dim3}, the isomorphism classes of $\operatorname{SO}(3)$-torsors are in bijection with $\operatorname{GW}(\mathbb{R})\cong\operatorname{H}^2(\operatorname{Q}_4,\mathbf{K}^{\operatorname{MW}}_2)$. The same is true for $\operatorname{SO}(5)$-torsors. The natural stabilization map $\operatorname{SO}(3)\to\operatorname{SO}(5)$ induces multiplication by $2$ on homotopy sheaves, cf. \cite{hyperbolic-dim3}. Finally, the isomorphism classes of $\operatorname{SO}(7)$-torsors are in bijection with $\mathbb{Z} \cong \operatorname{H}^2(\operatorname{Q}_4,\mathbf{K}^{\operatorname{M}}_2)$, and the stabilization map $\operatorname{SO}(5)\to\operatorname{SO}(7)$ corresponds to the rank map $\dim\colon \operatorname{GW}(\mathbb{R})\to\mathbb{Z}$. Now we can take an $\operatorname{SO}(5)$-torsor over $\operatorname{Q}_4$ corresponding to a class in $\operatorname{GW}(\mathbb{R})$ of rank $0$ which is not divisible by $2$, for instance $\langle 1\rangle-\langle -1\rangle$. This torsor becomes trivial after adding a single hyperbolic plane, but it doesn't split off a hyperbolic plane because its invariant in $\operatorname{GW}(\mathbb{R})$ isn't divisible by $2$.
\end{example}
\begin{remark}
At this point, we have no precise information about the obstruction classes. An obvious guess is that the obstruction classes for forms of rank $2d+1$ with $d\equiv 1\bmod 2$ are integral Stiefel--Whitney classes. In this case, the obstructions would vanish for stably trivial forms. However, looking at the conjectural structure of the first unstable $\mathbb{A}^1$-homotopy sheaf of $\operatorname{SO}(2d+1)$, cf. the discussion in Appendix~\ref{sec:stabilization}, it's also conceivable that the obstruction classes for odd-rank forms vanish altogether. In any case, one would expect significantly stronger splitting results than those discussed in the present paper. Very likely, an investigation of the Chow--Witt rings of ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n+1)$ would be a relevant step for the identification of the obstruction class.
\end{remark}
\subsection{Consequences and examples: even-rank bundles}
Recall from Theorem~\ref{thm:euler2} that for a generically split quadratic form of rank $2d$ over a smooth affine scheme $X=\operatorname{Spec} A$, the first obstruction to splitting off a hyperbolic plane lives in $\widetilde{\operatorname{CH}}^d(X)\times \operatorname{H}^d(X,\mathbf{W})$. We now want to discuss the relation between the Chow--Witt part of the obstruction class and the Euler class of quadratic forms as defined by Edidin and Graham in \cite{edidin:graham}.
Let $F$ be a field of characteristic $\neq 2$, and let $V$ be an $n$-dimensional $F$-vector space. Equipping $V\oplus V^\vee$ with the quadratic form $\operatorname{ev}(v+\phi)=\phi(v)$ given by evaluation, we can concretely realize the special orthogonal group $\operatorname{SO}(2n)$ as the linear automorphisms of $V\oplus V^\vee$ preserving the evaluation form. If we choose a basis $e_1,\dots,e_n$ of $V$ with the corresponding dual basis $e_1^\vee,\dots,e_n^\vee$, then the evaluation form is given by
\[
\operatorname{ev}\colon \sum_{i=1}^n\left(\lambda_i e_i+\mu_i e_i^\vee\right)\mapsto \sum_{i=1}^n\lambda_i\mu_i.
\]
In the basis $e_i+e_i^\vee$, $e_i-e_i^\vee$ the evaluation form is given by a sum of squares of signature $(n,n)$.
Now we can write down an explicit model for the hyperbolic map $H\colon\operatorname{GL}_n\to \operatorname{SO}(2n)$: a linear automorphism $\alpha\colon V\to V$ in $\operatorname{GL}_n$ maps to the corresponding automorphism $\alpha\oplus(\alpha^{-1})^\vee\colon V\oplus V^\vee\to V\oplus V^\vee$ in $\operatorname{SO}(2n)$. For a unit-length vector $v\in V\oplus V^\vee$ (such as $e_1+e_1^\vee$), the stabilizer of $v$ in $\operatorname{SO}(2n)$ is an orthogonal group $\operatorname{SO}(2n-1)$ and the inclusion $\operatorname{SO}(2n-1)\hookrightarrow\operatorname{SO}(2n)$ is the usual stabilization morphism.
\begin{lemma}
\label{lem:cart}
Let $F$ be a field of characteristic $\neq 2$, and let $V$ be an $n$-dimensional $F$-vector space. Then there is a cartesian square
\[
\xymatrix{
\operatorname{GL}_{n-1}\ar[r] \ar[d] & \operatorname{GL}_n \ar[d]^H \\
\operatorname{SO}(2n-1)\ar[r] & \operatorname{SO}(2n)
}
\]
where the horizontal morphisms are the usual stabilization morphisms. Moreover, the induced morphism on quotients $\operatorname{GL}_n/\operatorname{GL}_{n-1}\to \operatorname{SO}(2n)/\operatorname{SO}(2n-1)$ is an isomorphism.
\end{lemma}
\begin{proof}
The first statement is simply the claim that $\operatorname{GL}_{n-1}$ is the intersection of $\operatorname{GL}_n$ and $\operatorname{SO}(2n-1)$ inside $\operatorname{SO}(2n)$. Note that matrices in the image of $H$ preserve the decomposition $V\oplus V^\vee$. If such a matrix preserves the vector $e_1+e_1^\vee$, then the original matrix in $\operatorname{GL}(V)$ preserves the vector $e_1$. Moreover, because the matrix is orthogonal, it will also preserve the decomposition $V=\langle e_1\rangle\oplus \langle e_2,\dots,e_n\rangle$. This shows that the intersection is exactly $\operatorname{GL}_{n-1}$, the automorphisms of $\langle e_2,\dots,e_n\rangle$.
\end{proof}
Note that the quotient maps above are locally trivial in the Nisnevich topology and therefore give rise to fiber sequences, cf. \cite{gbundles2}. As a consequence of the above statement, we get a homotopy cartesian diagram of classifying spaces
\[
\xymatrix{
{\operatorname{B}}\operatorname{GL}_{n-1}\ar[r] \ar[d] & {\operatorname{B}}\operatorname{GL}_n \ar[d] \\
{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n-1)\ar[r] & {\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n).
}
\]
\begin{proposition}
\label{prop:special2}
Let $F$ be a field of characteristic unequal to $2$, let $X=\operatorname{Spec} A$ be a smooth affine variety of dimension $d$ over $F$ and let $(\mathscr{P},\phi)$ be a generically split quadratic form over $A$ of rank $2d$ admitting a spin lift.
\begin{enumerate}
\item
Under the natural projection
\[
\operatorname{H}^d(X;\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d}))\cong \widetilde{\operatorname{CH}}^d(X)\times \operatorname{H}^d(X;\mathbf{W})\xrightarrow{\operatorname{pr}_1} \widetilde{\operatorname{CH}}^d(X)\to\operatorname{CH}^d(X),
\]
the obstruction class from Theorem~\ref{thm:euler2} maps to the Edidin--Graham Euler class of \cite{edidin:graham}.
\item
If $F$ is quadratically closed, then $(\mathscr{P},\phi)$ splits off a hyperbolic plane if and only if its Edidin--Graham Euler class is trivial.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) We use the characterization of Euler classes for quadratic forms in \cite[Section 4, Definition 1 and Proposition 2]{edidin:graham}: if $\mathscr{V}$ is a rank $d$ vector bundle on a smooth scheme $X$ and $\mathscr{P}:=\mathscr{V}\oplus\mathscr{V}^\vee$ the associated hyperbolic bundle with the evaluation form, then the Euler class of $(\mathscr{P},\operatorname{ev})$ is, up to sign, the top Chern class $\operatorname{c}_d(\mathscr{V})$. It thus suffices to prove that the obstruction class for a hyperbolic bundle reduces to the top Chern class. Now if $\mathscr{V}$ is a rank $d$ vector bundle on a smooth scheme $X$, the obstruction to splitting off a trivial line from $\mathscr{V}$ is the same as the obstruction to reducing the structure group for the hyperbolic bundle $(\mathscr{V}\oplus\mathscr{V}^\vee,\operatorname{ev})$ from $\operatorname{SO}(2d)$ to $\operatorname{SO}(2d-1)$. This follows directly from the homotopy cartesian square of classifying spaces after Lemma~\ref{lem:cart}. Consequently, in the case of hyperbolic bundles as above, the natural projection $\operatorname{H}^d(X;\bm{\pi}^{\mathbb{A}^1}_{d-1}(\operatorname{V}_{2,2d}))\cong \widetilde{\operatorname{CH}}^d(X)\times \operatorname{H}^d(X;\mathbf{W})\xrightarrow{\operatorname{pr}_1} \widetilde{\operatorname{CH}}^d(X)$ maps the obstruction class of Theorem~\ref{thm:euler2} to the obstruction class for splitting off a line from $\mathscr{V}$. By \cite{AsokFaselEuler}, this is, up to a unit in the Grothendieck--Witt ring $\operatorname{GW}(F)$, the Euler class of $\mathscr{V}$. Now under the projection $\widetilde{\operatorname{CH}}^d(X)\to\operatorname{CH}^d(X)$, the Euler class maps to the top Chern class, up to sign. Together with the above characterization of the Edidin--Graham Euler class, this proves the claim.
(2) This follows from Theorem~\ref{thm:euler2} and (1). Since $F$ is quadratically closed, the natural projection $\widetilde{\operatorname{CH}}^d(X)\xrightarrow{\cong}\operatorname{CH}^d(X)$ is an isomorphism. The group $\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{W})$ is also trivial since we have an exact sequence
\[
\operatorname{CH}^{d-1}(X)\xrightarrow{\beta}\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d)\to \operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{W})\to 0,
\]
and $0=\operatorname{Ch}^d(X)/2\twoheadrightarrow\operatorname{H}^d_{\operatorname{Nis}}(X,\mathbf{I}^d)$. In particular, the $\mathbf{W}$-cohomological contribution to the obstruction class vanishes. Then (1) shows that the obstruction class in $\widetilde{\operatorname{CH}}^d(X)$ maps to the Edidin--Graham Euler class (up to sign) in $\operatorname{CH}^d(X)$, which proves the claim.
\end{proof}
\begin{remark}
The above result actually means that the obstruction class reduces to the Chow--Witt-theoretic Euler class of a maximal-rank isotropic subbundle, whenever it exists. However, it is not clear that this already uniquely determines the class as a characteristic class in the Chow--Witt ring. Further analysis of the Chow--Witt rings of classifying spaces ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)$ would be required for that.
\end{remark}
Here is a final example of the existence of quadratic bundles whose underlying vector bundles are trivial.
\begin{theorem}
Let $F$ be a perfect field of characteristic $\neq 2$. For any $m\geq 1$, there exists an oriented quadratic form of rank $8m$ over the quadric $\operatorname{Q}_{8m+1}$ which is generically trivial, stably nontrivial, and whose underlying vector bundle is trivial.
\end{theorem}
\begin{proof}
By \cite{schlichting:tripathi}, we have an identification
\[
[\operatorname{Q}_{2n+1},{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(\infty)]_{\mathscr{H}(F)}\cong (\mathbf{GW}^0_n)_{-n-1}(F).
\]
This group is $\operatorname{GW}^{-n-1}_{-1}(F)\cong \operatorname{W}^{-n}(F)$, by the identification of negative higher Grothendieck--Witt groups with Balmer's triangular Witt groups. Consequently, there are no nontrivial maps unless $n\equiv 0\bmod 4$, and in this case we have $[\operatorname{Q}_{2n+1},{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(\infty)]_{\mathscr{H}(F)}\cong\operatorname{W}(F)$.
By the stabilization results, we have
\[
[\operatorname{Q}_{2n+1},{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)]_{\mathscr{H}(F)}\to [\operatorname{Q}_{2n+1},{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(\infty)]_{\mathscr{H}(F)}\to [\operatorname{S}^{n-1}\wedge\mathbb{G}_{\operatorname{m}}^{\wedge n+1},\operatorname{Q}_{2n}]=0,
\]
so in particular we have a surjection $[\operatorname{Q}_{2n+1},{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)]_{\mathscr{H}(F)}\twoheadrightarrow\operatorname{W}(F)$, and we can choose a nontrivial map $\operatorname{Q}_{2n+1}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)$ detected on $\operatorname{W}(F)$. This corresponds to an oriented quadratic form of rank $2n$ on $\operatorname{Q}_{2n+1}$ which is generically trivial but stably nontrivial. Now consider the composition
\[
\operatorname{Q}_{2n+1}\to{\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)\to{\operatorname{B}}\operatorname{GL}_{2n},
\]
which corresponds to taking the underlying vector bundle. This is well within the stable range, so we have $[\operatorname{Q}_{2n+1},{\operatorname{B}}\operatorname{GL}_{2n}]\cong (\mathbf{K}^{\operatorname{Q}}_n)_{-n-1}=0$, i.e., the quadratic form we have ``constructed'' has trivial underlying vector bundle.
\end{proof}
\begin{remark}
This is an algebraic version of the example, from David Chataur's answer to MathOverflow question 112764, of a real vector bundle which is stably nontrivial but has trivial characteristic classes. The above example has all characteristic classes in $\operatorname{CH}^\bullet({\operatorname{B}}\operatorname{O}(2n))$ trivial: either use the fact that these classes are generated by Chern classes of the underlying vector bundle, or use the fact that $\operatorname{CH}^{>0}(\operatorname{Q}_{2n+1})=0$ since $\operatorname{Q}_{2n+1}$ is $\mathbb{A}^1$-weakly equivalent to $\mathbb{A}^{n+1}\setminus\{0\}$. However, the triviality of the underlying vector bundle also implies that the Chow--Witt characteristic classes induced from ${\operatorname{B}}_{\operatorname{Nis}}\operatorname{SO}(2n)\to {\operatorname{B}}\operatorname{GL}_{2n}$ are all trivial as well. At this point, it is not clear if all characteristic classes in $\widetilde{\operatorname{CH}}^\bullet({\operatorname{B}}_{\operatorname{Nis}}\operatorname{O}(2n))$ are induced from the underlying vector bundle. As mentioned, the questions of Chow--Witt characteristic classes of quadratic forms will be discussed elsewhere.
\end{remark}
\section{Introduction} \label{sec_introduction}
There are several applications of systems where the dynamics are state-dependent including the repairman problem, retrial queues, chemical reactions, epidemic models, communication networks with state-dependent routing, call centers, etc. Even assuming Markovian properties, analysis of state-dependent systems is difficult. Therefore, typically fluid and diffusion approximations are used to obtain performance measures of these systems \citep{iglehart65,ethier86,mandelbaum98-2,mandelbaum02,Whitt04,Whitt06b}. These fluid and diffusion models are obtained by utilizing Functional Strong Law of Large Numbers (FSLLN) and Functional Central Limit Theorem (FCLT) which are well summarized in \citet{bill99} and \citet{Whitt02}. Using these models, one can investigate the asymptotic behavior of the system state which can be a good approximation to the original system under certain specific conditions (e.g. heavy traffic, large number of servers, etc). The previous studies mentioned above have a common feature in that they utilize FSLLN and FCLT. They, however, have different aims, scenarios, and assumptions. This paper specifically focuses on the fluid and diffusion models for the non-stationary (i.e. time-dependent or transient) and state-dependent exponential queueing systems similar to those in \citet{mandelbaum95} and \citet{mandelbaum98}. \\
In fact, some rate functions describing the system dynamics are of the form $\min(\cdot,\cdot)$ or $\max(\cdot,\cdot)$, which are not differentiable everywhere. Notice that to apply the seminal weak convergence result in \citet{kurtz78}, we require differentiability of the rate functions, which is not satisfied in most non-stationary and state-dependent queueing systems. To extend the theory to non-smooth rate functions, \citet{mandelbaum98} proves the weak convergence by introducing a new derivative called the ``scalable Lipschitz derivative'' and provides models for several queueing systems such as Jackson networks, multiserver queues with abandonments and retrials, multiclass priority preemptive queues, etc. In addition, several sets of differential equations are also provided to obtain mean values and the covariance matrix of the limit process. It, however, turns out that the resulting sets of differential equations are not computationally tractable in general and hence the theorems cannot directly be applied to obtain performance measures numerically. In a follow-on paper, \citet{mandelbaum02} provides numerical results for queue lengths and waiting times in multiserver queues with abandonments and retrials by including an additional assumption to deal with computational tractability. Specifically, that paper assumes that the set of time points at which the fluid model hits non-differentiable points has measure zero, which enables applying Kurtz's diffusion models. However, as pointed out in \citet{mandelbaum02}, if the system stays in a critically loaded state (non-differentiable points) for a long time, also called ``lingering'', their approach may cause significant inaccuracy. Before describing our goal, it is worthwhile to summarize the above results and point out possible problems:
\begin{itemize}
\item On one hand, \citet{mandelbaum98} provides rigorous theory to obtain the fluid and diffusion models for systems with non-smooth rate functions. On the other hand, it is in general not computationally feasible to solve the resulting set of differential equations to obtain performance measures numerically.
\item The additional assumption of measure zero in \citet{mandelbaum02} (also see \citet{massey02}) provides a computationally tractable way to obtain performance measures. However, when this assumption is not valid, it might cause inaccurate results.
\end{itemize}
The goal of this paper is to devise a technique that strikes a balance between accuracy and computational tractability, leveraging the fluid and diffusion models in \citet{kurtz78}. To do so, we explain when the fluid approximations might fail to achieve accuracy from a different point of view than those considered in \citet{mandelbaum98,mandelbaum02}. Our method does not depend on the smoothness of the rate functions and provides a condition under which mean values of the system state are estimated exactly. We apply our methodology to several queueing systems including not only multiserver queues with abandonments and retrials considered in \citet{mandelbaum98,mandelbaum02} but also more complex queueing systems such as multiclass priority preemptive queues (considered but not numerically investigated in \citet{mandelbaum98}) and peer-based networks in multimedia distribution. Here, we emphasize that this paper does NOT aim at proving another weak convergence to a limit process but instead provides a practically effective methodology to increase accuracy in performance measures such as mean values and the covariance matrix of the system state. Our paper differs from previous research in that we:
\begin{itemize}
\item address possible inaccuracy of the fluid model which might occur irrespective of the smoothness of rate functions,
\item solve the fluid model directly by providing a methodology to estimate mean values exactly unlike previous research where the fluid model is unchanged and is complemented by the expected value of the diffusion model, and
\item devise an effective algorithm transforming the fluid and diffusion models, which achieves not only increased accuracy but also computational feasibility.
\end{itemize}
We now describe the organization of this paper. In Section \ref{sec_fluiddiffusion}, we explain the fluid and diffusion models in \citet{kurtz78} and \citet{mandelbaum98}, and describe their limited applicability in practice. In Section \ref{sec_adjustedfluid}, we provide a methodology to estimate exact mean values of the system state. However, this would not immediately result in a computationally feasible approach. For that, in Section \ref{sec_heuristic}, we explain our algorithm based on Gaussian density to achieve computational tractability and the benefits of using Gaussian density. In Section \ref{sec_application}, we show how our proposed method works for the queueing system described in \citet{mandelbaum02} by comparing our method with theirs. In Section \ref{sec_additional}, we provide some numerical results for more complex queueing systems where we have reasons to believe our methodology may not be accurate, and for these cases, we determine the performance of our approach. Finally, in Section \ref{sec_conclusion}, we make concluding remarks and explain directions for future work.
\section{Recapitulating fluid and diffusion approximations} \label{sec_fluiddiffusion}
Before explaining our results, we recapitulate the fluid and diffusion approximations developed by \citet{kurtz78} that we leverage for our methodology. As a matter of fact, the diffusion model developed in \citet{kurtz78} is not directly applicable in many queueing systems because it requires differentiability of the rate functions, which is sometimes not satisfied. Therefore, we also briefly mention the result in \citet{mandelbaum98} which extends Kurtz's result to models involving non-smooth rate functions. Further, it is worthwhile to note that for $n\in \mathbf{N}$, the state of the queueing system $X_n(t)$ includes jumps but the limit process is continuous. Therefore, the weak convergence result that is presented is with respect to the uniform topology on $D$ (\citet{bill99} and \citet{Whitt02}).\\
Let $X(t)$ be a $d$-dimensional stochastic process which is the solution to the following integral equation:
\begin{eqnarray}
X(t) = x_0 + \sum_{i=1}^{k} l_i Y_{i} \bigg(\int_{0}^{t} f_{i}\big(s,X(s)\big)ds \bigg), \label{eqn_001}
\end{eqnarray}
where $x_0 = X(0)$ is a constant, $Y_{i}$'s are independent rate $1$ Poisson processes, $l_i \in \mathbf{Z}^d$ for $i \in \{1,2,\ldots, k\}$ are constants, and $f_i$'s are continuous functions such that $|f_i(t,x)| \le C_i (1+|x|)$ for some $C_i < \infty$, $t\le T$ and $T<\infty$.
Note that we just consider a finite number of $l_i$'s to simplify proofs, which is reasonable for real world applications.
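Before turning to the scaled processes, it may help to see how a sample path of the representation in (\ref{eqn_001}) can be generated directly. The following minimal Python sketch is our own illustration and not part of the model: the function name, the single-station example rates, and the treatment of the rates as constant between events are all assumptions made purely for the example.
\begin{verbatim}
import numpy as np

def simulate_time_change(x0, jumps, rates, T, seed=0):
    # Event-by-event simulation of
    #   X(t) = x0 + sum_i l_i * Y_i( int_0^t f_i(s, X(s)) ds ).
    # Rates are treated as constant between events, which is exact for
    # time-homogeneous f_i and an approximation for time-varying f_i.
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.asarray(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        r = np.array([f(t, x) for f in rates])   # current rates f_i(t, x)
        total = r.sum()
        if total <= 0.0:
            break
        t += rng.exponential(1.0 / total)        # holding time until next event
        if t >= T:
            break
        i = rng.choice(len(rates), p=r / total)  # which transition fires
        x = x + np.asarray(jumps[i], dtype=float)  # apply the jump l_i
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Illustrative single-station example: arrivals at rate 5, three servers of rate 2.
jumps = [[1.0], [-1.0]]
rates = [lambda t, x: 5.0, lambda t, x: 2.0 * min(x[0], 3.0)]
times, states = simulate_time_change([0.0], jumps, rates, T=10.0)
\end{verbatim}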
It is usually not tractable to solve the integral equation (\ref{eqn_001}). Therefore, to approximate the $X(t)$ process, define a sequence of stochastic processes $\{X_n(t)\}$ which satisfy the following integral equation:
\begin{eqnarray*}
X_n(t) = x_0 + \sum_{i=1}^{k} \frac{1}{n} l_i Y_{i} \bigg(\int_{0}^{t} n f_{i}\big(s, X_n(s)\big)ds \bigg). \label{eqn_002}
\end{eqnarray*}
Typically the process $X_n(t)$ (usually called a scaled process) is obtained by making the event rates $n$ times faster and the increments of the system state $1/n$ times as large. This type of scaling is used in the literature and is referred to as ``uniform acceleration'' in \citet{massey98} and \citet{mandelbaum98,mandelbaum02}. Then, the following theorem provides the fluid model to which $\{X_n(t)\}$ converges almost surely as $n\rightarrow \infty$. Define
\begin{eqnarray}
F(t,x) = \sum_{i=1}^{k} l_i f_{i}(t,x) \label{eqn_F}.
\end{eqnarray}
\begin{theorem}[Fluid model, \citet{kurtz78}] \label{theo_fluid}
Suppose there is a constant $M < \infty$ such that $|F(t,x)-F(t,y)| \le M|x-y|$ for all $t \le T$ and $T<\infty$. Then, $\lim_{n \rightarrow \infty} X_n(t) = \bar{X}(t)$ a.s., where $\bar{X}(t)$ is the solution to the following integral equation:
\begin{eqnarray*}
\bar{X}(t) = x_0 + \sum_{i=1}^{k} l_i \int_{0}^{t} f_{i}\big(s, \bar{X}(s)\big)ds. \label{eqn_003}
\end{eqnarray*}
\end{theorem}
Note that $\bar{X}(t)$ is a deterministic time-varying quantity. We will subsequently connect $\bar{X}(t)$ and $X(t)$ defined in equation (\ref{eqn_001}), but before that we provide the following result. Once we have the fluid model, we can obtain the diffusion model from the scaled centered process ($D_n(t)$). Define $D_n(t)$ to be $\sqrt{n}\big(X_n(t) - \bar{X}(t)\big)$. Then, the limit process of $D_n(t)$ is provided by the following theorem.
\begin{theorem}[Diffusion model, \citet{kurtz78}] \label{theo_diffusion}
If $f_i$'s and $F$, for some $M<\infty$, satisfy
\begin{eqnarray*}
|f_i(t,x)-f_i(t,y)| \le M|x-y| \quad \textrm{and} \quad \bigg| \frac{\partial}{\partial x_j}F(t,x)\bigg| \le M, \qquad \textrm{for } i \in \{1,\ldots,k\}, \ j \in \{1,\ldots,d\}, \textrm{ and } 0\le t\le T,
\end{eqnarray*}
then $\lim_{n \rightarrow \infty} D_n(t) = D(t)$ where $D(t)$ is the solution to
\begin{eqnarray*}
D(t) = \sum_{i=1}^{k} l_i \int_{0}^{t} \sqrt{f_i\big(s,\bar{X}(s)\big)}dW_i(s) + \int_{0}^{t} \partial F\big(s,\bar{X}(s)\big)D(s) ds, \label{eqn_004}
\end{eqnarray*}
$W_i(\cdot)$'s are independent standard Brownian motions, and $\partial F(t,x)$ is the gradient matrix of $F(t,x)$ with respect to $x$.
\end{theorem}
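As a concrete illustration (our own construction, not taken from \citet{kurtz78}), consider an infinite-server queue with arrival rate $\lambda$ and per-customer service rate $\mu$, so that $f_1=\lambda$, $f_2=\mu x$, $l_1=+1$, $l_2=-1$, $F(t,x)=\lambda-\mu x$ and $\partial F/\partial x=-\mu$; the fluid limit with $\bar{X}(0)=0$ is $\bar{X}(t)=(\lambda/\mu)(1-e^{-\mu t})$. A minimal Euler--Maruyama discretization of the limiting diffusion $D(t)$ is sketched below; the step size and parameter values are illustrative.
\begin{verbatim}
import numpy as np

lam, mu, T, dt = 5.0, 1.0, 10.0, 1e-3
rng = np.random.default_rng(1)

def xbar(t):                                  # closed-form fluid limit
    return (lam / mu) * (1.0 - np.exp(-mu * t))

n_steps = int(T / dt)
D = np.zeros(n_steps + 1)                     # Euler--Maruyama path of D(t)
for k in range(n_steps):
    t = k * dt
    drift = -mu * D[k]                        # (dF/dx along the fluid path) * D
    diffusion = (np.sqrt(lam) * rng.normal(0.0, np.sqrt(dt))
                 - np.sqrt(mu * xbar(t)) * rng.normal(0.0, np.sqrt(dt)))
    D[k + 1] = D[k] + drift * dt + diffusion  # one Euler--Maruyama step
\end{verbatim}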
\begin{remark} \label{rem_nondiff}
Theorem \ref{theo_diffusion} requires that $F(\cdot,\cdot)$ has a continuous gradient matrix. Therefore, if we don't have such an $F$, then we cannot apply Theorem \ref{theo_diffusion} directly to obtain the diffusion model.
\end{remark}
\begin{remark} \label{rem_gaussian}
According to \citet{ethier86}, if $D(0)$ is a constant or a Gaussian random vector, then $D(t)$ is a Gaussian process.
\end{remark}
Now, we have the fluid and diffusion models for $X_n(t)$. Therefore, for a large $n$, $X_n(t)$ is approximated by
\begin{eqnarray*}
X_n(t) \approx \bar{X}(t) + \frac{D(t)}{\sqrt{n}}. \label{eqn_005}
\end{eqnarray*}
If we follow this approximation, we can also approximate the mean and covariance matrix of $X_n(t)$ denoted by $E\big[X_n(t)\big]$ and $Cov \big[X_n(t),X_n(t)\big]$ respectively as
\begin{eqnarray}
E\big[X_n(t)\big] &\approx& \bar{X}(t) + \frac{E\big[D(t)\big]}{\sqrt{n}}, \label{eqn_006} \\
Cov \big[X_n(t),X_n(t)\big] &\approx& \frac{Cov \big[D(t),D(t)\big]}{n}. \label{eqn_007}
\end{eqnarray}
In equations (\ref{eqn_006}) and (\ref{eqn_007}), only $\bar{X}(t)$ is known. Therefore, in order to get approximated values of $E\big[X_n(t)\big]$ and $Cov \big[X_n(t),X_n(t)\big]$, we need to obtain $E\big[D(t)\big]$ and $Cov\big[D(t),D(t)\big]$. The following theorem provides a methodology to obtain $E\big[D(t)\big]$ and $Cov \big[D(t),D(t)\big]$.
\begin{theorem}[Mean and covariance matrix of linear stochastic systems, \citet{arnold92}] \label{theo_moment}
Let $Y(t)$ be the solution to the following linear stochastic differential equation.
\begin{eqnarray*}
dY(t) = A(t)Y(t)dt + B(t)dW(t), \quad Y(0)=0, \label{eqn_008}
\end{eqnarray*}
where $A(t)$ is a $d \times d$ matrix, $B(t)$ is a $d \times k$ matrix, and $W(t)$ is a $k$-dimensional standard Brownian motion.
Let $M(t) = E\big[Y(t)\big]$ and $\Sigma(t) = Cov\big[Y(t), Y(t)\big]$. Then, $M(t)$ and $\Sigma(t)$ are the solution to the following ordinary differential equations:
\begin{eqnarray}
\frac{d}{dt}M(t) &=& A(t) M(t) \label{eqn_009} \nonumber \\
\frac{d}{dt}\Sigma(t) &=& A(t) \Sigma(t) + \Sigma(t) A(t)' + B(t)B(t)'. \label{eqn_010}
\end{eqnarray}
\end{theorem}
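For completeness, the following sketch (our own illustration; the function and variable names are not from \citet{arnold92}) integrates the equations for $M(t)$ and $\Sigma(t)$ numerically by stacking them into one state vector.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def moment_odes(A, B, T, d, M0=None, Sigma0=None):
    # Integrates dM/dt = A(t) M and dSigma/dt = A(t) Sigma + Sigma A(t)' + B(t) B(t)'.
    M0 = np.zeros(d) if M0 is None else np.asarray(M0, dtype=float)
    Sigma0 = np.zeros((d, d)) if Sigma0 is None else np.asarray(Sigma0, dtype=float)

    def rhs(t, y):
        M, Sigma = y[:d], y[d:].reshape(d, d)
        At, Bt = A(t), B(t)
        dM = At @ M
        dSigma = At @ Sigma + Sigma @ At.T + Bt @ Bt.T
        return np.concatenate([dM, dSigma.ravel()])

    y0 = np.concatenate([M0, Sigma0.ravel()])
    return solve_ivp(rhs, (0.0, T), y0, dense_output=True)

# Illustrative 1-dimensional example: A(t) = -1, B(t) = sqrt(2), for which
# Sigma(t) = 1 - exp(-2t).
sol = moment_odes(lambda t: np.array([[-1.0]]),
                  lambda t: np.array([[np.sqrt(2.0)]]), T=5.0, d=1)
\end{verbatim}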
\begin{corollary} \label{cor_moment}
If $M(0)=0$, then $M(t) = 0$ for $t \ge 0$.
\end{corollary}
By Corollary \ref{cor_moment}, if $D(0)=0$, then $E\big[D(t)\big] = 0$ for $t \ge 0$. Therefore, if $\bar{X}(0) = X(0) = x_0$, then we can rewrite (\ref{eqn_006}) to be
\begin{eqnarray*}
E\big[X_n(t)\big] &\approx& \bar{X}(t). \label{eqn_011}
\end{eqnarray*}
Recalling Remark \ref{rem_nondiff}, the diffusion model in \citet{kurtz78} requires differentiability of rate functions. Otherwise, we cannot apply Theorem \ref{theo_diffusion}. To address this problem, \citet{mandelbaum98} introduces a new derivative called the ``scalable Lipschitz derivative'' and proves the weak convergence using it. Unlike the result in \citet{kurtz78}, it turns out that the diffusion limit may not be a Gaussian process when rate functions are not differentiable everywhere. In \citet{mandelbaum98}, expected values of the diffusion model may not be zero (compare with Corollary \ref{cor_moment}) and can compensate for the inaccuracy of the fluid model (see \citet{mandelbaum02}). The resulting differential equations for the diffusion model, however, are computationally intractable. For example, in \citet{mandelbaum98}, one of the differential equations has the following form:
\begin{eqnarray}
\frac{d}{dt}E\big[Q_1^{(1)}(t)\big] &=& (\mu_t^1 \mathbf{1}_{\{Q_1^{(0)} \le n_t\}} + \beta_t \mathbf{1}_{\{Q_1^{(0)} > n_t\}})E\big[Q_1^{(1)}(t)^-\big] \nonumber \\
&& - (\mu_t^1 \mathbf{1}_{\{Q_1^{(0)} < n_t\}} + \beta_t \mathbf{1}_{\{Q_1^{(0)} \ge n_t\}})E\big[Q_1^{(1)}(t)^+\big] + \mu_t^2E\big[Q_2^{(1)}(t)\big], \label{eqn_actdiff}
\end{eqnarray}
rendering it intractable.\\
Therefore, \citet{mandelbaum02}, as we understand it, resorts to the method in \citet{kurtz78} by assuming measure zero at non-smooth points to avoid computational difficulties. As described in Section \ref{sec_introduction}, in this paper, our objective is to give the fluid and diffusion models a fresh look from an alternative perspective, and suitably adjust them for non-asymptotic scenarios. This is presented in the next section.
\section{Adjusted fluid model} \label{sec_adjustedfluid}
In this section, we first explain the possibility of inaccuracy when obtaining mean values of the system state using the fluid model. Then, we provide an adjusted fluid model to estimate exact mean values. To this end, we consider the actual integral equation and obtain the exact value of $E\big[X(t)\big]$ in the following theorem.
\begin{theorem}[Expected value of $X(t)$] \label{theo_exp}
Consider $X(t)$ defined in equation (\ref{eqn_001}). Then, for $t\le T$, $E\big[X(t)\big]$ is the solution to the following integral equation.
\begin{eqnarray}
E\big[X(t)\big] = x_0 + \sum_{i=1}^k l_i \int_{0}^{t}E\Big[f_i\big(s,X(s)\big)\Big] ds \label{eqn_013}
\end{eqnarray}
\begin{proof}
Take expectation on both sides of equation (\ref{eqn_001}). Then,
\begin{eqnarray}
E\big[X(t)\big] &=& x_0 + \sum_{i=1}^k l_i E\Bigg[Y_i\bigg(\int_{0}^{t}f_i\big(s,X(s)\big)ds \bigg)\Bigg] \nonumber \\
&=& x_0 + \sum_{i=1}^k l_i E\bigg[\int_{0}^{t}f_i\big(s, X(s)\big)ds \bigg] \textrm{ since the $Y_i(\cdot)$'s are unit-rate Poisson processes} \nonumber \\
&=& x_0 + \sum_{i=1}^k l_i \int_{0}^{t}E\Big[f_i\big(s, X(s)\big)\Big] ds \textrm{ by Fubini theorem in \citet{folland99}.} \nonumber
\end{eqnarray}
This proves the theorem.
\end{proof}
\end{theorem}
Comparing Theorems \ref{theo_fluid} and \ref{theo_exp}, notice that we cannot conclude that $\bar{X}(t)$ in Theorem \ref{theo_fluid} and $E\big[X(t)\big]$ in Theorem \ref{theo_exp} are close since, in general, $E\big[f_i(t, X(t))\big] \neq f_i\big(t, E[X(t)]\big)$. In some applications, the $f_i$'s might be constants or linear combinations of components of $X(t)$. In those cases, Theorem \ref{theo_exp} and the following corollary imply that the fluid model gives the exact mean values of the system state.
\begin{corollary} \label{cor_exp}
If the $f_i(t,x)$'s are constants or linear combinations of the components of $x$, then
\begin{eqnarray}
E[X(t)] = \bar{X}(t), \nonumber
\end{eqnarray}
where $X(t)$ is the solution to (\ref{eqn_001}) and $\bar{X}(t)$ is the deterministic fluid model from Theorem \ref{theo_fluid}.
\begin{proof}
Using the linearity of expectation (see \citet{williams91}), we obtain the same integral equation for both $E\big[X(t)\big]$ and $\bar{X}(t)$.
\end{proof}
\end{corollary}
However, if we have other forms of $f_i$'s for which $E\big[f_i(t, X(t))\big] \neq f_i\big(t, E[X(t)]\big)$, then the fluid model can be inaccurate. As seen in Section \ref{sec_fluiddiffusion}, the fluid model does not require differentiability of rate functions in either \citet{kurtz78} or \citet{mandelbaum98}. In \citet{mandelbaum98}, the diffusion model can contribute to mean values of the system state. However, as seen in equation (\ref{eqn_actdiff}), the differential equations to obtain mean values of the diffusion limit are not computationally tractable. Even if they were numerically solvable, the mean value of the diffusion limit is zero up to the time the fluid limit first hits a non-differentiable point. We will show in Section \ref{sec_additional} that inaccuracy begins to occur before the fluid limit hits that point. Therefore, we approach this problem from a different point of view.\\
The basic idea of our approach is to construct a new process ($Z(t)$) so that its fluid model is exactly the same as the mean values of the original process $X(t)$ as described in Theorem \ref{theo_exp}. Define $\mathbb{F}$ to be the set of all distribution functions on $\mathbf{R}^d$ that have a finite mean and covariance matrix. This set is appropriate here since the conditions on the $f_i$'s guarantee that $E\big[|X(t)|\big] < \infty$ and $|Cov[X(t),X(t)]| < \infty$ for all $t \le T$. Define a subset $\mathbb{F}_0$ of $\mathbb{F}$ such that any $f \in \mathbb{F}_0$ has zero mean. We call an element of $\mathbb{F}_0$ a ``base distribution'' for the remainder of this paper.
\begin{proposition} \label{prop_exp}
$E\big[f_i(t,X(t))\big]$ can be represented as a function of $E[X(t)]$ for $t \le T$.
\begin{proof}
For fixed $t_0 \le T$, suppose the distribution of $X(t_0)$ is $F$. Then, $F \in \mathbb{F}$. For $F \in \mathbb{F}$, we can always find $F_0 \in \mathbb{F}_0$ such that $F(x) = F_0(x-\mu)$ where $\mu = E[X(t_0)] = \int_{\mathbf{R}^d} x dF$. Then,
\begin{eqnarray*}
E\big[f_i(t_0,X(t_0))\big] &=& \int_{\mathbf{R}^d} f_i(t_0,x) dF \\
&=& \int_{\mathbf{R}^d} f_i(t_0,x+\mu) dF_0.
\end{eqnarray*}
Since the integration eliminates $x$, by letting $t_0$ and $\mu$ vary (i.e. substituting $t$ and $\mu(t)$ for $t_0$ and $\mu$, respectively), we have
\begin{eqnarray*}
E\big[f_i(t,X(t))\big] = g_i(t,\mu(t)), \textrm{ for some function } g_i.
\end{eqnarray*}
\end{proof}
\end{proposition}
\begin{remark}
Proposition \ref{prop_exp} does not mean that $\mu(\cdot)$ completely identifies the function $g_i(\cdot,\cdot)$. In fact, the function $g_i(\cdot,\cdot)$ might be unknown unless the base distribution is identified but we can say that such a function $g_i(\cdot,\cdot)$ exists.
\end{remark}
For $t \le T$, let $\mu(t) = E\big[X(t)\big]$. Let $g_i\big(t,\mu(t)\big) = E\big[f_i(t,X(t))\big]$ for $i \in \{1, \ldots, k\}$. Then, we can construct a new stochastic process $Z(t)$, with initial value $z_0 = Z(0)$, which is the solution to the following integral equation:
\begin{eqnarray}
Z(t) = z_0 + \sum_{i=1}^{k} l_i Y_{i} \bigg(\int_{0}^{t} g_{i}\big(s,Z(s)\big)ds \bigg). \label{eqn_014}
\end{eqnarray}
Based on equation (\ref{eqn_014}), define a sequence of stochastic processes $\{Z_n(t)\}$ satisfying
\begin{eqnarray}
Z_n(t) = x_0 + \sum_{i=1}^{k} \frac{1}{n} l_i Y_{i} \bigg(\int_{0}^{t} n g_{i}\big(s, Z_n(s)\big)ds \bigg). \label{eqn_adjseq}
\end{eqnarray}
Next, we would like to obtain the fluid model for $Z_n(t)$. Before doing that, however, we need to check whether the functions $g_i$ satisfy the conditions required to apply Theorem \ref{theo_fluid}. The following lemmas show that the $g_i$'s meet these conditions.
\begin{lemma} \label{lem_condition1}
If $|f_i(t,x)| \le C_i (1+|x|)$ for $t\le T$, then $g_i(t,x)$'s satisfy
\begin{eqnarray*}
|g_i(t,x)| &\le& D_i (1+|x|) \quad \textrm{for some } D_i < \infty. \label{eqn_015}
\end{eqnarray*}
\begin{proof}
To prove this lemma, we need to show that $E\big[|X(t)|\big] \le K\Big(1+\big|E\big[X(t)\big]\big|\Big)$ for some $K < \infty$ and all $t \le T$. We first show it in the one-dimensional case and then extend it to the $d$-dimensional case.\\
For fixed $t_0\le T$, let $X = X(t_0)$ have mean $\mu$ and variance $\sigma^2$, and write $f_i(X) = f_i\big(t_0,X(t_0)\big)$. Then, by the Cauchy--Schwarz inequality,
\begin{eqnarray}
E\big[|X|\big] \le \sqrt{E[X^2]} = \sqrt{\mu^2 + \sigma^2} \le |\mu| + \sigma \le D (1 + |\mu|) \quad \textrm{for } D = \max(1,\sigma) \label{eqn_016}.
\end{eqnarray}
Now, we have the one-dimensional case and can move to the $d$-dimensional case. Suppose $X$ has a mean vector $\mu$ and a covariance matrix $\Sigma$ such that $X=(x_1, \ldots, x_d)'$, $\mu = (\mu_1, \ldots, \mu_d)'$. Then,
\begin{eqnarray}
E\big[|X|\big] &=& E\bigg[\sqrt{\sum_{i=1}^{d}x_i^2}\bigg] \le E\bigg[\sum_{i=1}^{d}|x_i|\bigg] = \sum_{i=1}^{d} E\big[|x_i|\big] \nonumber \\
&\le& D\bigg(d+ \sum_{i=1}^{d} |\mu_i|\bigg) \quad \textrm{by (\ref{eqn_016})} \qquad \textrm{for } D = \max(1,\sigma_1, \ldots, \sigma_d) \nonumber \\
&\le& D\bigg(d+ d\sqrt{\sum_{i=1}^{d} \mu_{i}^{2}}\bigg) \quad \textrm{by the Cauchy--Schwarz inequality} \nonumber \\
&=& Dd\big(1+|\mu|\big) \label{eqn_017}.
\end{eqnarray}
Now we have $E\big[|X|\big] \le K\Big(1+\big|E[X]\big|\Big)$ for the $d$-dimensional random vector $X$ where $K=Dd$. Then,
\begin{eqnarray}
\Big|E\big[f_i(X)\big]\Big| &\le& E\Big[\big|f_i(X)\big|\Big] \le C_i + C_i E\big[|X|\big] \quad \textrm{from assumption} \nonumber \\
&\le& C_i + C_i K \big(1 + |\mu|\big) \nonumber \le D_i \big(1 + |\mu|\big) \quad \textrm{for } D_i = C_i + C_iK \quad \textrm{by equation (\ref{eqn_017})} \label{eqn_018}\nonumber
\end{eqnarray}
Note that $g_i(t_0,\mu) = E\big[f_i(X)\big]$. Since $|\Sigma|$ is bounded for $t\le T$, letting $t_0$ be arbitrary proves the lemma.
\end{proof}
\end{lemma}
For the next lemma, we would like to define
\begin{eqnarray}
G(t,x) = \sum_{i=1}^k l_i g_i(t,x). \label{eqn_G}
\end{eqnarray}
\begin{lemma} \label{lem_condition2}
For $t\le T$, if $|f_i(t,x)-f_i(t,y)| \le M |x-y|$, then $g_i(t,x)$'s satisfy
\begin{eqnarray*}
|g_i(t,x) - g_i(t,y)| \le M|x-y|,
\end{eqnarray*}
and if $|F(t,x)-F(t,y)| \le M |x-y|$, then $G(t,x)$ satisfies
\begin{eqnarray*}
|G(t,x) - G(t,y)| \le M|x-y|. \label{eqn_019}
\end{eqnarray*}
\begin{proof}
For fixed $t_0 \le T$, let $X = X(t_0)$ and $Y = Y(t_0)$ and suppose $X$ and $Y$ have the same base distribution $H_0$ (we use $H$ instead of $F$ to avoid confusion with $F$ in (\ref{eqn_F})), where $E[X] = \mu_1$ and $E[Y]=\mu_2$. Then, the distribution $H_1$ of $X$ and the distribution $H_2$ of $Y$ satisfy
\begin{eqnarray*}
H_1(x) &=& H_0(x-\mu_1), \quad \textrm{and} \\
H_2(y) &=& H_0(y-\mu_2),
\end{eqnarray*}
respectively.
Now, we have
\begin{eqnarray*}
\Big|E\big[F(X)\big]-E\big[F(Y)\big]\Big| &=& \bigg|\int_{\mathbf{R}^d} F(x) dH_1 - \int_{\mathbf{R}^d} F(y) dH_2\bigg|. \label{eqn_021}
\end{eqnarray*}
By transforming variables,
\begin{eqnarray}
\Big|E\big[F(X)\big]-E\big[F(Y)\big]\Big| &=& \bigg|\int_{\mathbf{R}^d} F(x+\mu_1) dH_0 - \int_{\mathbf{R}^d} F(y+\mu_2) dH_0\bigg| \nonumber \\
&=& \bigg|\int_{\mathbf{R}^d} \big(F(x+\mu_1) - F(x+\mu_2)\big) dH_0\bigg| \quad \textrm{by linearity}, \nonumber \\
&\le& \int_{\mathbf{R}^d} \bigg|\big(F(x+\mu_1) - F(x+\mu_2)\big)\bigg| dH_0 \nonumber \\
&\le& M \int_{\mathbf{R}^d} |\mu_1-\mu_2| dH_0 = M|\mu_1 - \mu_2| \quad \textrm{by assumption}. \label{eqn_022} \nonumber
\end{eqnarray}
Note $G\big(t_0, \mu_1 \big) = E\big[F(X)\big]$ and $G\big(t_0, \mu_2 \big) = E\big[F(Y)\big]$. Then, by making $t_0>0$ arbitrary, we prove the second part, i.e. if $|F(t,x)-F(t,y)|\le M|x-y|$ then $|G(t,x)-G(t,y)|\le M|x-y|$.
We can prove the first part, i.e. if $|f_i(t,x)-f_i(t,y)| \le M |x-y|$, then $|g_i(t,x) - g_i(t,y)| \le M|x-y|$, in a similar fashion and hence we have the lemma.
\end{proof}
\end{lemma}
Lemmas \ref{lem_condition1} and \ref{lem_condition2} show that if the $f_i$'s satisfy the conditions needed to obtain the fluid limit of $X_n(t)$, then the $g_i$'s satisfy the corresponding conditions for the fluid model of $Z_n(t)$. Therefore, we are now able to provide the adjusted fluid model based on Lemmas \ref{lem_condition1} and \ref{lem_condition2}.
\begin{theorem}[Adjusted fluid model] \label{theo_modfluid}
Assume
\begin{eqnarray}
\big|f_i(t,x)\big| &\le& C_i\big(1+|x|\big) \quad \textrm{for } i\in \{1,\ldots, k\}, \label{eqn_024}\\
\big|F(t,x)-F(t,y)\big| &\le& M|x-y|. \label{eqn_025}
\end{eqnarray}
Then, $\lim_{n \rightarrow \infty} Z_n(t) = \bar{Z}(t)$ a.s., where $\bar{Z}(t)$ is the solution to the following integral equation:
\begin{eqnarray}
\bar{Z}(t) = x_0 + \sum_{i=1}^{k} l_i \int_{0}^{t} g_{i}\big(s, \bar{Z}(s)\big)ds, \label{eqn_026}
\end{eqnarray}
and furthermore
\begin{eqnarray}
\bar{Z}(t) = E\big[X(t)\big] = x_0 + \sum_{i=1}^k l_i \int_{0}^{t}E\Big[f_i\big(s,X(s)\big)\Big] ds. \label{eqn_027}
\end{eqnarray}
\begin{proof}
From Lemmas \ref{lem_condition1} and \ref{lem_condition2}, (\ref{eqn_024}) and (\ref{eqn_025}) imply
\begin{eqnarray*}
|g_i(t,x)|\le D_i (1+|x|) \quad \textrm{and} \quad |G(t,x) - G(t,y)| \le M|x-y|. \label{eqn_029}
\end{eqnarray*}
Therefore, by Theorem \ref{theo_fluid}, we have equation (\ref{eqn_026}), and by definition of $g_i(t,x)$'s, we have equation (\ref{eqn_027}).
\end{proof}
\end{theorem}
In Theorem \ref{theo_modfluid}, we have the same conditions for the $f_i$'s and $g_i$'s, and the $g_i(t,x)$'s do not guarantee $E\big[g_i(t,X(t))\big] = g_i\big(t,E[X(t)]\big)$ either. However, comparing equation (\ref{eqn_027}) with equation (\ref{eqn_013}) in Theorem \ref{theo_exp}, we notice that Theorem \ref{theo_modfluid}, via equation (\ref{eqn_027}), provides the exact value of $E\big[X(t)\big]$.\\
Though Theorem \ref{theo_modfluid} provides the exact estimation of $E\big[X(t)\big]$, we should identify the functions $g_i$'s in order to obtain these values numerically or analytically. The $g_i$'s, however, cannot be identified unless the base distribution is known, which forces us to develop an algorithm to find $g_i$'s. The following section will describe our Gaussian-based method which would also be useful to adjust the diffusion model.
\section{Gaussian-based adjustment} \label{sec_heuristic}
In Theorem \ref{theo_modfluid}, we encounter a fundamental problem in finding the $g_i$'s, i.e. we need to characterize the distribution of $X(t)$. There is, however, no clear way to find the exact distribution of $X(t)$ in general. Therefore, our proposed method starts by assuming the distribution of $X(t)$. Recall from Section \ref{sec_fluiddiffusion} that $X_n(t)$ is approximated by a Gaussian process when $x_0$ is a constant and $n$ is large. Though this is not true for $n=1$, we use a Gaussian density function to obtain the $g_i$'s with $z_0 = x_0$, since using the Gaussian density function provides the following three benefits:
\begin{enumerate}
\item In many applications, empirical densities are close to Gaussian density even if rate functions are not differentiable (see \citet{mandelbaum98-2}, \citet{mandelbaum02}).
\item Gaussian distribution can be completely characterized by the mean and covariance matrix which can be obtained from the fluid and diffusion models.
\item By using a Gaussian density, the $g_i$'s achieve smoothness even if the $f_i$'s are not smooth, which enables us to apply Theorem \ref{theo_diffusion} directly.
\end{enumerate}
The third benefit is not obvious and hence we provide a proof.
\begin{lemma} \label{lem_smooth}
Let $g_i$'s be the rate functions of $Z(t)$ obtained from Gaussian density. Then, $g_i$'s are differentiable everywhere.
\begin{proof}
Define
\begin{eqnarray}
\phi(x,y) = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp \bigg(-\frac{(y-x)'\Sigma^{-1}(y-x)}{2}\bigg).\nonumber
\end{eqnarray}
Using Gaussian density,
\begin{eqnarray*}
g_i(t,x) = \int_{\mathbf{R}^d} f_i(t,y) \phi(x,y) dy.
\end{eqnarray*}
For $j \in \{1,\ldots, d\}$, since $\phi(x,y)$ is differentiable with respect to $x_j$ and $|f_i(t,y) \frac{\partial}{\partial x_j} \phi(x,y)|$ is integrable,
\begin{eqnarray}
\frac{\partial}{\partial x_j}g_i(t,x) &=& \frac{\partial}{\partial x_j}\int_{\mathbf{R}^d} f_i(t,y) \phi(x,y) dy \nonumber\\
&=& \int_{\mathbf{R}^d} f_i(t,y) \frac{\partial}{\partial x_j} \phi(x,y) dy \quad \textrm{by applying Theorem 2.27 in \citet{folland99}}, \label{eqn_folland}
\end{eqnarray}
where $x_j$ is the $j^{\textrm{th}}$ component of $x$.\\
Therefore, $g_i$ is differentiable with respect to $x_j$. \\
\end{proof}
\end{lemma}
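To make the smoothing effect concrete, consider the one-dimensional kinked rate $f(x)=(x-n)^+$. Under a Gaussian density with mean $\mu$ and standard deviation $\sigma$, the standard truncated-Gaussian identity gives $E[(X-n)^+]=(\mu-n)\big(1-\Phi((n-\mu)/\sigma)\big)+\sigma\phi((n-\mu)/\sigma)$, which is smooth in $\mu$; this has the same form as the expressions for $g_3$--$g_5$ used in Section \ref{sec_application}. The sketch below (an illustration with made-up parameter values) compares the closed form with direct numerical integration.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def g_plus(mu, n, sigma):
    # Closed form of E[(X - n)^+] for X ~ N(mu, sigma^2).
    z = (n - mu) / sigma
    return (mu - n) * (1.0 - norm.cdf(z)) + sigma * norm.pdf(z)

def g_plus_numeric(mu, n, sigma):
    # The same expectation by direct numerical integration, for comparison.
    integrand = lambda x: max(x - n, 0.0) * norm.pdf(x, loc=mu, scale=sigma)
    value, _ = quad(integrand, mu - 10 * sigma, mu + 10 * sigma)
    return value

for mu in np.linspace(40.0, 60.0, 5):
    print(mu, g_plus(mu, 50.0, 3.0), g_plus_numeric(mu, 50.0, 3.0))
\end{verbatim}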
Now, we have $g_i(\cdot,\cdot)$'s which are differentiable. Then, we can apply Theorem \ref{theo_diffusion} to obtain the diffusion model for $Z_n(t)$.
\begin{proposition}[Adjusted diffusion model] \label{prop_moddiffusion} Let $g_i(\cdot,\cdot)$'s be the rate functions in $Z(t)$ obtained from Gaussian density. Define a sequence of scaled centered processes $\{V_n(t)\}$ for $t \le T$ to be
\begin{eqnarray*}
V_n(t) = \sqrt{n}\big(Z_n(t)-\bar{Z}(t)\big), \label{eqn_030}
\end{eqnarray*}
where $Z_n(t)$ and $\bar{Z}(t)$ are solutions to equations (\ref{eqn_adjseq}) and (\ref{eqn_026}) respectively. If $f_i(t,x)$'s and $F(t, x)$ satisfy equations (\ref{eqn_024}) and (\ref{eqn_025}) respectively, then
$\lim_{n \rightarrow \infty} V_n(t) = V(t)$, where
\begin{eqnarray*}
V(t) = \sum_{i=1}^{k} l_i \int_{0}^{t} \sqrt{g_i\big(s,\bar{Z}(s)\big)}dW_i(s) + \int_{0}^{t} \partial G\big(s,\bar{Z}(s)\big)V(s) ds, \label{eqn_031}
\end{eqnarray*}
$W_i(\cdot)$'s are independent standard Brownian motions, and $\partial G\big(t,\bar{Z}(t)\big)$ is the gradient matrix of $G\big(t,\bar{Z}(t)\big)$ with respect to $\bar{Z}(t)$. Furthermore, $V(t)$ is a Gaussian process.
\begin{proof}
From definition of $G(t,x)$ in (\ref{eqn_G}), we can easily verify that $G(t,x)$ is differentiable by Lemma \ref{lem_smooth} and hence $|G(t,x) - G(t,y)| \le M|x-y|$ implies
\begin{eqnarray*}
\bigg|\frac{\partial}{\partial x_i} G(t,x) \bigg| \le M_i \quad \textrm{for some } M_i < \infty, \ t \le T, \textrm{ and } i\in \{1,\ldots, d\}.
\end{eqnarray*}
Therefore, the proposition follows from Theorem \ref{theo_diffusion}.
\end{proof}
\end{proposition}
\begin{corollary} \label{cor_samedistribution}
If the $f_i$'s are constants or linear combinations of the components of $X(t)$, then
\begin{eqnarray*}
X(t) = Z(t) \quad \textrm{in distribution}. \label{eqn_032}
\end{eqnarray*}
\begin{proof}
Using the linearity of expectation, we can verify $g_i(t,x)= f_i(t,x)$ for $i\in\{1,\ldots,k\}$.
\end{proof}
\end{corollary}
Now we have the adjusted fluid and diffusion models obtained by utilizing a Gaussian density. Therefore, instead of assuming measure zero at a set of non-differentiable points (as done in \citet{mandelbaum02}), we compare the adjusted models with the empirical mean and covariance matrix. Note that when we stated Theorem \ref{theo_modfluid}, we did not consider $\Sigma(t)$, the covariance matrix of $X(t)$. However, under the Gaussian assumption, $\Sigma(t)$ characterizes the base distribution, and it can be obtained from Proposition \ref{prop_moddiffusion}. Therefore, we rewrite the $g_i$'s as functions of $t$, $\bar{Z}(t)$, and $\Sigma(t)$; i.e.
\begin{eqnarray}
g_{i}\big(t, \bar{Z}(t)\big) &\rightarrow& g_{i}\big(t, \bar{Z}(t), \Sigma(t) \big) \quad \textrm{for } i\in\{1,\ldots,k\} \textrm{ and} \label{eqn_036}\\
G\big(t, \bar{Z}(t)\big) &\rightarrow& G\big(t, \bar{Z}(t), \Sigma(t) \big). \label{eqn_037}
\end{eqnarray}
\begin{proposition}[Mean and covariance matrix] \label{prop_modmoment}
Let $Y(t) = \bar{Z}(t) + V(t)$. Then,
\begin{eqnarray}
E\big(Y(t)\big) &=& \bar{Z}(t) \quad \textrm{and} \label{eqn_038} \\
Cov\big(Y(t),Y(t)\big) &=& Cov\big(V(t),V(t)\big) = \Sigma(t). \label{eqn_039}
\end{eqnarray}
The quantities $\bar{Z}(t)$ and $\Sigma(t)$ are obtained by solving the following simultaneous ordinary differential equations with initial values given by $\bar{Z}(0) = x_0$ and $\Sigma(0)=0$:
\begin{eqnarray}
\frac{d}{dt}\bar{Z}(t) &=& \sum_{i=1}^{k} l_i g_{i}\big(t, \bar{Z}(t), \Sigma(t) \big), \label{eqn_040} \\
\frac{d}{dt}\Sigma(t) &=& A(t) \Sigma(t) + \Sigma(t) A(t)' + B(t)B(t)', \label{eqn_041}
\end{eqnarray}
where $A(t)$ is the gradient matrix of $G\big(t,\bar{Z}(t), \Sigma(t)\big)$ with respect to $\bar{Z}(t)$, and $B(t)$ is the $d \times k$ matrix such that its $i^\textrm{th}$ column is $l_i \sqrt{g_i\big(t,\bar{Z}(t), \Sigma(t)\big)}$.
\begin{proof}
Since $V(0) = 0$, from Corollary \ref{cor_moment}, we have (\ref{eqn_038}) and (\ref{eqn_039}). By rewriting (\ref{eqn_026}) in Theorem \ref{theo_modfluid} in differential form, we have (\ref{eqn_040}), and by Theorem \ref{theo_moment}, we have (\ref{eqn_041}). Note that since the equations for $\bar{Z}(t)$ and $\Sigma(t)$ are coupled, we must solve (\ref{eqn_040}) and (\ref{eqn_041}) simultaneously.
\end{proof}
\end{proposition}
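A minimal sketch of how (\ref{eqn_040}) and (\ref{eqn_041}) can be integrated simultaneously is given below for a hypothetical one-dimensional station with arrivals (rate $\lambda$, jump $+1$), services (rate $\mu\min(x,n)$, jump $-1$) and abandonments (rate $\beta(x-n)^+$, jump $-1$); the parameter values are illustrative and the gradient $A(t)$ is approximated by finite differences rather than in closed form.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import solve_ivp

lam, mu, beta, n_srv = 50.0, 1.0, 2.0, 50.0   # illustrative parameter values
l = np.array([+1.0, -1.0, -1.0])              # jump directions

def expected_plus(m, s):                      # E[(X - n_srv)^+] for X ~ N(m, s^2)
    s = max(s, 1e-8)
    z = (n_srv - m) / s
    return (m - n_srv) * (1.0 - norm.cdf(z)) + s * norm.pdf(z)

def g(t, m, var):                             # Gaussian-adjusted rates g_i(t, m, Sigma)
    s = np.sqrt(max(var, 0.0))
    plus = expected_plus(m, s)
    # E[min(X, n_srv)] = m - E[(X - n_srv)^+]
    return np.array([lam, mu * (m - plus), beta * plus])

def rhs(t, y):
    m, var = y
    gi = g(t, m, var)
    eps = 1e-4                                # finite-difference gradient of G in the mean
    A = (l @ g(t, m + eps, var) - l @ g(t, m - eps, var)) / (2.0 * eps)
    BBt = (l ** 2) @ gi                       # B(t)B(t)' collapses to sum_i l_i^2 g_i here
    return [l @ gi, 2.0 * A * var + BBt]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.05)
Zbar, Sigma = sol.y                           # adjusted mean and variance over time
\end{verbatim}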
In conclusion, we defined an adjusted process $Z(t)$ in Section \ref{sec_adjustedfluid} to obtain the exact $E\big[X(t)\big]$ for the process $X(t)$, which is the state of a non-stationary and state-dependent queueing system. It is, however, generally not possible to obtain such $g_i$'s exactly, and hence in this section we provided an algorithm utilizing a Gaussian density. Under this approximation, the limit process turns out to be a Gaussian process. We recognize that this is not true for the original process. As mentioned in Section \ref{sec_introduction}, however, this paper does not pursue finding the exact distribution of the original process but proposes an effective way to estimate mean values and the covariance matrix of the original process. Therefore, in the following sections, by means of numerical examples, we illustrate our methodology and show its effectiveness.
\section{Multiserver queues with abandonments and retrials} \label{sec_application}
Multiserver queues with abandonments and retrials are extensively studied in the literature since they are used to model an important application, namely ``call centers'' (e.g. \citep{Halfin81,Garnet02,Whitt04,zeltyn05,Whitt06b}). In this section, therefore, we provide an in-depth explanation of how our approach works for this queueing system via numerical examples.
\begin{figure}
\centering
\includegraphics[width = .6\textwidth]{retrial}
\caption{Multiserver queue with abandonment and retrials, \citet{mandelbaum02}} \label{fig_retrial}
\end{figure}
Figure \ref{fig_retrial} illustrates a multiserver queue with abandonments and retrials described in \citet{mandelbaum98, mandelbaum02}. There are $n_t$ servers in the service node at time $t$. Customers arrive to the service node according to a nonhomogeneous Poisson process with rate $\lambda_t$. The service time of each customer is exponentially distributed with rate $\mu_t^1$. Customers in the queue are served under the FCFS policy, and customers abandon at rate $\beta_t$ with exponentially distributed time to abandon. Abandoning customers leave the system with probability $p_t$ or go to the retrial queue with probability $1-p_t$. The retrial queue is an infinite-server queue, and hence each customer in the retrial queue waits there for a random amount of time with mean $1/\mu_t^2$ and then returns to the service node.\\
Let $X(t) = \big(x_1(t),x_2(t)\big)$ be the system state where $x_1(t)$ is the number of customers in the service node and $x_2(t)$ is the number of customers in the retrial queue. Then, $X(t)$ is the unique solution to the following integral equation:
\begin{eqnarray*}
x_1(t) &=& x_1(0) +Y_1\Big(\int_{0}^{t}\lambda_s ds\Big) + Y_2\Big(\int_{0}^{t}x_2(s)\mu_s^2ds\Big) - Y_3\Big(\int_{0}^{t}\big(x_1(s)\wedge n_s\big)\mu_s^1ds\Big) \nonumber \\
&& - Y_4\Big(\int_{0}^{t}\big(x_1(s)-n_s\big)^+\beta_s(1-p_s)ds\Big) - Y_5\Big(\int_{0}^{t}\big(x_1(s)-n_s\big)^+\beta_sp_sds\Big), \label{eqn_rx1} \\
x_2(t) &=& x_2(0) + Y_4\Big(\int_{0}^{t}\big(x_1(s)-n_s\big)^+\beta_s(1-p_s)ds\Big) - Y_2\Big(\int_{0}^{t}x_2(s)\mu_s^2ds\Big). \label{eqn_rx2}
\end{eqnarray*}
Then, following the notation in Section \ref{sec_fluiddiffusion}, we have, for $x=(x_1,x_2)$ and $t\le T$,
\begin{eqnarray}
f_1(t,x) &=& \lambda_t, \nonumber \\
f_2(t,x) &=& \mu_t^2 x_2, \nonumber \\
f_3(t,x) &=& \mu_t^1(x_1 \wedge n_t), \nonumber \\
f_4(t,x) &=& \beta_t(1-p_t)(x_1-n_t)^+, \quad \textrm{and} \nonumber \\
f_5(t,x) &=& \beta_tp_t(x_1-n_t)^+. \nonumber
\end{eqnarray}
We can verify that all $f_i$'s satisfy the conditions to apply Theorem \ref{theo_fluid}. However, we cannot apply Theorem \ref{theo_diffusion} directly since $f_3$, $f_4$, and $f_5$ are not differentiable at $x_1=n_t$. To resolve this, \citet{mandelbaum02} assumes measure zero at the set of time points when the fluid limit hits the non-differentiable points and applies Theorem \ref{theo_diffusion}. \citet{mandelbaum02} observes that for a system of a fixed size, assuming measure zero works well when $x_1(t)$ does not stay too long near the critically loaded phase. It also provides the actual form of the differential equations for the diffusion model in \citet{mandelbaum98}, which, in fact, are not computationally tractable; e.g. see equations (4.1) and (4.2) in \citep{mandelbaum02}. As mentioned in Section \ref{sec_adjustedfluid}, we approach the problem from a different point of view. Notice that in addition to their non-differentiability, $f_3$, $f_4$, and $f_5$ do not satisfy $E\big[f_i\big(t,X(t)\big)\big] = f_i\big(t,E\big[X(t)\big]\big)$ either. Therefore, we would like to apply Theorem \ref{theo_modfluid} to obtain $E\big[X(t)\big]$ exactly. Recalling Section \ref{sec_heuristic}, however, obtaining exact $g_i$'s is not possible and hence we obtain the $g_i$'s from Gaussian density as follows:
\begin{eqnarray}
g_1(t,x) &=& \lambda_t, \nonumber \\
g_2(t,x) &=& \mu_t^2 x_2, \nonumber \\
g_3(t,x) &=& \mu_t^1\big(n_t + (x_1-n_t)\Phi(n_t,x_1,\sigma_{1_t}) - \sigma_{1_t}^2 \phi(n_t,x_1,\sigma_{1_t})\big), \nonumber \\
g_4(t,x) &=& \beta_t(1-p_t)\Big((x_1-n_t)\big(1-\Phi(n_t,x_1,\sigma_{1_t})\big)+\sigma_{1_t}^2\phi(n_t,x_1,\sigma_{1_t})\Big), \quad \textrm{and} \nonumber \\
g_5(t,x) &=& \beta_tp_t\Big((x_1-n_t)\big(1-\Phi(n_t,x_1,\sigma_{1_t})\big)+\sigma_{1_t}^2\phi(n_t,x_1,\sigma_{1_t})\Big), \nonumber
\end{eqnarray}
where $\Phi(a,b,c)$ and $\phi(a,b,c)$ are function values at point $a$ of the Gaussian CDF and PDF respectively with mean $b$ and standard deviation $c$.\\
Since $f_1(t,x)$ and $f_2(t,x)$ are constant and linear with respect to $x$ respectively, $g_1(t,x)=f_1(t,x)$ and $g_2(t,x)=f_2(t,x)$. The derivation of the other $g_i(\cdot,\cdot)$'s is straightforward but requires some computational effort, and hence we provide the details in Appendix \ref{app_gi}. Note that $g_3$, $g_4$, and $g_5$ involve $\sigma_{1_t}$, which is treated here as a function of $t$ and is supplied by the adjusted diffusion model (see equations (\ref{eqn_036}) and (\ref{eqn_037})). \\
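For concreteness, the expressions for $g_3$, $g_4$, and $g_5$ above can be evaluated directly with standard Gaussian CDF/PDF routines. The following sketch transcribes them and checks $g_3$ against a Monte Carlo estimate of $E\big[\mu^1_t\min(X_1,n_t)\big]$ under the Gaussian assumption; the particular numbers used in the check are illustrative, not taken from our experiments.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def g3_g4_g5(x1, n_t, sigma1, mu1_t, beta_t, p_t):
    # Direct transcription of the displayed g_3, g_4, g_5; Phi and phi are the
    # Gaussian CDF and PDF with mean x1 and standard deviation sigma1, at n_t.
    Phi = norm.cdf(n_t, loc=x1, scale=sigma1)
    phi = norm.pdf(n_t, loc=x1, scale=sigma1)
    g3 = mu1_t * (n_t + (x1 - n_t) * Phi - sigma1**2 * phi)
    plus = (x1 - n_t) * (1.0 - Phi) + sigma1**2 * phi
    return g3, beta_t * (1.0 - p_t) * plus, beta_t * p_t * plus

rng = np.random.default_rng(2)
x1, n_t, sigma1 = 52.0, 50.0, 4.0
samples = rng.normal(x1, sigma1, size=10**6)
mc = np.minimum(samples, n_t).mean()          # Monte Carlo estimate of E[min(X1, n_t)]
g3, g4, g5 = g3_g4_g5(x1, n_t, sigma1, mu1_t=1.0, beta_t=2.0, p_t=0.5)
print(g3, mc)                                 # the two values should agree closely
\end{verbatim}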
For a system of fixed size, neither our proposed method nor the method assuming measure zero guarantees exact estimation of the system state. However, both methods provide computational tractability. Therefore, we compare our method against the method assuming measure zero in \citet{mandelbaum02}. We conduct simulations under settings similar to those in \citet{mandelbaum02}. We use 5,000 independent simulation runs and compare the simulation result with both our method and the method assuming measure zero. We use constant rates for all parameters except the arrival rate. The arrival rate alternates between $45$ and $55$ every two time units. Figures \ref{fig_retrial_mean} and \ref{fig_retrial_covariance} show the estimation results from one experiment. The number of servers ($n_t$) is $50$ and the service rate of each server is $1$.
\begin{figure}[htbp]
\begin{center}
\subfigure[ Mean numbers by assuming measure zero]{
\includegraphics[width = .48\textwidth]{retrial_fluid_original}}
\subfigure[ Mean numbers by our proposed method]{
\includegraphics[width = .48\textwidth]{retrial_fluid_adjusted}}
\caption{Comparison of mean values, $E\big[X(t)\big]$}
\label{fig_retrial_mean}
\end{center}
\end{figure}
As seen in Figure \ref{fig_retrial_mean}, the number of customers in the service node ($x_1(t)$) stays near the critically loaded point for a long time. As \citet{mandelbaum02} points out, the method assuming measure zero shows a significant difference in estimating $E\big[x_2(t)\big]$. On the other hand, our proposed method provides accurate results. When we examine the estimation of the covariance matrix, we observe results similar to those for the mean values.
\begin{figure}[htbp]
\begin{center}
\subfigure[ Covariance matrix by assuming measure zero]{
\includegraphics[width = .48\textwidth]{retrial_diffusion_original}}
\subfigure[ Covariance matrix by our proposed method]{
\includegraphics[width = .48\textwidth]{retrial_diffusion_adjusted}}
\caption{Comparison of covariance matrix entries, $Cov\big[X(t),X(t)\big]$}
\label{fig_retrial_covariance}
\end{center}
\end{figure}
As seen in Figure \ref{fig_retrial_covariance}, the method assuming measure zero causes ``spikes'' as also pointed out in \citet{mandelbaum02}. Our proposed method, however, provides reasonable accuracy and no spikes at all. To verify the effectiveness of our method, we conduct several experiments with different parameter combinations.
\begin{table}[htdp]
\caption{Experiments setting}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
exp \# & svrs & $\lambda_1$ & $\lambda_2$ & $\mu_1$ & $\mu_2$ & $\beta$ & $p$ & alter & time \\ \hline \hline
1 & 50 & 40 & 80 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline
2 & 50 & 40 & 60 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline
3 & 100 & 80 & 120 & 1 & 0.2 & 2.0 & 0.7 & 2 & 20 \\ \hline
4 & 100 & 90 & 110 & 1 & 0.2 & 2.0 & 0.7 & 2 & 20 \\ \hline
5 & 50 & 40 & 80 & 1 & 0.2 & 1.5 & 0.7 & 2 & 20 \\ \hline
6 & 50 & 40 & 60 & 1 & 0.2 & 1.5 & 0.7 & 2 & 20 \\ \hline
7 & 50 & 45 & 55 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline
8 & 100 & 95 & 105 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline
9 & 150 & 140 & 160 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline
10 & 150 & 100 & 190 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\
\hline
\end{tabular}
\end{center}
\label{tab_settings}
\end{table}
Table \ref{tab_settings} describes the setting of each experiment. In Table \ref{tab_settings}, ``svrs'' is the number of servers ($n_t$), ``alter'' is the length of time for which each arrival rate lasts, and ``time'' is the end time of our analysis. We already recognize that the method assuming measure zero works well when the system \emph{does not linger} too long near the non-differentiable points. For comparison, therefore, our experiments contain several cases where the system \emph{does linger} relatively long around those points as well as cases where it does not. Experiments 1-4 are intended to show the effects of ``lingering'' around non-differentiable points. We change $\beta_t=\beta$ and $p_t=p$ as well as the arrival rates in experiments 5-8 to see the effects of other parameters. In fact, from other experiments not listed in Table \ref{tab_settings}, it turns out that changing other parameters does not affect estimation accuracy significantly. Experiments 9 and 10 increase both the arrival rates and the number of servers in order to observe how these, together with lingering around the non-differentiable points, affect estimation accuracy. Here we explain the overall results; for the details of the numerical results, see Tables \ref{tab_mean_x1}-\ref{tab_var_x2} in Appendix \ref{app_table}. Similar to the results in Figures \ref{fig_retrial_mean} and \ref{fig_retrial_covariance}, we observe that lingering significantly degrades the quality of the approximations when assuming measure zero. On the other hand, we see that our proposed method provides excellent accuracy in both mean and covariance values. Even if we increase both the arrival rates and the number of servers, we notice that lingering still affects estimation accuracy significantly when assuming measure zero, but it does not in our proposed method.
\begin{figure}
\centering
\includegraphics[width = .8\textwidth]{avgresult}
\caption{Average difference against simulation} \label{fig_avgresult}
\end{figure}
Figure \ref{fig_avgresult} illustrates the average percentage difference of both methods against the simulation. The average differences are obtained by averaging all differences in the tables, so the figure does not provide an absolute comparison between the two methods. However, from Figure \ref{fig_avgresult}, we can see that our proposed method is highly effective relative to the method assuming measure zero.
\section{Additional Applications} \label{sec_additional}
We applied our proposed method to a wide variety of non-stationary state-dependent queueing systems. Since our proposed method is based on the adjusted fluid model, the mean queue lengths are accurate in all of these systems. Moreover, when the rate functions are smooth, or the Gaussian density approximation is accurate, or both, our adjusted diffusion model also provides accurate results. Due to space restrictions, and because those cases add little beyond the examples already shown, we omit them here. Instead, we focus on scenarios with non-smooth rate functions where we conjecture that the Gaussian density approximation may be inaccurate. We consider two such applications \emph{not to showcase the effectiveness of our methodology, but to illustrate that there is room for improvement} for future research. We would nonetheless like to point out that, to the best of our knowledge, our approximations are still more accurate than those in the existing literature. In particular, we consider multiclass preemptive priority queues (Section \ref{subsec_priority}) and peer networks (Section \ref{subsec_p2p}). For these applications, we do not provide as much detail as for the multiserver queues with abandonments and retrials in Section \ref{sec_application}, and we show the results graphically.
\subsection{Multiclass preemptive priority queues} \label{subsec_priority}
In this section, we consider a multiclass priority queue (see Figure \ref{fig_multiclass}) with a preemptive policy, as described in \citet{mandelbaum98}. Note that \citet{mandelbaum98} does not solve this example numerically. Here, we not only apply our methodology but also extend the method assuming measure zero in \citet{mandelbaum02} to this case.\\
\begin{figure}
\centering
\includegraphics[width = .8\textwidth]{multiclass}
\caption{Multiclass priority queue with preemptive policy, \citet{mandelbaum98}} \label{fig_multiclass}
\end{figure}
To explain the priority queue briefly, there are $c$ classes of customers. Class $i$ customers arrive at the system at rate $\lambda_t^i$ and are served by available servers, out of $n_t$ servers in total, at rate $\mu_t^i$ at time $t$. If a class $i$ customer arrives and no server is available, then the customer in service with the highest class index (i.e., the lowest priority) is pushed back to the queue and the class $i$ customer is served. If no customer of a higher class index is in service, the class $i$ customer waits in the queue. \\
In our numerical study, we use two classes of customers. Let $X(t)=\big(x_1(t),x_2(t)\big)$ be the state of the system at time $t$ where $x_1(t)$ and $x_2(t)$ are the number of customers of class 1 and 2 respectively. Then, $X(t)$ is the solution to the following integral equations:
\begin{eqnarray*}
x_1(t) &=& x_1(0) + Y_1\Big(\int_0^t \lambda_s^1 ds\Big) - Y_3\Big(\int_0^t \mu_s^1\big(x_1(s) \wedge n_s\big)ds\Big), \\
x_2(t) &=& x_2(0) + Y_2\Big(\int_0^t \lambda_s^2 ds\Big) - Y_4\Big(\int_0^t \mu_s^2\big(x_2(s) \wedge (n_s-x_1(s))^+\big)ds\Big).
\end{eqnarray*}
We set the number of servers ($n_t$) to $200$. The arrival rate of class 1 customers ($\lambda_t^1$) alternates between $120$ and $200$ every two time units, and the arrival rate of class 2 customers ($\lambda_t^2$) is $20$. The service rates of both classes are $1$, i.e., $\mu_t^1 = \mu_t^2 = 1$. We conduct 5,000 simulation runs and obtain mean values by averaging them.
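For concreteness, the following is a minimal Python sketch of one simulation replication of this two-class preemptive priority queue; it is not the code used to produce the reported results. The piecewise-constant class-1 arrival rate is handled by thinning; the horizon, the initial state, and the starting value of the alternating rate are assumptions, since they are not specified in the text.
\begin{verbatim}
# Minimal sketch of one replication of the two-class preemptive priority queue.
# Assumptions (not stated above): horizon T = 20, empty initial system, and the
# alternating class-1 rate starts at 120.
import numpy as np

rng = np.random.default_rng(0)

n = 200                          # number of servers n_t
lam2, mu1, mu2 = 20.0, 1.0, 1.0
T = 20.0                         # horizon (assumed)

def lam1(t):
    # class-1 arrival rate alternates between 120 and 200 every two time units
    return 120.0 if int(t // 2) % 2 == 0 else 200.0

def rates(t, x1, x2):
    busy1 = min(x1, n)                   # class-1 customers in service
    busy2 = min(x2, max(n - x1, 0))      # class-2 customers in service (preempted by class 1)
    return np.array([lam1(t), lam2, mu1 * busy1, mu2 * busy2])

def simulate():
    t, x1, x2 = 0.0, 0, 0
    rate_bound = 200.0 + lam2 + mu1 * n + mu2 * n   # upper bound on the total rate
    while t < T:
        t += rng.exponential(1.0 / rate_bound)       # propose the next event time
        r = rates(t, x1, x2)
        if rng.random() < r.sum() / rate_bound:      # thinning: accept with prob total/bound
            k = rng.choice(4, p=r / r.sum())
            x1 += int(k == 0) - int(k == 2)          # class-1 arrival / service completion
            x2 += int(k == 1) - int(k == 3)          # class-2 arrival / service completion
    return x1, x2

print(simulate())    # one replication; averaging many replications estimates E[X(T)]
\end{verbatim}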
\begin{figure}[htbp]
\begin{center}
\subfigure[ Mean numbers by assuming measure zero]{
\includegraphics[width = .48\textwidth]{multiclass_fluid_original}}
\subfigure[ Mean numbers by our proposed method]{
\includegraphics[width = .48\textwidth]{multiclass_fluid_adjusted}}
\caption{Comparison of mean values, $E\big[X(t)\big]$}
\label{fig_multiclass_mean}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[ Covariance matrix by assuming measure zero]{
\includegraphics[width = .48\textwidth]{multiclass_diff_original}}
\subfigure[ Covariance matrix by our proposed method]{
\includegraphics[width = .48\textwidth]{multiclass_diff_adjusted}}
\caption{Comparison of covariance matrix entries, $Cov\big[X(t),X(t)\big]$}
\label{fig_multiclass_covariance}
\end{center}
\end{figure}
Figures \ref{fig_multiclass_mean} and \ref{fig_multiclass_covariance} compare the method assuming measure zero and our proposed method against the simulation. Both methods work well for the mean value of $x_1(t)$. However, though not immediately obvious from Figure \ref{fig_multiclass_mean}, there is a 5-15\% difference for the mean value of $x_2(t)$ when using the method assuming measure zero, while our proposed method shows great accuracy. For the covariance matrix, our proposed method outperforms the method assuming measure zero, as seen in Figure \ref{fig_multiclass_covariance}. However, our proposed method underestimates the variance of $x_2(t)$. Here, we explain our conjecture on this underestimation. As described in Section \ref{sec_heuristic}, we utilize the Gaussian density to obtain the new rate functions $g_i(\cdot,\cdot)$.
In this example, we observe from our numerical experiments that the empirical density is not close to the Gaussian density when the fluid limit stays near a non-differentiable point. Our conjecture is that the asymmetry of the empirical density, unlike the Gaussian density, leads to larger values of the covariance matrix entries. Note, however, that although this affects the estimation of the covariance matrix (usually as underestimation), it does not significantly affect the estimation of the mean values of the system state, i.e., we still obtain accurate estimates of the mean values.
\subsection{Peer networks} \label{subsec_p2p}
\begin{figure}
\centering
\includegraphics[width = .5\textwidth]{p2p}
\caption{Peer networks} \label{fig_p2p}
\end{figure}
Figure \ref{fig_p2p} illustrates the queueing system we consider in this section. Motivated by peer networks with a centralized controller (frequently encountered in the multimedia content delivery industry), we consider the following queueing scenario.\\
At time $t$, customers arrive at the system at rate $\lambda_t$. Customers in the queue are sent to available active servers. The service rate of each server is $\mu_t$. After a customer is fully served, s/he becomes a new active server. Each server serves customers for a random amount of time with mean $1/\theta_t$, and then either becomes inactive with probability $p_t$ or leaves the system with probability $1-p_t$. Each inactive server spends a random amount of time with mean $1/\gamma_t$ in the inactive state and then becomes active again. Note that only active servers can serve customers.\\
Let $X(t)=\big(x_1(t),x_2(t),x_3(t)\big)$ be the state of the system at time $t$ where $x_1(t)$, $x_2(t)$, and $x_3(t)$ are the number of customers, active servers, and inactive servers respectively. Then, $X(t)$ is obtained by solving the following integral equation:
\begin{eqnarray*}
x_1(t) &=& Y_1\Big(\int_{0}^{t}\lambda_s ds\Big) - Y_2\Big(\int_{0}^{t} \mu_s (x_1(s) \wedge x_2(s)) ds\Big), \\
x_2(t) &=& Y_2\Big(\int_{0}^{t} \mu_s (x_1(s) \wedge x_2(s)) ds\Big) - Y_3\Big(\int_{0}^{t} \theta_s p_s x_2(s) ds\Big) \\
&&- Y_4\Big(\int_{0}^{t} \theta_s (1-p_s) x_2(s) ds\Big) + Y_5\Big(\int_{0}^{t} \gamma_s x_3(s) ds\Big), \textrm{ and} \\
x_3(t) &=& Y_3\Big(\int_{0}^{t} \theta_s p_s x_2(s) ds\Big) - Y_5\Big(\int_{0}^{t} \gamma_s x_3(s) ds\Big),
\end{eqnarray*}
where $Y_1(\cdot)$, $Y_2(\cdot)$, $Y_3(\cdot)$, $Y_4(\cdot)$, and $Y_5(\cdot)$ are independent rate-1 Poisson processes corresponding to customer arrivals, service completions, servers becoming inactive, servers leaving, and inactive servers becoming active again, respectively. Similar settings are found in previous studies (\citet{qiu02} and \citet{yang04}). They, however, used Poisson processes with constant rate functions to construct the system model and did not consider time-varying properties. Furthermore, they mainly focused on steady-state analysis and did not provide an in-depth analysis of the transient behavior of the system. For this application, therefore, our setting is more general than in previous research in that we address time-varying rate functions and provide performance measures over the entire lifespan of the system. In this section, however, we analyze the transient period in which the system does not have enough servers to serve customers. This transient analysis is important since quality-of-service issues may arise during this period. We will provide details of the analysis of peer networks in a forthcoming paper.\\
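To make the mean dynamics concrete, the following is a minimal Python sketch of the \emph{unadjusted} fluid model, obtained by replacing each Poisson process above by its instantaneous rate; it is not the adjusted model proposed in this paper. The parameter values are those of the numerical example below, while the horizon is an arbitrary choice.
\begin{verbatim}
# Minimal sketch of the ordinary (unadjusted) fluid model for the peer network:
# each Poisson process is replaced by its instantaneous rate and the resulting
# ODE is integrated by forward Euler.  Parameters follow the numerical example
# in this section; the horizon T is an assumption.
import numpy as np

lam, mu, theta, gamma, p = 400.0, 2.0, 0.3, 0.5, 0.9
x = np.array([0.0, 10.0, 0.0])   # (customers x1, active servers x2, inactive servers x3)
dt, T = 1e-3, 30.0

def drift(x):
    x1, x2, x3 = x
    served = mu * min(x1, x2)    # aggregate service rate
    return np.array([
        lam - served,                         # customers: arrivals - completions
        served - theta * x2 + gamma * x3,     # active: +completions +returns -(inactive or leave)
        theta * p * x2 - gamma * x3,          # inactive: +going inactive -returns
    ])

for _ in range(int(T / dt)):
    x = x + dt * drift(x)

print(x)   # fluid approximation of E[X(T)]
\end{verbatim}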
\begin{figure}[htbp]
\begin{center}
\subfigure[ Mean numbers by assuming measure zero]{
\includegraphics[width = .48\textwidth]{fluid-400-2-03-05-10}}
\subfigure[ Mean numbers by our proposed model]{
\includegraphics[width = .48\textwidth]{afluid-400-2-03-05-10}}
\caption{Comparison of mean values, $E\big[X(t)\big]$}
\label{fig_p2p_mean}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[ Covariance matrix by assuming measure zero]{
\includegraphics[width = .48\textwidth]{diff-400-2-03-05-10}}
\subfigure[ Covariance matrix by our proposed method]{
\includegraphics[width = .48\textwidth]{adiff-400-2-03-05-10}}
\caption{Comparison of covariance matrix entries, $Cov\big[X(t),X(t)\big]$}
\label{fig_p2p_covariance}
\end{center}
\end{figure}
Figure \ref{fig_p2p_mean} illustrates the mean numbers of customers ($E\big[x_1(t)\big]$), active servers ($E\big[x_2(t)\big]$), and inactive servers ($E\big[x_3(t)\big]$) over time. We apply both methods (i.e., the method assuming measure zero and our proposed method) and compare them with the simulation result. We use the parameters $\lambda_t = 400$, $\mu_t= 2$, $\theta_t = 0.3$, $\gamma_t=0.5$, and $p_t=0.9$ for $t\ge 0$, as well as $x_1(0)=x_3(0)=0$ and $x_2(0)=10$. We conduct 5,000 simulation runs and obtain mean values by averaging them. In this numerical example, we want to see what happens when the fluid limit comes close to the critically loaded point. Figure \ref{fig_p2p_mean} (a) compares the simulation with the method assuming measure zero. That method works well when the fluid limit is far from the critically loaded point; however, as the fluid limit approaches that point, it deviates from the simulation result. On the other hand, when we apply our proposed method, as seen in Figure \ref{fig_p2p_mean} (b), it provides an almost exact estimate even when the fluid limit is close to the critically loaded point. Figure \ref{fig_p2p_covariance} plots the covariance matrix entries of the system over time. We observe sharp spikes under the method assuming measure zero (Figure \ref{fig_p2p_covariance} (a)). Note that the times at which these spikes peak coincide exactly with the times at which the fluid limit hits the critically loaded, non-differentiable point in Figure \ref{fig_p2p_mean} (a). When we apply our proposed method, there are no sharp spikes at all and the covariance matrix entries are quite close to the simulation result, as seen in Figure \ref{fig_p2p_covariance} (b). Thus we believe that our proposed method works well even under this complex scenario.
\section{Conclusion} \label{sec_conclusion}
In this paper, we first explained the fluid and diffusion models used in the analysis of state-dependent queues and showed the potential problems one faces in balancing accuracy and computational tractability. The first problem stems from the fact that the expectation of a function of a random vector $X$ is, in general, not equal to the function evaluated at the expectation of $X$. Therefore, unless the two are equal or close, the fluid model may not provide an accurate estimate of the mean values of the system state. The second problem is caused by the non-differentiability of rate functions, which prevents applying the diffusion model in \citet{kurtz78}. Addressing these problems is therefore quite important in order to develop accurate approximations as well as to achieve computational feasibility. To this end, we proposed a methodology to obtain the exact mean values of the system state and an algorithm to achieve computational tractability.\\
The basic idea of our approach is to construct a new stochastic process whose fluid limit is exactly the mean value of the system state. We proved that if the rate functions in the original model satisfy the conditions required to apply the fluid model, the rate functions in the constructed model also satisfy those conditions. Therefore, we can apply the adjusted fluid model whenever we can apply the existing fluid model. It turns out that there is, in general, no computational method to obtain the adjusted fluid model exactly, and hence we utilize the Gaussian density to approximate it. By using the Gaussian density, the rate functions in the constructed model become smooth and we are able to apply the diffusion model in \citet{kurtz78} even when we could not apply it to the original process.\\
To validate our proposed method, we provided several numerical examples of non-stationary state-dependent queueing systems. In these examples, our proposed method shows great accuracy compared with the method assuming measure zero (which is, to the best of our knowledge, the only other computationally tractable approach in the literature). Due to space restrictions, we have not shown all examples where our method works well. We observe, however, that when the Gaussian density assumption is inaccurate, especially near non-smooth points, our methodology needs further investigation for the covariance matrix. We conjecture that this phenomenon arises from the gap between the empirical and Gaussian densities. To address this, one could investigate the properties of the specific rate functions that affect the shape of the empirical density and devise a new algorithm for finding the $g_i(\cdot,\cdot)$'s from other density functions.
\section{Introduction and Results}
In Euclidean space $\mathbb{R}^n$, it is always possible to exchange the order of partial derivatives: in particular, derivatives of solutions of the heat equation also satisfy the heat equation and therefore enjoy many nice properties, in particular a maximum principle.
Let $u(t,x)$ denote a solution of the heat equation in $\mathbb{R}^n$. Then it is explicitly given by
$$ u(t,x) = \frac{1}{(4\pi t)^{n/2}} \int_{\mathbb{R}^n} \exp\left(-\frac{\|x-y\|^2}{4t} \right) u(0,y) dy.$$
At any given point $x_0 \in \mathbb{R}^n$, for any unit vector $\nu \in \mathbb{S}^{n-1}$ and any $k \geq 1$, we can (at least formally) differentiate under the integral sign and integrate by parts to obtain
$$ \frac{\partial^k u}{\partial \nu^k} (t, x_0) = \frac{1}{(4\pi t)^{n/2}} \int_{\mathbb{R}^n}\exp\left(-\frac{\|x_0-y\|^2}{4t} \right)\frac{\partial^k u}{\partial \nu^k} (0, y)dy.$$
We were interested in whether there is an analogous result on bounded domains in terms of the heat kernel $p_t(\cdot, \cdot)$ of the domain $\Omega$.
\begin{theorem}[Main Result] Let $\Omega \subset \mathbb{R}^n$ be a domain with $C^1-$boundary and let $f \in C^k(\Omega) \cap L^{\infty}(\Omega)$, let $x_0 \in \Omega$ and $\nu \in \mathbb{S}^{n-1}$. Then, for all $t>0$, the solution $e^{t\Delta} f$ of the heat equation with Dirichlet boundary conditions has
$$X = \left| \frac{\partial^k }{\partial \nu^k} e^{t\Delta} f(x_0) - \int_{\Omega} p_t(x_0, y) \frac{\partial^k }{\partial \nu^k} f(y) dy \right|$$
bounded by
$$X \leq \left(1 - \int_{\Omega}p_t(x_0, y) dy \right) \max_{0 \leq s \leq t} \left\| \frac{\partial^k }{\partial \nu^k} e^{s\Delta} f \right\|_{L^{\infty}(\partial \Omega)}.$$\\
\end{theorem}
Note that if $t \ll d(x_0, \partial \Omega)^2$, then the factor $1 - \int_{\Omega}p_t(x_0, y) dy$ in our error bound is actually quite small, independently of what happens on the boundary.
In particular, in free space, the heat kernel always has total integral 1 and we get $X \equiv 0$.
The result seems to have fairly natural extensions to the Neumann Laplacian, higher derivatives and even more general parabolic equations following essentially the same type of argument.
This type of argument might have an interesting analogue on manifolds (both with and without boundary). In the case of manifolds without boundary, one would expect that the underlying curvature has an additional perturbative effect on the particles (`stochastic parallel transport', see e.g. Bismut \cite{bis}, Elworthy \& Li \cite{el} and Thalmaier \& Wang \cite{thal}). We also refer to recent developments on second-order Feynman-Kac formulas (see Li \cite{li} and Thompson \cite{thomp}).\\
\textbf{An Application.}
Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with smooth boundary. We consider Laplacian eigenfunctions,
$ -\Delta \phi_k = \lambda_k \phi_k$,
with Dirichlet conditions on the boundary $\partial \Omega$. If these eigenfunctions are normalized in $L^2$, i.e. $\| \phi_k\|_{L^2}=1$, then results of
Levitan in 1952 \cite{levitan}, Avakumovic in 1956 \cite{ava} and H\"ormander in 1968 \cite{hor} guarantee that
$$ \left\| \phi_k\right\|_{L^{\infty}(\Omega)} \leq c_{\Omega} \cdot \lambda_k^{\frac{n-1}{4}},$$
where $c_{\Omega}$ is a constant depending only on $\Omega$.
This estimate is sharp (for an example on a ball, see \cite[\S 2.3]{grieser}) and has been well studied \cite{berard, blair, hassell, smith0, smith, sogge, sogge2, sogge3, sogge35, sogge4}.
The optimal estimate for the gradient is
$$ \left\| \nabla \phi_k\right\|_{L^{\infty}(\Omega)} \leq c_{\Omega} \cdot \lambda_k^{\frac{n+1}{4}}$$
and has been studied by Xu \cite{xu1, xu2, xu3} and, subsequently, by
Arnaudon, Thalmaier \& Wang \cite{arnaud}, Cheng, Thalmaier \& Thompson \cite{cheng}, Hu, Shi \& Xu \cite{hu} and Shi \& Xu \cite{shi}. On compact manifolds without boundary, there are results of Xu \cite{xu0} for all derivatives and by Wang \& Zhou \cite{wang} for linear combinations of eigenfunctions. Recently, Frank \& Seiringer \cite{frank} showed that for compact $\Omega \subset \mathbb{R}^n$ with $C^{k,\delta}-$smooth boundary, there is an estimate
$$ \left\| \frac{\partial^k \phi_k}{\partial \nu^k} \right\|_{L^{\infty}(\Omega)} \lesssim_{\Omega, k}~ \lambda^{k/2} \| \phi_k\|_{L^{\infty}(\Omega)}. $$
As an application, we give an elementary proof of this inequality for $k=2$.
\begin{theorem} Let $\Omega \subset \mathbb{R}^n$ be a compact domain with smooth boundary. There exists a constant $c_{\Omega}$ such that for all solutions of $-\Delta \phi_k = \lambda_k \phi_k$ that vanish on $\partial \Omega$
$$ \| D^2 \phi_k(x)\|_{L^{\infty}(\Omega)} \leq c_{\Omega} \cdot \lambda_k^{1/2} \cdot \| \nabla \phi_k\|_{L^{\infty}(\Omega)}.$$
\end{theorem}
More generally, Theorem 1 allows us to obtain similar estimates also for higher derivatives provided there is some control on the derivatives on the boundary.
\section{Proof of Theorem 1}
\begin{proof} We describe the proof for $k=2$ in detail: the more general case $k \geq 3$ is completely analogous (after replacing the second differential quotient by the corresponding $k-$th differential quotient). We use a probabilistic argument. For any $x \in \Omega$, let $\omega_x(t)$ denote the position after $t$ units of time of a Brownian motion started in $x$. This Brownian motion gets `stuck' once it hits the boundary (this corresponds to Dirichlet boundary conditions). This gives us a way of solving the heat equation via
$$ e^{t\Delta} f(x) = \mathbb{E} \left(f(\omega_x(t))\right),$$
where the expectation ranges over all Brownian motions $\omega_x$ started in $x$ that run for $t$ units of time.
Our goal is to control the size of second derivatives of $e^{t\Delta} f(x)$ for points inside the domain. Let now $x_0 \in \Omega$ be fixed and let $\nu \in \mathbb{S}^{n-1}$ be some fixed direction. Calculus tells us that the second derivative of $e^{t\Delta} f$ at $x_0$ in direction $\nu$ is given by the limit of a differential quotient
$$ \left(\frac{\partial^2 }{\partial \nu^2} e^{t\Delta} f \right)(x_0) = \lim_{\varepsilon \rightarrow 0} \frac{ e^{t\Delta} f(x_0 + \varepsilon \nu) - 2 e^{t\Delta} f(x_0) + e^{t\Delta} f(x_0 - \varepsilon \nu)}{\varepsilon^2}.$$
We will, for the remainder of the proof, control exactly this differential quotient by carefully grouping the three expectations arising from
\begin{align*}
e^{t\Delta} f(x_0 + \varepsilon \nu) - 2 e^{t\Delta} f(x_0) + e^{t\Delta} f(x_0 - \varepsilon \nu) &= \mathbb{E} \left( f(\omega_{x+ \varepsilon \nu}(t))\right) - 2 \cdot \mathbb{E} \left( f(\omega_{x}(t))\right) \\
&+\mathbb{E} \left(f(\omega_{x- \varepsilon \nu}(t))\right).
\end{align*}
These are three independent Brownian motions started in three different points. However, these three initial points are very close to one another (ultimately, $\varepsilon \rightarrow 0$), so we expect them to be somewhat related. Let $A$ denote all Brownian motion paths started in $0 \in \mathbb{R}^n$ and running for $t$ units of time. Then, for each $y \in \Omega$, we can use translation invariance of Brownian motion in $\mathbb{R}^n$ to write the expectations as an expectation over the set $A$ via
$$ e^{t\Delta} f(y) = \mathbb{E} \left(f(\omega_y(t))\right) = \mathbb{E}_{a \in A} \left( f(a(t) + y) \cdot 1_{\left\{a(s) + y \in \Omega ~\mbox{\tiny for all}~0 \leq s \leq t\right\}}\right).$$
This has the advantage of being able to take the expectation with respect to one universal set $A$ shared by all three Brownian motions.
The probability that the indicator function equals one has an analytic expression in terms of the heat kernel $p_t(\cdot, \cdot)$:
$$ \mathbb{P} \left( a(s) + y \in \Omega ~\mbox{ for all}~0 \leq s \leq t \right) = \int_{\Omega} p_t(y, z) dz.$$
There is an interesting subset of $A$ depending on $x_0, \nu$ and $\varepsilon$
$$ A_{\varepsilon} = \left\{ a \in A: \quad \begin{cases} a(s) + x_0 \in \Omega \hspace{5pt} \qquad \mbox{\tiny for all}~0 \leq s \leq t \\
a(s) + x_0 + \varepsilon \nu \in \Omega ~\mbox{\tiny for all}~0 \leq s \leq t \\
a(s) + x_0 - \varepsilon \nu \in \Omega ~\mbox{\tiny for all}~0 \leq s \leq t \end{cases}\right\} .$$
$A_{\varepsilon} \subset A$ is the set of Brownian paths that remain in $\Omega$ for all times $0\leq s \leq t$ regardless of which of the three points $x_0, x_0 \pm \varepsilon \nu$ they are started in. For paths in $A_{\varepsilon}$, the differential quotient is easy to analyze: the path ends at a certain point $a(t)$ and the relative position of the three Brownian particles has been preserved since not a single one of them has hit the boundary. Recalling that for $B \subset \Omega$
$$ \mathbb{P}\left(\omega_x(t) \in B\right) = \int_{B} p_t(x_0, y) dy,$$
we see that
\begin{align*}
X = \lim_{\varepsilon \rightarrow 0} \frac{1}{\varepsilon^2}\mathbb{E}_{A_{\varepsilon}} \left[ f(\omega_{x_0+ \varepsilon v}(t)) - 2 f(\omega_{x_0}(t)) + f(\omega_{x_0- \varepsilon v}(t)) \right] \end{align*}
can be written as
\begin{align*}
X = \lim_{\varepsilon \rightarrow 0}\mathbb{E}_{a \in A_{\varepsilon}} \left( \frac{\partial^2 f}{\partial \nu^2}(a(t)) \right) = \int_{\Omega} p_t(x_0, z) \frac{\partial^2 f}{\partial \nu^2}(z) dz.
\end{align*}
This serves as a probabilistic proof that in $\mathbb{R}^n$, solving the heat equation and differentiation commute, since in that case $A_{\varepsilon} = A$ because there is no boundary. The remainder of the argument will be concerned with $A \setminus A_{\varepsilon}$.
The cases that are in $A \setminus A_{\varepsilon}$ can be written as a disjoint union
$$ A \setminus A_{\varepsilon} = \bigcup_{0 \leq s_0 \leq t} A_{1,s_0} \cup A_{2,s_0} \cup A_{3, s_0},$$
where $A_{i, s_0}$ is the event where the $i-$th of the three particles (having enumerated them in an arbitrary fashion) has hit the boundary at time $s_0$ and is the first of the three particles to do so (in case two particles hit the boundary simultaneously, we may put this event into either of the two sets; if all three hit simultaneously, it can be put into any of the three sets). We have a very good understanding of the size of $A \setminus A_{\varepsilon}$ since
$$ \lim_{\varepsilon \rightarrow 0} \mathbb{P}\left(A \setminus A_{\varepsilon}\right) = 1 - \int_{\Omega} p_t(x_0, y) dy.$$
It remains to understand the expected value of the differential quotient conditional on the path being in $A_{i, s_0}$ for all $1 \leq i \leq 3$ and all $0 \leq s_0 \leq t$. As it turns out, these cases can all be analyzed in the same fashion.
We illustrate the argument using the example shown in Fig. \ref{fig:conf}. In that example, the middle particle has impacted the boundary at time $s_0$. That middle Brownian motion is stuck and
$$ \mathbb{E} \left( f(\omega_{x_0}(t)) \big| \omega_{x_0}(s_0) = a(s_0) + x_0 \right) = 0.$$
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\draw [very thick] (3,0) to[out=20, in = 270] (5,1) to[out=90, in =340] (3,2);
\filldraw (5,0.3) circle (0.05cm);
\node at (6, 1) {$a(s_0) + x_0$};
\node at (4.4, 2) {$\Omega$};
\node at (3.5, 1) {$\Omega^c$};
\node at (2.6, 2.1) {$\partial \Omega$};
\node at (6.4, 1.7) {$a(s_0) + x_0 + \varepsilon \nu$};
\node at (6.4, 0.3) {$a(s_0) + x_0- \varepsilon \nu$};
\filldraw (5,1) circle (0.05cm);
\filldraw (5,1.7) circle (0.05cm);
\draw (5, 0.3) -- (5, 1.7);
\end{tikzpicture}
\caption{A path in $ A_{2, s_0} \subset A \setminus A_{\varepsilon}$: the middle point hits the boundary.}
\label{fig:conf}
\end{figure}
\end{center}
It remains to understand the expected value of $f(\omega_{x_0}(t))$ subject to knowing that at time $s_0$ the particle is in the position $a(s_0) + x_0 + \varepsilon \nu$ or $a(s_0) + x_0 - \varepsilon \nu$. At this point we use Markovianity: the Brownian motion does not remember its past and behaves as if it were freshly started in that point. In particular, this shows that
$$ \mathbb{E} \left( f(\omega_{x_0}(t)) \big| \omega_{x_0}(s_0) = a(s_0) + x_0 + \varepsilon \nu \right) = \mathbb{E} \left( f(\omega_{a(s_0) + x_0 + \varepsilon \nu}(t-s_0)) \right).$$
However, this is merely the formula for the solution of the heat equation and
$$ \mathbb{E} \left( f(\omega_{a(s_0) + x_0 + \varepsilon \nu}(t-s_0)) \right) = e^{(t-s_0)\Delta} f(a(s_0) + x_0 + \varepsilon \nu).$$
Likewise, we have that
$$ \mathbb{E} \left( f(\omega_{x_0}(t)) \big| \omega_{x_0}(s_0) = a(s_0) + x_0 - \varepsilon \nu \right) = e^{(t-s_0)\Delta} f(a(s_0) + x_0 - \varepsilon \nu).$$
Finally, we argue that even the middle point (the one that already impacted on the boundary) can be written the same way since
$$ \mathbb{E} \left( f(\omega_{x_0}(t)) \big| \omega_{x_0}(s_0) = a(s_0) + x_0 \right) = 0 = e^{(t-s_0)\Delta} f(a(s_0) + x_0).$$
These three identities tell us a nice story: evaluating the `probabilistic' second differential quotient amounts, in this special case,
merely to evaluating the second differential quotient of
$$ e^{(t-s_0)\Delta} f \qquad \mbox{at the point}~a(s_0) + x_0\qquad \mbox{in direction}~\nu.$$
This leads to the following conclusion: for any smooth, compact domain, the heat equation started with $u(0,x) = f(x)$ inside $\Omega$ and constant boundary conditions $u(t,x) = g(x)$ for $x \in \partial \Omega$ has a solution
$$ u(t,x) = \int_{\Omega} p_t(x,y) f(y) dy + \int_{\partial \Omega} q_t(x,y)g(y) dy,$$
where $q_t(x,y)$ converges to the harmonic measure as $t \rightarrow \infty$ (since $\Omega$ is smooth, regularity of the boundary does not play a role).
From this we can deduce that
$$ \left(\frac{\partial^2 e^{t\Delta} f }{\partial \nu^2} \right)(x) = \int_{\Omega} p_t(x,y) \frac{\partial^2 f}{\partial \nu^2}(y) dy + \int_{0}^{t} \int_{\partial \Omega} \frac{\partial q_s}{\partial s}(x,y) \frac{\partial^2 e^{(t-s) \Delta} f}{\partial \nu^2}(y) dy ds.$$
The second integral can be easily bounded from above by
\begin{align*}
\left| \int_{0}^{t} \int_{\partial \Omega} \frac{\partial q_s}{\partial s}(x,y) \frac{\partial^2 e^{(t-s) \Delta} f}{\partial \nu^2}(y) dy ds \right| &\leq
\int_{0}^{t} \int_{\partial \Omega} \frac{\partial q_s}{\partial s}(x,y) \left| \frac{\partial^2 e^{(t-s)\Delta}f}{\partial \nu^2}(y) \right| dy ds\\
&= \int_{\partial \Omega} q_t(x,y) \max_{0 \leq s \leq t} \left| \frac{\partial^2 e^{(t-s)\Delta} f}{\partial \nu^2}(y) \right| dy.
\end{align*}
In particular, recalling that $u(t,x) \equiv 1$ is a solution of the heat equation with initial data $u(0,x) \equiv 1$ and boundary data $g(x) \equiv 1$, we have that
$$ \int_{\Omega} p_t(x,y) dy + \int_{\partial \Omega} q_t(x,y) dy = 1.$$
This shows that the integral
$$ I = \int_{\partial \Omega} q_t(x,y) \max_{0 \leq s \leq t} \left| \frac{\partial^2 e^{(t-s)\Delta} f}{\partial \nu^2}(y) \right| dy$$
can be bounded by
$$ I\leq \left(1 - \int_{\Omega}p_t(x_0, y) dy \right) \max_{0 \leq s \leq t} \max_{y \in \partial \Omega} \left| \frac{\partial^2 e^{(t-s)\Delta} f}{\partial \nu^2} \right|,$$
where the sign of $\nu$ may be chosen so that it always points inside the domain (the second derivative is unchanged when $\nu$ is replaced by $-\nu$). This is the desired statement. \end{proof}
We note that the last few steps are certainly a bit wasteful and one could obtain more precise estimates if one were to assume additional knowledge about $p_t(\cdot, \cdot)$, $q_t(\cdot, \cdot)$ or the harmonic measure $\omega_{x_0}$.
\section{Proof of Theorem 2}
The proof decouples into the following steps.
\begin{enumerate}
\item First, we show that second derivatives on the boundary are controlled and at most of size $\lesssim_{\partial \Omega} \| \nabla \phi_k\|_{L^{\infty}}$. The implicit constant will only depend on the mean curvature of the boundary $\partial \Omega$.
\item We then use this in combination with Theorem 1. We use time scale $t = \varepsilon \lambda_k^{-1}$ and will show that the entire argument
can be carried out with a sufficiently small $\varepsilon>0$ whose final size only depends on the geometry of $\partial \Omega$ and the dimension $n$. If the second derivatives assume their maximum value in a point $x_0$, then there exists a set $A$ in a $\sqrt{t}-$neighborhood of $x_0$ where the second derivatives are large. $A$ itself is large in the sense of
$$ \int_{A} p_t(x_0, y) dy \geq 1 - 4 \varepsilon.$$
\item This implies that $x_0$ is not too close to the boundary: $d(x_0, \partial \Omega) \geq \sqrt{t}$.
\item Finally, we show that if we consider a ball of radius $\sqrt{t}$ around $x_0$ (and, by the previous step, this ball is fully contained in $\Omega$), then there exists a line segment such that second derivatives are large on most of its length.
\item The Fundamental Theorem of Calculus then shows that the derivatives have to grow very quickly along the line segment and this will lead to a contradiction once the second derivatives are too large: this will show that $\| \nabla \phi_k\|_{L^{\infty}}$ has to be large which then contradicts known bounds.
\end{enumerate}
\subsection{Eigenfunctions on the boundary.}
We start the argument by noting that eigenfunctions $-\Delta \phi_k = \lambda_k \phi_k$ in smooth domains with vanishing Dirichlet boundary conditions on $\partial \Omega$ cannot have a particularly large second derivative on the boundary. We will prove this by expressing the Laplacian in local coordinates at the boundary: more precisely, let $\partial \Omega$ be a hypersurface in $\mathbb{R}^n$ and let $\nu$ be a unit normal vector to $\partial \Omega$. Using $\Delta_{\partial \Omega}$ to denote the Laplacian in the induced metric on $\partial \Omega$ and $\Delta$ to denote the classical Laplacian in $\mathbb{R}^n$, we have the identity
$$ \Delta u = \Delta_{\partial \Omega} u + (n-1)H \frac{\partial u}{\partial n} + \frac{\partial^2 u}{\partial n^2},$$
where $H$ is the mean curvature of the boundary $\partial \Omega$ in that point. This identity is derived, for example, in the book by Sperb \cite[Eq. 4.68]{sperb}.
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\draw [thick] (0,0) to[out= 320, in =190] (4, -1);
\filldraw (2,-1.02) circle (0.04cm);
\draw [->] (2, -1.02) -- (2.18, -0.2);
\node at (1.9, -1.3) {$x$};
\node at (2.4, -0.3) {$\nu$};
\node at (4.5, -1) {$\partial \Omega$};
\end{tikzpicture}
\caption{A point on the boundary and a normal direction.}
\end{figure}
\end{center}
This equation is particularly useful in our case because $\phi_k$ vanishes identically on the boundary and thus $\Delta_{\partial \Omega} \phi_k = 0$. The identity is also frequently used on level sets of $u$ (since then the Laplacian $\Delta_{\partial \Omega} u$ vanishes for the same reason), see for example Kawohl \& Horak \cite{kawohl}. We refer to Reilly \cite{reilly} for a friendly introduction to this identity in low dimensions. Since $\phi_k$ vanishes on the boundary, we have
$$ \frac{\partial^2 \phi_k}{\partial \nu^2} + (n-1) H \frac{\partial \phi_k}{\partial \nu} = 0.$$
However, the mean curvature $H$ is bounded depending only on $\partial \Omega$ because the domain is smooth. Therefore, using the gradient estimate of Hu, Shi \& Xu \cite{hu} on compact Riemannian manifolds with boundary, we get
\begin{align*}
\left| \frac{\partial^2 \phi_k}{\partial \nu^2} \right| &\leq n \| H\|_{L^{\infty}(\partial \Omega)} \left| \frac{\partial \phi_k}{\partial \nu} \right| \lesssim_{\Omega} \| \nabla \phi_k\|_{L^{\infty}} \lesssim_{\Omega} \lambda^{1/2} \| \phi_k\|_{L^{\infty}} \lesssim \lambda_k^{\frac{n+1}{4}}.
\end{align*}
This shows that second derivatives in the normal direction cannot be much larger than first derivatives, since they arise as a combination of first derivatives and the curvature. As for second derivatives in directions orthogonal to the normal direction, we note that, due to the smoothness of $\Omega$, points at distance $\varepsilon$ from a boundary point in a tangential direction are only $\sim \varepsilon^2$ away from the boundary, where the implicit constant depends on the curvature of $\partial \Omega$. Since $\phi_k$ vanishes on $\partial \Omega$, this allows us to bound the second differential quotient by $c_{\Omega} \| \nabla \phi_k\|_{L^{\infty}},$
where $c_{\Omega}$ depends only on the local curvature of $\partial \Omega$.
\subsection{Applying Theorem 1.} Let us fix $x_0 \in \Omega$ and $\nu_0 \in \mathbb{S}^{n-1}$ so that the second derivative in $x_0$ in direction $\nu_0$ is among the largest that can occur, i.e.
$$ \frac{\partial^2 \phi_k}{\partial \nu_0^2}(x_0) = \max_{x \in \Omega, \nu \in \mathbb{S}^{n-1}} \left| \frac{\partial^2 \phi_k}{\partial \nu^2}(x) \right| = c_1 \lambda_k^{\frac{n+3}{4}}.$$
We assume that this largest second directional derivative is positive (for ease of exposition): if it is negative, we consider without loss of generality $-\phi_k$ instead. Our goal is to now deduce a contradiction once $c_1$ is sufficiently large.
Let $\varepsilon$ be a small parameter (our goal will be to show that there is a sufficiently small but positive parameter $\varepsilon > 0$ depending only on $\Omega$ such that all the subsequent arguments work). The first step works for all $\varepsilon>0$: we use Theorem 1 for time $t = \varepsilon \lambda_k^{-1}$ in combination with
$$ e^{t\Delta} \phi_k(x) = e^{-\lambda_k t} \phi_k(x)$$
to obtain
$$ e^{-\varepsilon} \frac{\partial^2 }{\partial \nu_0^2} \phi_k(x_0) \leq \int_{\Omega} p_t(x_0, y) \frac{\partial^2 }{\partial \nu_0^2} \phi_k(y) dy + c_2 \lambda_k^{\frac{n+1}{4}},$$
where the existence of an absolute constant $c_2$ depending only on $\Omega$ follows from \S 3.1. We introduce the set where second directional derivatives in direction $\nu_0$ are `large' (positive and at least half the value of the maximum)
$$ A =\left\{x \in \Omega: \frac{\partial^2\phi_k }{\partial \nu_0^2} (x) \geq \frac{1}{2}\frac{\partial^2 \phi_k}{\partial \nu_0^2} (x_0) \right\}.$$
This allows us to bound the inequality above by replacing the partial derivatives in $y$ by the maximum partial derivative
\begin{align*}
e^{-\varepsilon} \frac{\partial^2 }{\partial \nu_0^2} \phi_k(x_0) &\leq \int_{\Omega} p_t(x_0, y) \frac{\partial^2 }{\partial \nu_0^2} \phi_k(y) dy + c_2 \lambda_k^{\frac{n+1}{4}} \\
&\leq \frac{1}{2} \int_{\Omega \setminus A} p_t(x_0, y) \left(\frac{\partial^2 }{\partial \nu_0^2} \phi_k(x_0)\right) dy \\
&+ \int_{A} p_t(x_0, y) \left(\frac{\partial^2 }{\partial \nu_0^2} \phi_k(x_0) \right) dy + c_2 \lambda_k^{\frac{n+1}{4}}.
\end{align*}
Dividing by the largest derivative, we get
$$ 1 - \varepsilon \leq e^{-\varepsilon} \leq \frac{1}{2} \int_{\Omega \setminus A} p_t(x_0, y) dy + \int_{A} p_t(x_0, y) dy + \frac{c_2}{c_1} \frac{1}{\sqrt{\lambda_k}}.$$
Recall that
$$ \int_{\Omega \setminus A} p_t(x_0, y) dy + \int_{A} p_t(x_0, y) dy = \int_{\Omega} p_t(x,y) dy \leq 1.$$
This shows that once $\lambda_k$ is sufficiently large, say, so large that
$$ \frac{c_2}{c_1} \frac{1}{\sqrt{\lambda_k}} \leq \varepsilon$$
we have
$$ 1 - 2\varepsilon \leq \frac{1}{2} \int_{\Omega \setminus A} p_t(x_0, y) dy + \int_{A} p_t(x_0, y) dy.$$
However, if $1 - 2\varepsilon \leq a/2 + b$ and $a+b \leq 1$, then $a \leq 4\varepsilon$ and thus
$$ \int_{A} p_t(x_0, y) dy \geq 1 - 4 \varepsilon.$$
This means that if we start a Brownian motion in $x_0$ and let it run for $t = \varepsilon \lambda_k^{-1}$ units of time, then the likelihood of never hitting the boundary and ending up in the set $A$ is actually quite large. We note that this argument can be used for each $\varepsilon > 0$ at the cost of excluding finitely many initial eigenfunctions (whose number depends on $\varepsilon$ in the manner outlined above).
\subsection{$x_0$ is far away from the boundary.} This section uses the inequality
$$ \int_{A} p_t(x_0, y) dy \geq 1 - 4 \varepsilon$$
to prove that $x_0$ is at least $c_3 \cdot \sqrt{t}$ away from the boundary. Indeed, we obtain a slightly stronger result and show that we could choose $c_3 = 1$ for $\varepsilon$ sufficiently small (depending only on $\Omega$). Since the boundary is smooth and compact, there exists a length scale $\delta_0$ such that at the scale of $\delta_0$ (or below) the boundary behaves roughly like a hyperplane. Suppose $d(x_0, \partial \Omega) \leq 0.01 \sqrt{t} \ll \delta_0$. Then the geometry looks roughly as in Fig. \ref{fig:wavel}.
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}[scale=1.3]
\draw [thick] (0,0) to[out=70, in=280] (0,3);
\filldraw (1,1.5) circle (0.04cm);
\node at (1.3, 1.5) {$x_0$};
\draw[thick] (1, 1.5) circle (1.5cm);
\draw [<->] (1, 1.6) -- (1, 2.9);
\node at (1.3, 2.4) {$\sqrt{t}$};
\node at (-0.1, 2) {$\partial \Omega$};
\draw [<->] (0.3, 1.5) -- (0.9, 1.5);
\node at (0.8, 1.2) { $d(x_0, \partial \Omega)$};
\end{tikzpicture}
\caption{A maximal point being closer than $\sqrt{t}$ to the boundary.}
\label{fig:wavel}
\end{figure}
\end{center}
Since $\partial \Omega$ is smooth and we operate at small length scales (relative to $\partial \Omega$), we know that $d(x_0, \partial \Omega) \leq 0.01 \sqrt{t}$ implies that $B_{\sqrt{t}}(x_0)$ intersects the boundary $\partial \Omega$ in a large segment. This also shows that a typical Brownian motion run for $t$ units of time will hit the boundary with nontrivial probability bounded away from 0 which contradicts our lower bound on the survival probability of Brownian motion up to time $t$
$$ 1 - 4 \varepsilon \leq \int_{A} p_t(x_0, y) dy \leq \int_{\Omega} p_t(x_0, y) dy.$$
It remains to make this more quantitative. There is a simple way of bounding the survival probability of a Brownian motion as follows: let us assume that $x_0$ is at the origin and that the boundary $\partial \Omega$ is oriented so that it is given by the hyperplane $\left\{x \in \mathbb{R}^n: x_1 = d \right\}$ at distance $d$ from the origin. The boundary $\partial \Omega$ is \textit{not} a hyperplane, but at sufficiently small length scales it behaves effectively like one; the curvature acts as a lower-order term (which we account for in the subsequent argument). The main question is then whether, within $t$ units of time, the first component of the Brownian motion is ever larger than $d$.
The first component of an $n-$dimensional Brownian motion is a one-dimensional Brownian motion and we can apply the reflection principle to conclude
$$ \mathbb{P}\left( \max_{0 \leq s \leq t} B(s) \geq d \right) = \mathbb{P}\left( |B(t)| \geq d \right).$$
This refines our estimate to
\begin{align*}
1 - 4 \varepsilon &\leq \int_{A} p_t(x_0, y) dy \leq \int_{\Omega} p_t(x_0, y) dy \\
&\leq 1 - \mathbb{P}\left[ |B(t)| \geq 2 \cdot d(x_0, \partial \Omega) \right],
\end{align*}
where the factor of 2 compensates for the higher order term coming from the curvature of $\partial \Omega$ (and thus valid for $t$ sufficiently small depending only on $\partial \Omega$). Thus
$$ \mathbb{P}\left[ |B(t)| \geq 2 \cdot d(x_0, \partial \Omega) \right] \leq 4 \varepsilon.$$
$B(t)$ is distributed like the Gaussian $\mathcal{N}(0,t)$ and thus, by rescaling,
$$ \mathbb{P}\left[ |B(t)| \geq 2 \cdot d(x_0, \partial \Omega) \right] = \mathbb{P}\left[ |B(1)| \geq \frac{2}{\sqrt{t}} \cdot d(x_0, \partial \Omega) \right] \leq 4 \varepsilon.$$
Knowing that this quantity is less than $4\varepsilon$ will lead to a lower bound on $d(x_0, \partial \Omega)$. By symmetry, $\mathbb{P}(B(1) \geq z) \leq 2\varepsilon$ for $z = \frac{2}{\sqrt{t}} \cdot d(x_0, \partial \Omega)$. The standard tail bound, valid for all $z>0$,
$$ \mathbb{P} (B(1) \geq z) \geq \frac{1}{\sqrt{2\pi}} \left(\frac{1}{z} - \frac{1}{z^3} \right)e^{-z^2/2}$$
implies, for $z \geq 2$,
$$ 2\varepsilon \geq \mathbb{P} (B(1) \geq z) \geq \frac{1}{2\pi z} e^{-z^2/2}.$$
We want to argue that this forces
$$ z \geq z_0 = \sqrt{\log{\frac{1}{\varepsilon}}}$$
because plugging in $z_0$ leads to
$$ \frac{1}{2\pi z_0} e^{-z_0^2/2} = \frac{1}{2\pi} \frac{1}{\sqrt{\log{\frac{1}{\varepsilon}}}} e^{-z_0^2/2} = \frac{1}{2\pi} \frac{\sqrt{\varepsilon}}{\sqrt{\log{\frac{1}{\varepsilon}}}} $$
which is larger than $2 \varepsilon$ for all $0 < \varepsilon \leq 0.0009$. Thus, for $\varepsilon$ sufficiently small (an absolute constant here), we have the desired inequality; note that $\frac{1}{2\pi z}e^{-z^2/2}$ is decreasing in $z$.
This forces
$$z \geq \sqrt{\log{\frac{1}{\varepsilon}}} $$
and thus
$$ d(x_0, \partial \Omega) \geq \frac{ \sqrt{t}}{4} \cdot \sqrt{\log{\frac{1}{\varepsilon}}}. $$
Recalling that $t = \varepsilon \lambda_k^{-1}$, we have
$$ d(x_0, \partial \Omega) \geq \frac{1}{4} \cdot \sqrt{\varepsilon} \cdot \sqrt{\log{\frac{1}{\varepsilon}}} \cdot \lambda_k^{-1/2}.$$
For $\varepsilon \leq e^{-16}$, it is at least $\varepsilon^{1/2} \lambda_k^{-1/2} = \sqrt{t}$ away from the boundary. We will not, strictly speaking, need this and could absorb any constant that arises here in the final step.
Similar arguments have already been used in other settings in the literature, often to prove bounds on the location of the maximum of the solution \cite{bogi, lierl, rachh} and also in the context of Hermite-Hadamard inequalities \cite{jianfeng}.
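As a quick numerical sanity check (independent of the proof), the threshold $\varepsilon \leq 0.0009$ used above can be verified on a grid; the sketch below simply evaluates the two sides of the inequality.
\begin{verbatim}
# Numerical sanity check of the claim above:
# (1/(2*pi)) * sqrt(eps) / sqrt(log(1/eps)) exceeds 2*eps for all eps <= 0.0009.
import numpy as np

eps = np.linspace(1e-8, 9e-4, 200_000)
lhs = np.sqrt(eps) / (2 * np.pi * np.sqrt(np.log(1.0 / eps)))
assert np.all(lhs >= 2 * eps)      # holds on the whole grid
print(lhs[-1], 2 * eps[-1])        # the two sides are nearly equal at eps = 0.0009
\end{verbatim}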
\subsection{Parts of $A$ are close to $x_0$.} The next step is to argue that
$$ \int_{A} p_t(x_0, y) dy \geq 1 - 4 \varepsilon$$
implies that $A$ has a large intersection with the ball $B_{\sqrt{t}}(x_0)$. We know from the previous section that, for $\varepsilon$ sufficiently small, the entire ball is contained in $\Omega$. Domain monotonicity of the heat kernel allows us to compare the heat kernel $p_t(x_0, \cdot)$ to the heat kernel in $\mathbb{R}^n$, which is strictly larger. This implies
$$ 1 - 4\varepsilon \leq \int_{A} p_t(x_0, y) dy \leq \int_{A} \frac{1}{(4\pi t)^{n/2}} \exp\left( - \frac{\|x_0 -y\|^2}{4t} \right) dy \leq 1.$$
Hence, since the Euclidean heat kernel has total integral 1,
$$ \int_{A^c} \frac{1}{(4\pi t)^{n/2}} \exp\left( - \frac{\|x_0 -y\|^2}{4t} \right) dy \leq 4 \varepsilon.$$
code degree. As a result, the standard belief propagation (BP) decoder can be used to jointly decode all the users at the destination. We analyze the decoding performance through density evolution analysis and show that the average LLR of each user is proportional to its transmit SNR. Simulation results show that the proposed scheme can approach the sum-rate capacity of the Gaussian multiple access channel over a wide range of SNRs and can be effectively applied to systems with a large number of users due to the linear complexity of AFC encoding and decoding. Throughout the paper, we use boldface letters to denote vectors, where the $i^{th}$ entry of vector \textbf{v} is denoted by $v_i$. Matrices are represented by boldface capital letters, e.g., $\textbf{G}$, where $g_{i,j}$ is the $j^{th}$ entry in the $i^{th}$ row of $\textbf{G}$, and $\textbf{G}^{'}$ is the transpose of \textbf{G}.
The rest of the paper is organized as follows. In Section II, we present the system model. The proposed multiple access scheme is presented in Section III. Section IV presents the decoding algorithm and the asymptotic analysis of the belief propagation decoding of the MA-AFC scheme based on the density evolution analysis. Simulation results are shown in Section V, and finally conclusions are drawn in Section VI.
\section{System Model}
We consider a communication system consisting of $M$ users, U$_1$, U$_2$,..., U$_M$, of which a subset $\mathcal{S}\ne \emptyset$ is active. That is, only the users in the set $\mathcal{S}$ transmit to a common destination, $D$, as shown in Fig. \ref{sysmodel}. Each active user has $k$ binary information symbols to be delivered to the destination. The transmission model is time slotted, where at each time slot $i\ge 1$, an active user U$_j$ transmits a rateless coded symbol $u_{i,j}$ to the destination. The received signal $y_i$ at the destination at time instant $i$ is given by:
\begin{align}
\textstyle y_i=\sum_{j\in\mathcal{S}}h_ju_{i,j}+n_i,
\end{align}
where $h_j$ denotes the channel coefficient for U$_j$ and $\{n_i\}_{i=1}^{\infty}$ is a sequence of i.i.d. Gaussian random variables with zero mean and unit variance. The set of active users, $\mathcal{S}$, is known to the receiver; however, the transmitters only know whether they belong to $\mathcal{S}$ or not.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.32]{Systemmodel.pdf}
\caption{M-User multiple access channel. Black circles show active users.}
\label{sysmodel}
\end{figure}
From an information theoretic point of view, for an $M$-user Gaussian MAC with power constraints $(P_1,P_2,...,P_M)$ and zero-mean, unit-variance AWGN, rates $R_i$ can be achieved by the active users U$_i$ whenever:
\begin{align}
\label{sumratecap}
\textstyle R_{\mathcal{S}}\triangleq\sum_{i\in\mathcal{S}}R_i<\frac{1}{2}\log_2(1+\sum_{i\in\mathcal{S}}P_i),
\end{align}
where $R_{\mathcal{S}}$ is defined as the sum-rate of active users. Moreover, the upper bound can be achieved when the output signal distribution of each active user follows the Gaussian distribution $\mathcal{N}(0,P_i)$ \cite{EIT}.
\section{Multiple Access Analog Fountain Codes}
Analog fountain codes (AFC) have been recently proposed in \cite{MahyarLetter} as an effective rateless transmission scheme to approach the capacity of wireless channels. AFC is rateless in nature as a potentially limitless number of coded symbols can be generated, which enables the transmitter to effectively adapt to unknown channel conditions. In this section, we first briefly introduce the AFC code and then a general extension of AFC codes, denoted by multiple access analog fountain code (MA-AFC), is proposed.
\subsection{Analog Fountain Codes}
In AFC, the entire message of $k$ binary symbols is first BPSK modulated to obtain a vector of modulated information symbols, \textbf{b}$_{1\times k}$. To generate the $i^{th}$ coded symbol $u_i$, $i=1,2,...$, an integer $d$, called the \emph{degree}, is first drawn from a predefined probability distribution, called the \emph{degree distribution}. Then, $d$ randomly selected modulated information symbols are linearly combined with real weight coefficients to generate one coded symbol. For simplicity, we assume that the degree of each coded symbol is fixed, denoted by $d_c$ in this paper, and that weight coefficients are chosen from a finite \emph{weight set}, $\mathcal{W}_s$, with $f\ge d_c$ positive real members, as follows:
\begin{align}
\mathcal{W}_s=\{w_i\in \mathbb{R}^{+}|i=1,2,...,f\},
\end{align}
where $\mathbb{R}^{+}$ is the set of positive real numbers and $\sigma^2_w\triangleq\frac{1}{f}\sum_{j=1}^{f}w_j^2$ is defined as the average weight set energy. Let $g_{i,j}$ denote the weight coefficient assigned to the $j^{th}$ modulated information symbol in generating the $i^{th}$ coded symbol. Clearly, $g_{i,j}\in\mathcal{W}_s$ if $b_j$ is selected in generating $u_i$; otherwise $g_{i,j}=0$. Let \textbf{G}$_{m\times k}$ denote the matrix of weight coefficients, referred to as the \emph{generator matrix}, where the $j^{th}$ element in the $i^{th}$ row of \textbf{G} is $g_{i,j}$. The AFC encoding process can then be written in matrix form as follows:
\begin{align}
\label{afcmatrixencoder}
\textbf{u}=\textbf{Gb}',
\end{align}
where \textbf{u} is an $m \times 1$ vector of coded symbols and $m$ is the number of coded symbols. By considering information and coded symbols as variable and check nodes, respectively, the encoding process of AFC can be further described by a weighted bipartite graph as in Fig. \ref{graph}.
As shown in \cite{MahyarLetter}, if information symbols are selected uniformly at random in the encoding process of an AFC, some information symbols may not be connected to any coded symbol even when the number of coded symbols is large, which leads to a decoding error floor. To avoid such an error floor, we have proposed a modified encoder for AFC that maximizes the minimum variable node degree. This way, to generate each coded symbol, $d_c$ information symbols are randomly selected among those which currently have the smallest degrees. The entire data is also precoded using a high-rate LDPC code, which has been shown to significantly reduce the error floor in AFC codes \cite{MahyarLetter}. Further details of the AFC code design can be found in \cite{MahyarLetter}.
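For illustration, a minimal Python sketch of this encoder (without the LDPC precode) is given below; the code degree, the weight set, and the BPSK mapping are illustrative assumptions rather than the values used in our simulations.
\begin{verbatim}
# Minimal sketch of AFC encoding (without the high-rate LDPC precode): each
# coded symbol combines d_c BPSK-modulated information symbols, chosen among
# those with the currently smallest variable-node degree, with weights drawn
# from the weight set W_s.  d_c, W_s and the bit-to-BPSK mapping below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def afc_encode(bits, m, d_c=2, weight_set=(0.8, 0.6, 0.4, 0.2)):
    k = len(bits)
    b = 1.0 - 2.0 * np.asarray(bits)       # assumed BPSK mapping: 0 -> +1, 1 -> -1
    G = np.zeros((m, k))
    var_degree = np.zeros(k, dtype=int)    # current degree of each information symbol
    for i in range(m):
        # pick d_c information symbols among those with the smallest current degree
        order = np.lexsort((rng.random(k), var_degree))
        idx = order[:d_c]
        G[i, idx] = rng.choice(weight_set, size=d_c)
        var_degree[idx] += 1
    u = G @ b                              # coded symbols, u = G b' as in Eq. (4)
    return u, G

bits = rng.integers(0, 2, size=10)
u, G = afc_encode(bits, m=15)
print(u[:5])
\end{verbatim}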
\begin{figure}[!t]
\centering
\includegraphics[scale=0.25]{AFCbipartite.pdf}
\caption{Weighted bipartite graph of an AFC with $d_c=2$.}
\label{graph}
\end{figure}
\subsection{Multiple Access Analog Fountain Codes}
Each coded symbol in AFC is a linear combination of modulated information symbols with real weight coefficients. Thus, the sum of two or more AFC coded symbols is still an AFC coded symbol with a larger degree. This motivates us to consider AFCs in a multiple access scenario, where the sum of coded symbols from various users is received at the destination; thus, forming an equivalent AFC code at the destination.
In the proposed scheme, referred to as multiple access analog fountain code (MA-AFC), each active user $\text{U}_j$ uses an AFC with degree $d_j\ge1$ and a weight set $\mathcal{W}_s$ to generate a potentially limitless number of AFC coded symbols. Let $\textbf{u}^{(j)}_{m\times 1}$ denote the vector of coded symbols generated by U$_j$, and \textbf{G}$^{(j)}_{m\times k}$ denote U$_j$'s generator matrix, where $m$ is the number of coded symbols. Then at time instant $i$, the received signal at the destination, $y_i$, is given by:
\begin{align}
\label{recsig}
\textstyle y_i=\sum_{j\in\mathcal{S}}h_{j}u^{(j)}_{i}+n_i.
\end{align}
According to (\ref{afcmatrixencoder}), we have $u^{(j)}_{i}=\sum_{\ell=1}^{k}g^{(j)}_{i,\ell}b^{(j)}_{\ell}$; thus, (\ref{recsig}) can be rewritten as follows:
\begin{align}
\label{recsig2}
\textstyle \textbf{y}=\sum_{j\in\mathcal{S}}h_j\textbf{G}^{(j)}\textbf{b}^{(j)'}+\textbf{n},
\end{align}
where $\textbf{b}^{(j)}$ is the information sequence of U$_j$. Eq. (\ref{recsig2}) can be further simplified as follows:
\begin{align}
\label{finalMAAFCmatrix}
\textbf{y}=\textbf{G}\textbf{b}^{'}+\textbf{n},
\end{align}
where $\textbf{G}\triangleq[h_1\textbf{G}^{(1)}|h_2\textbf{G}^{(2)}|...|h_N\textbf{G}^{(N)}]$ and $\textbf{b}\triangleq[\textbf{b}^{(1)}|\textbf{b}^{(2)}|...|\textbf{b}^{(N)}]$. This clearly shows that the received signal at the destination can be seen as a coded symbol of an equivalent AFC, where modulated information symbols are chosen from all active users. It is important to note that the weight coefficients of the equivalent AFC are the weight coefficients of the original AFC codes at each user, multiplied by the gain of the respective channel. More specifically, the equivalent weight set at the destination can be shown by $\mathcal{W}_e\triangleq\bigcup_{i\in\mathcal{S}}\{h_i\mathcal{W}_s\}$ and the code degree $d_e$ of the equivalent AFC code at the destination is $d_e\triangleq\sum_{i\in\mathcal{S}}d_i$. Fig. \ref{graph211} and Fig. \ref{graph311} show the original AFC code at each user and the equivalent bipartite graph of the AFC code at the destination, respectively, when the number of users is 2, i.e., $M=2$.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.35]{graph2.pdf}
\caption{Bipartite graph of original AFC code graphs.}
\label{graph211}
\end{figure}
\subsection{Weight Set Design for MA-AFC}
As mentioned before, to achieve the capacity of the Gaussian MAC, the output signal distribution of each user should follow the Gaussian distribution. In this section, we propose an optimization problem to find the optimum weight set so that the AFC coded symbols of each user closely follow a Gaussian distribution.
Let us consider an AFC with the code degree $d_c$, where information symbols are selected uniformly at random to generate each coded symbol. For a coded symbol $u_i$ and a given real number $c$, we have
\begin{align}
\textstyle p(u_i=c)=p\left(\sum_{\ell\in\mathcal{M}(i)}g_{i,\ell}b_{\ell}=c\right),
\end{align}
where $\mathcal{M}(i)$ is the set of variable nodes connected to the check node $u_i$. Since each information symbol is either -1 or 1 with the same probability of 0.5, and weights are randomly chosen from $\mathcal{W}_s$, then $s_{i,\ell}\triangleq g_{i,\ell}b_{\ell}$ is uniformly distributed as follows:
\begin{align}
p(s_{i,\ell}=v)=\frac{1}{2f}, ~ |v|\in\mathcal{W}_s.
\end{align}
Furthermore, the mean and variance of $s_{i,\ell}$ are $m_s=0$ and $\sigma^2_{s}=\frac{1}{f}\sum_{i=1}^{f}w_i^2=\sigma^2_w$, respectively. Since the $s_{i,\ell}$'s are independent and identically distributed random variables, $u_i$ has mean $0$ and variance $d_c\sigma^2_{s}$. Moreover, when $d_c$ is relatively large, the central limit theorem implies that $u_i$ is approximately zero-mean Gaussian with variance $d_c\sigma^2_{w}$. For a small value of $d_c$, however, we need to find the optimum weight set in order for the signal distribution to approach the Gaussian distribution. Therefore, we need to find weight coefficients such that the following condition is satisfied for given $\epsilon>0$ and $\delta>0$:
\begin{align}
\label{WeightOptProb}
|p_{\delta}^{(i)}-q_{\delta}^{(i)}|^2\le\epsilon,~~\text{for}~i=1,2,...,i_{max}
\end{align}
where $\textstyle p_{\delta}^{(i)}=p\left((i-1)\delta\le \sum_{j=1}^{d}b_jw_j<i\delta\right)$, $\textstyle q_{\delta}^{(i)}=Q\left((i-1)\delta\right)-Q(i\delta)$, $i_{max}$ is chosen to be large, typically larger than 10, and $\textstyle Q(x)=\frac{1}{\sqrt{2\pi}}\int_x^\infty e^{-z^2/2}dz$. When condition (\ref{WeightOptProb}) is satisfied, the output signal distribution is within $\sqrt{\epsilon}$ of the Gaussian distribution on bins of width $\delta$. This optimization problem can be solved numerically for different values of $\delta$ and $\epsilon$. For instance, for $f=8$, $\delta=0.2$ and $\epsilon=10^{-4}$, the optimum weight set is $\{\frac{1}{2},\frac{1}{3},\frac{1}{5},\frac{1}{7},\frac{1}{11},\frac{1}{13},\frac{1}{17},\frac{1}{19}\}$. Since the sum of Gaussian distributed signals is still Gaussian, the distribution of the MA-AFC coded symbols is also Gaussian as long as the AFC coded symbols of each active user follow a Gaussian distribution.
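A simple Monte Carlo check of condition (\ref{WeightOptProb}) for a candidate weight set might look as follows; the degree $d$, the bin width $\delta$, and the assumption that the weights are drawn uniformly with replacement from $\mathcal{W}_s$ are illustrative choices.
\begin{verbatim}
# Monte Carlo check of condition (10): compare the empirical bin probabilities
# of sum_j b_j w_j with the Gaussian bin masses Q((i-1)*delta) - Q(i*delta).
# The degree d, delta, i_max and the uniform-with-replacement sampling of the
# weights from W_s are illustrative assumptions.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
W_s = np.array([1/2, 1/3, 1/5, 1/7, 1/11, 1/13, 1/17, 1/19])
d, delta, i_max, trials = 8, 0.2, 12, 200_000

Q = lambda x: 0.5 * erfc(x / sqrt(2.0))          # Gaussian tail function

b = rng.choice([-1.0, 1.0], size=(trials, d))    # BPSK symbols
w = rng.choice(W_s, size=(trials, d))            # weight coefficients
s = (b * w).sum(axis=1)

worst = 0.0
for i in range(1, i_max + 1):
    p_i = np.mean(((i - 1) * delta <= s) & (s < i * delta))   # empirical p_delta^(i)
    q_i = Q((i - 1) * delta) - Q(i * delta)                    # Gaussian mass q_delta^(i)
    worst = max(worst, (p_i - q_i) ** 2)
print("max squared deviation:", worst)
\end{verbatim}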
\begin{figure}[!t]
\centering
\includegraphics[scale=0.38]{graph3.pdf}
\caption{Bipartite graph of Equivalent AFC at the destination.}
\label{graph311}
\end{figure}
\section{Decoding of MA-AFC}
Let us consider the bipartite graph of
an MA-AFC in Fig. \ref{graph311}. The optimum decoding algorithm for MA-AFC produces the posterior probability distribution of \textbf{b}, i.e., $P_{\textbf{b}|\textbf{y},\textbf{G}}(.|\textbf{y},\textbf{G})$, which is a sufficient statistic for \textbf{b} in an information theoretic sense, where $\textbf{y}=\textbf{G}\textbf{b}^{'}+\textbf{n}$ as in (\ref{finalMAAFCmatrix}). Here, we present the belief propagation (BP) decoding algorithm for MA-AFC, followed by an asymptotic performance analysis of the BP decoder.
\subsection{Belief Propagation Decoding of MA-AFC}
Belief propagation (BP) is an efficient message passing algorithm for estimating marginal posterior distributions based on the observations. In each iteration of BP, messages are first transmitted from variable to check nodes, and then new messages are generated at each check node and sent back to each of its connected variable nodes.
Since the variable nodes are binary, the common choice for the messages is the log-likelihood ratio (LLR) \cite{guo2008multiuser}. Let $L_{r\rightarrow \ell}^{(t)}$ denote the message from variable node $b_r$ to check node $y_{\ell}$ and $L_{\ell\rightarrow r}^{(t)}$ the message in the reverse direction at the $t^{th}$ iteration of the BP algorithm. The update rules of the BP algorithm at the $t^{th}$ iteration ($t\ge1$) can then be written as follows \cite{guo2008multiuser}:
\begin{align}
\label{update1}
L_{\ell\rightarrow r}^{(t)}&=\text{log}\frac{P\left\{y_{\ell}|b_r=+1,\textbf{G},\{L_{r'\rightarrow \ell}^{(t-1)}\}_{r'\in\mathcal{M}(\ell)\backslash r}\right\}}{P\left\{y_{\ell}|b_r=-1,\textbf{G},\{L_{r'\rightarrow \ell}^{(t-1)}\}_{r'\in\mathcal{M}(\ell)\backslash r}\right\}},\\
L_{r\rightarrow \ell}^{(t)}&=\sum_{\ell'\in\mathcal{N}(r)\backslash \ell}L_{\ell'\rightarrow r}^{(t)},
\end{align}
where $\mathcal{M}(\ell)\backslash r$ is the set of all variable nodes connected to check node $y_{\ell}$ except variable node $b_r$, and $\mathcal{N}(r)\backslash \ell$ is the set of all check nodes connected to variable node $b_r$ except check node $y_{\ell}$. Assuming the noise variance is 1, we have
\begin{multline}
\nonumber \textstyle P\left\{y_{\ell}|b_r=x,\textbf{G},\{L_{r'\rightarrow \ell}^{(t-1)}\}_{r'\in\mathcal{M}(\ell)\backslash r}\right\}\\
=\sum_{(b_{r'})_{r'\in\mathcal{M}(\ell)}}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(y_{\ell}-\sum_{r'\in\mathcal{M}(\ell)}g_{\ell,r'}b_{r'}\right)^2}\prod_{r'\in\mathcal{M}(\ell)}p_X(b_{r'}),
\end{multline}
where $x\in\{-1,+1\}$ and $p_X(b_{r'}=+1)\propto \exp(L_{r'\rightarrow \ell}^{(t-1)})$. The initial messages $L_{r'\rightarrow \ell}^{(0)}$ are the prior LLRs of the modulated information symbols, which are $0$ since the information symbols are equiprobable binary random variables. These iterations are repeated until a predefined number of iterations has been performed or convergence is achieved. After a predefined number of iterations $T$, the final LLRs are calculated as $L_{r}^{(T)}=\sum_{\ell'\in\mathcal{N}(r)}L_{\ell'\rightarrow r}^{(T)}$. A variable node $b_r$ is then decoded as $1$ if $L_{r}^{(T)}>0$; otherwise, it is decoded as $0$.
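For concreteness, a minimal Python sketch of the check-to-variable update in (\ref{update1}) is given below, computed by exact marginalization over the neighboring variable nodes (feasible only for small check-node degrees). The unit noise variance follows the text; the function and variable names are hypothetical.
\begin{verbatim}
import itertools
import math

def check_to_var_llr(y, g, llr_in, r):
    """L_{l->r}: marginalize over all neighbors of check node l except b_r,
    assuming unit noise variance."""
    idx = [j for j in range(len(g)) if j != r]
    num = den = 0.0
    for cfg in itertools.product([-1.0, 1.0], repeat=len(idx)):
        w, partial = 1.0, 0.0
        for j, b in zip(idx, cfg):
            w *= 1.0 / (1.0 + math.exp(-b * llr_in[j]))  # prior from incoming LLR
            partial += g[j] * b
        num += w * math.exp(-0.5 * (y - (partial + g[r])) ** 2)   # b_r = +1
        den += w * math.exp(-0.5 * (y - (partial - g[r])) ** 2)   # b_r = -1
    return math.log(num / den)

def var_to_check_llr(llr_in, l):
    """L_{r->l}: sum of incoming check-to-variable LLRs except the one from y_l."""
    return sum(v for j, v in enumerate(llr_in) if j != l)
\end{verbatim}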
\subsection{Asymptotic Performance Analysis of MA-AFC based on the BP Decoding}
A commonly used analytical tool for analyzing a BP decoder is density evolution, which tracks the evolution of the message distributions through the iterative decoding process. In this paper, we focus on the density evolution analysis in the asymptotic case, when the numbers of variable and check nodes go to infinity.
Let us refer to the information symbols of U$_i$ as Type-X$_i$ variable nodes in the bipartite graph of a MA-AFC code, for $i=1,2,...,N$. The following lemma gives an approximation of the LLRs for the various types of variable nodes in each iteration of the BP decoder in the asymptotic case, when the numbers of variable and check nodes are very large.
\newtheorem{lemma}{Lemma}
\begin{lemma}
\label{GeneralDensity}
Let $L^{(t)}_{\text{X}_i\rightarrow \ell}$ denote the message passed from a Type-X$_i$ variable node to a check node $y_{\ell}$ in the $t^{th}$ iteration of the BP decoding algorithm. Then, $L^{(t)}_{\text{X}_i\rightarrow \ell}$ can be approximated by a normal random variable with mean $m^{(t)}_{i}$ and variance $2m^{(t)}_{i}$, where $m^{(t)}_{i}$ can be calculated as follows:
\begin{align}
\label{MAAFCM}
m^{(t)}_{i}=h_i^2\sigma^2_w\frac{d_im}{k}\frac{2}{1+\sigma^2_{Y}},
\end{align}
where $\sigma^2_{Y}=\sum_{j\in\mathcal{S}}h_j^2d_j\sigma^2_w S\left(m^{(t)}_{j}\right)$ and $S(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\left(1-\tanh(x-y\sqrt{x})\right)e^{-\frac{y^2}{2}}dy$.
\end{lemma}
The proof of this lemma is provided in Appendix \ref{prooflemma}. The BER for each user after $t$ iterations of the BP decoder can then be calculated as $Q\left(\sqrt{m_i^{(t)}}\right)$ \cite{guo2008multiuser}. Let $P_{e,i}$ denote the BER of U$_i$ at the destination; then, using Lemma \ref{GeneralDensity}, for $i,j\in\mathcal{S}$ we have:
\begin{align}
\label{mapprox}
\frac{m_i^{(t)}}{m_j^{(t)}}=\frac{h_i^2d_i}{h_j^2d_j}.
\end{align}
Accordingly, we have:
\begin{align}
\label{pe1pp}
P_{e,i}=Q\left(Q^{-1}(P_{e,j})\sqrt{\frac{h_i^2d_i}{h_j^2d_j}}\right).
\end{align}
This shows that an active user with a higher product of channel gain and code degree has a lower BER than the other active users and can thus be decoded earlier at the destination.
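A short numerical sketch of the recursion in Lemma \ref{GeneralDensity} is given below; it is only an illustration of how (\ref{MAAFCM}) and the resulting BERs can be evaluated. The previous-iteration values are used inside $\sigma^2_Y$, and the parameters (channel gains, degrees, $\sigma^2_w$, $m/k$) are hypothetical.
\begin{verbatim}
import math
import numpy as np
from scipy.integrate import quad

def S(x):
    # S(x) from Lemma 1, evaluated by numerical integration.
    if x <= 0.0:
        return 1.0
    f = lambda y: (1.0 - math.tanh(x - y * math.sqrt(x))) * math.exp(-0.5 * y * y)
    return quad(f, -20.0, 20.0)[0] / math.sqrt(2.0 * math.pi)

def density_evolution(h, d, sigma_w2, m_over_k, iters=50):
    h, d = np.asarray(h, float), np.asarray(d, float)
    m = np.zeros_like(h)                       # m_i^{(0)} = 0
    for _ in range(iters):
        sigma_Y2 = float(np.sum(h**2 * d * sigma_w2 *
                                np.array([S(x) for x in m])))
        m = h**2 * sigma_w2 * d * m_over_k * 2.0 / (1.0 + sigma_Y2)
    return m

Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
m = density_evolution(h=[1, 2, 3, 4], d=[4, 4, 4, 4],
                      sigma_w2=0.05, m_over_k=2.0)
print("per-user BER:", [Q(math.sqrt(v)) for v in m])
\end{verbatim}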
\section{Simulation Results}
For simulation purposes, we consider $k=1000$ and an AFC code with the weight set $\mathcal{W}_s=\{1/2,1/3,1/5,1/7,1/11,1/13,1/17,1/19\}$. Let us first investigate a 2-user MAC, where both users use AFC with the same weight set. The channel gains and code degrees are $h_1=h_2=1$ and $d_1=d_2=4$, respectively. Fig. \ref{sim1} shows the sum-rate achieved by the MA-AFC approach versus the received SNR at the destination at a target BER of $10^{-4}$. As can be seen in Fig. \ref{sim1}, MA-AFC closely approaches the sum-rate capacity of the Gaussian MAC in (\ref{sumratecap}), where the users transmit with the same power. We have also shown the rate achieved by each user at SNR$=20$ dB in Fig. \ref{sim1}, which is very close to the capacity of the Gaussian MAC.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.24]{FinalAchMAAFC.pdf}
\caption{Achievable sum-rate of MA-AFC approach versus the received SNR at the destination. The target BER is $10^{-4}$.}
\label{sim1}
\end{figure}
Fig. \ref{sim2} shows the bit error rate (BER) versus the inverse sum-rate for a 4-user MAC at $30$ dB received SNR at the destination, where $d_1=d_2=d_3=d_4=4$, and $h_1=1$, $h_2=2$, $h_3=3$, and $h_4=4$. As can be seen in this figure, U$_4$, which has the highest channel gain, has a lower BER than the other users. Furthermore, U$_1$ has the highest BER as it has the lowest product of channel gain and code degree among the users. This validates the result of Lemma 1, since the user with the largest product of channel gain and code degree has the lowest BER at the destination. The average BER for each user obtained from the density evolution analysis in Lemma 1 is also shown in Fig. \ref{sim2}, and is very close to the simulation results.
\section{Conclusions}
In this paper, we proposed a novel rateless multiple access scheme based on capacity-approaching analog fountain codes to approach the sum-rate capacity of the multiple access channel over a wide range of SNRs. In the proposed scheme, all users utilize the same AFC code and transmit at the same time over the same channel, thus forming an equivalent analog fountain code at the destination. The destination then performs joint decoding to recover all users' information symbols. We further analyzed the proposed multiple access analog fountain code by using the density evolution technique. Simulation results show that the proposed approach can approach the sum-rate capacity of the Gaussian multiple access channel over a wide range of SNRs.
\appendices
\section{Proof of Lemma \ref{GeneralDensity}}
\label{prooflemma}
Let us first define $Y_j\triangleq\sum_{r'\in\mathcal{M}(j)\backslash r}g_{j,r'}b_{r'}$ as the $j^{th}$ coded symbol without the $r^{th}$ information symbol. In \cite{guo2008multiuser}, it has been shown that the LLR passed from check node $y_{j}$ to variable node $b_r$ in the $t^{th}$ iteration of the BP algorithm, $L_{j\rightarrow r}^{(t)}$, can be approximated as follows:
\begin{align}
\nonumber L_{j\rightarrow r}^{(t)}=2\frac{f_1(y_j)}{f_0(y_{j})}g_{j,r},
\end{align}
where $f_m(y)$ is defined as follows for $m\in\{0,1\}$:
\begin{multline}
\textstyle \nonumber f_m(y)=\sum_{(b_{r'})_{r'\in\mathcal{M}(j)\backslash r}}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(y-Y_j\right)^2}\\
\nonumber \textstyle \times(y-Y_j)^m\prod_{r'\in\mathcal{M}(j)\backslash r}p_X(b_{r'}).
\end{multline}
Furthermore, the mean and variance of $L_{r\rightarrow \ell}^{(t)}$ can be calculated as follows \cite{guo2008multiuser}:
\begin{align}
\label{Mean1L}
\textstyle |\text{E}\left\{L_{r\rightarrow \ell}^{(t)}\right\}|=&2\int_{-\infty}^{+\infty}\frac{f_1^2(y)}{f_0(y)}dy\sum_{\ell'\in\mathcal{N}(r)\backslash \ell}g^2_{\ell',r}\\
\textstyle \text{var}\left\{L_{r\rightarrow \ell}^{(t)}\right\}=&2|\text{E}\left\{L_{r\rightarrow \ell}^{(t)}\right\}|.
\label{Var1L}
\end{align}
Moreover, when the number of variable and check nodes go to infinity, the integral part of (\ref{Mean1L}) tends to $\frac{1}{1+\sigma^2_Y}$ \cite{guo2008multiuser}, where $\sigma^2_Y$ is the variance of $Y_j$. It has been shown that in the asymptotic case, $L_{r\rightarrow \ell}^{(t)}$ is normally distributed with the mean and variance calculated in (\ref{Mean1L}) and (\ref{Var1L}), respectively. More specifically, for a Type-X$_i$ variable node, (\ref{Mean1L}) can be rewritten as follows:
\begin{align}
\textstyle m^{(t)}_i\triangleq|\text{E}\left\{L^{(t)}_{\text{X}_i\rightarrow \ell}\right\}|=\frac{2}{1+\sigma^2_Y}h_i^2\sum_{\ell'\in\mathcal{N}(X_i)\backslash \ell}g^2_{\ell',X_i},
\end{align}
and accordingly $\text{var}\left\{L^{(t)}_{\text{X}_i\rightarrow \ell}\right\}=2m^{(t)}_i$. Since the numbers of variable and check nodes go to infinity, by the law of large numbers and for a large code degree $d_i$, $\sum_{\ell'\in\mathcal{N}(X_i)\backslash \ell}g^2_{\ell',X_i}$ can be approximated by $\sigma^2_wd_im/k$, where $m$ is the number of coded symbols. As shown in \cite{guo2008multiuser}, $\sigma^2_Y$ can be calculated as follows:
\begin{align}
\textstyle \nonumber\sigma^2_Y=\sum_{i\in\mathcal{S}}d_ih_i^2\sigma^2_wS(m_i^{(t)}),
\end{align}
where $S(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\left(1-\tanh(x-y\sqrt{x})\right)e^{-\frac{y^2}{2}}dy$.
This completes the proof.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.33]{BERfinal4users.pdf}
\caption{BER versus the inverse sum-rate for a 4-user MAC at SNR$=30$ dB. }
\label{sim2}
\end{figure}
\bibliographystyle{IEEEtran}
\footnotesize
\section{Introduction}
In any superconductor there are impurities, either present naturally or produced systematically using proton or electron irradiation. Inhomogeneities on both the microscopic and the mesoscopic scale greatly affect the thermodynamic and especially the dynamic properties of type II superconductors in a magnetic field. The magnetic field penetrates the sample in the form of Abrikosov vortices, which can be pinned by disorder. In addition, thermal fluctuations greatly influence the vortex matter in high $T_{c}$ superconductors; for example, in some cases thermal fluctuations effectively reduce the effects of disorder. As a result, the $H-T$ phase diagram of the high $T_{c}$ superconductors is very complex due to the competition between thermal fluctuations and disorder, and it is still far from being reliably determined, even in the best studied superconductor, optimally doped $YBCO$. \cite{Gammel} The difficulties are both experimental and theoretical. Experimentally, various phases with various (frequently overlapping) names have been described: liquid \cite{Nelson} (sometimes differentiated into liquid I and liquid II \cite{Bouquet}), vortex solid, Bragg glass \cite{GLD} (=pinned solid), vortex glass (=pinned liquid=entangled solid, \cite{Ertas} the vortex slush \cite{Hu}). To differentiate the various phases one should understand the nature of the phase transitions between them. Although over the years the picture has evolved, with various critical and tricritical points appearing and disappearing, several facts have become increasingly clear.
1. The first order \cite{Zeldov95,Schilling} melting line seems to merge with the "second magnetization peak" line, forming a universal order - disorder phase transition line. \cite{Zeldov96,Yeshurun} At low temperatures the location of this line strongly depends on disorder and generally exhibits a positive slope (termed also the "inverse" melting \cite{Zeldov01}), while in the "melting" section it is dominated by thermal fluctuations and has a large negative slope. The resulting maximum, at which the magnetization and the entropy jumps vanish, was interpreted either as a tricritical point \cite{Nishizaki,Bouquet} or a Kauzmann point. \cite{Lidisorder} This universal "order - disorder" transition line (ODT), which appeared first in the strongly layered superconductors ($BSCCO$ \cite{Zeldov96}), was extended to the moderately anisotropic superconductors ($LaSCCO$ \cite{Yeshurun}) and to the more isotropic ones like $YBCO$. \cite{Lidisorder,Grover} The symmetry characterization of the transition is clear: spontaneous breaking of the translation and rotation symmetry.
2. The universal "order - disorder" line is different from the "irreversibility line" or the "glass" transition (GT) line, which is a continuous transition. \cite{Deligiannis,Taylor} The almost vertical glass line clearly represents effects of disorder, although thermal fluctuations affect the location of the transition. Experiments in $BSCCO$ \cite{Zeldov98} indicate that the line crosses the ODT line right at its maximum and continues deep into the ordered (Bragg) phase. This proximity of the glass line to the Kauzmann point is reasonable since both signal the region of close competition between the disorder and the thermal fluctuation effects. In more isotropic materials the data are more confusing. In $LaSCCO$ \cite{Forgan} the GT line is closer to the "melting" section of the ODT line, while still crossing it. In YBCO we are not aware of a claim that the GT line continues into the ordered phase. Most experiments \cite{Nishizaki} indicate that the GT line terminates at the "tricritical point" in the vicinity of the maximum of the ODT line. It is more difficult to characterize the nature of the GT transition as a "symmetry breaking". The common wisdom is that "replica" symmetry is broken in the glass (either via "steps" or via a "hierarchical" continuous process), as in most spin glass theories. \cite{Dotsenko}
Theoretically, the problem of vortex matter subject to thermal fluctuations or disorder has a long history. An obvious candidate to model the disorder is the Ginzburg - Landau model in which the coefficients have random components. However, this model is too complicated and simplifications are required. The original idea of the vortex glass and the continuous glass transition, exhibiting glass scaling of the conductivity which diverges in the glass phase, appeared early in the framework of the frustrated $XY$ model (the gauge glass). \cite{Fisher,Natterman} In this approach one fixes the amplitude of the order parameter, retaining the magnetic field with a random component added to the vector potential. It was studied by RG and variational methods and has been extensively simulated numerically. \cite{Olsson,Hu} In analogy to the theory of spin glasses, the replica symmetry is broken when crossing the GT line. The model ran into several problems (see Giamarchi and Bhattacharya in Ref. \onlinecite{GiamarchiBhattacharya} for a review): for finite penetration depth $\lambda $ it has no transition, \cite{BokilYoung} and it had difficulty explaining the sharp Bragg peaks observed in experiments at low magnetic fields. To address the last problem another simplified model proved more convenient: the elastic medium approach to a collection of interacting line-like objects subject to both the pinning potential and the thermal bath Langevin force. \cite{Otterlo,Reichhardt} The resulting theory was treated again using the gaussian approximation \cite{Korshunov,GLD} and RG.\cite{Natterman} The result was that in $2<D<4$ there is a transition to a glassy phase in which the replica symmetry is broken following the \textquotedblleft hierarchical pattern\textquotedblright\ (in $D=2$ the breaking is \textquotedblleft one step\textquotedblright ). The problem of the very fast destruction of the vortex lattice by disorder was solved with the vortex matter being in the replica symmetry broken (RSB) phase, which was termed the \textquotedblleft Bragg glass\textquotedblright . \cite{GLD} It is possible to address the problem of mesoscopic fluctuations using an approach in which one directly simulates the interacting line-like objects subject to both the pinning potential and the thermal bath Langevin force. \cite{Otterlo,Reichhardt} In this context the generalized replicated density functional theory \cite{Menon02} was also applied, resulting in a one-step RSB solution. Although the above approximations to the disordered GL theory are very useful in more \textquotedblleft fluctuating\textquotedblright\ superconductors like $BSCCO$, a problem arises with their application to $YBCO$ at temperatures close to $T_{c}$ (where most of the experiments mentioned above are done): the vortices are far from being line-like and even their cores significantly overlap. As a consequence the behavior of the dense vortex matter is expected to be different from that of a system of pointlike vortices and of the $XY$ model, although the elastic medium approximation might still be meaningful. \cite{Brandt}
To describe the non-pointlike vortices, one has to return to the GL model and make a different simplification. One of the most developed schemes is the lowest Landau level (LLL) approximation, valid close to the $H_{c2}(T)$ line. \cite{Thouless} Such an attempt was made by Dorsey, Fisher and Huang \cite{Dorsey} in the liquid phase using the dynamic approach \cite{Sompolinsky} and by Tesanovic and Herbut for columnar defects in layered materials using supersymmetry. \cite{Herbut} It is the purpose of this paper to study the glass transition using the replica formalism. We quantitatively study the glass transition in the same model, with the disorder represented by random components of the coefficients of the GL free energy. The most general hierarchical homogeneous (liquid) Ansatz \cite{Parisi} and its stability are considered to obtain the glass transition line and to determine the nature of the transition for various values of the disorder strength of the GL coefficients. Then we place the glass line on the phase diagram of YBCO and compare with experiments and other theories.
The paper is organized as follows. The general disordered GL model is introduced in Section II and the gaussian variational replica method is presented in Section III. Next, we study the model with $\left\vert \psi \right\vert ^{2}$ disorder in some detail in Section IV and, in less detail, the $\left\vert \psi \right\vert ^{4}$ disorder in Section V, and obtain the phase transition lines in those two cases. In Section VI, the general model containing both the $\left\vert \psi \right\vert ^{2}$ disorder and the $\left\vert \psi \right\vert ^{4}$ disorder is treated briefly. In Section VII, we compare our results with the experimental data, and we conclude in Section VIII by summarizing our results.
\section{Disorder effects in the Ginzburg - Landau description of the type
II superconductor}
\subsection{Ginzburg - Landau free energy}
We start from the Gibbs energy of the ideal homogeneous sample (no disorder):%
\begin{equation}
G=\int dx^{3}\frac{{\hbar }^{2}}{2m_{{\tiny ||}}^{\ast }}\left\vert \partial
_{z}\psi \right\vert ^{2}+\frac{{\hbar }^{2}}{2m_{\perp }^{\ast }}\left\vert
\overrightarrow{D}\psi \right\vert ^{2}+a^{\prime }\psi ^{\ast }\psi +\frac{%
b^{\prime }}{2}(\psi ^{\ast }\psi )^{2}+\frac{(H-B)^{2}}{8\pi }.
\end{equation}%
Here $a^{\prime }=\alpha (T-T_{c})$ and $b^{\prime }$ are constant
parameters, $\overrightarrow{D}\equiv (-i{\hbar }\nabla +\frac{e^{\ast }}{c}%
\overrightarrow{A})$ is the covariant derivative, $\overrightarrow{A}$ is
the vector potential, the magnetic field $\overrightarrow{B}=\nabla \times
\overrightarrow{A}$, $H$ is the external magnetic field, $m_{\perp }^{\ast }$
and $m_{{\tiny ||}}^{\ast }$ are the effective masses in directions
perpendicular and parallel to the field respectively. Mesoscopic thermal
fluctuations are accounted for via Boltzmann weights%
\begin{equation}
Z=\int_{\psi ^{\ast },\psi }\exp \left\{ -\frac{G[\psi ^{\ast },\psi ]}{T}%
\right\} \label{freeen}
\end{equation}
The model provides a good description of thermal fluctuations as long as $1-t-b\ll 1$, where $t=\frac{T}{T_{c}}$, $b=B/H_{c2}$ ($h=H/H_{c2}$), $H_{c2}=\Phi _{0}/(2\pi \xi ^{2})$ and $\xi $ is the coherence length. In this case higher order terms like $\left\vert \psi \right\vert ^{6}$ can be omitted (detailed notation can be found in Ref. \onlinecite{LiLiq}). The 3D GL model describes materials with not too high anisotropy (for recent evidence of the validity of this assumption in $YBCO$ see Ref. \onlinecite{Schilling2}). In strongly anisotropic materials, a model of the Lawrence -- Doniach type is more appropriate. \cite{Blatter}
Within the GL approach the pointlike quenched disorder on the mesoscopic
scale is described by making \textbf{each} of the coefficients of the
mesoscopic GL energy a random value centered around a certain constant value
given in eq. (\ref{freeen}). For example effective masses can be disordered
\begin{eqnarray}
m_{\perp }^{\ast -1} &\rightarrow &m_{\perp }^{\ast -1}\left( 1+U(x)\right) ;
\\
\overline{\text{\ }U(x)U(y)} &=&P\delta (x-y). \notag
\end{eqnarray}%
The parallel effective mass $m_{{\tiny ||}}^{\ast }$ might also have the
random component which we neglect (it is relatively small since $m_{{\tiny ||%
}}^{\ast }$ is typically very large), though it can be incorporated with no
additional difficulties. This type of disorder is sometimes called the $%
\delta l$ disorder since it originates in part from the inhomogeneity of the
electron mean free path $l$ in Gor'kov's derivation. From the BCS theory,
effective mass is $m^{\ast }=2m_{e}\left( 1+\frac{\pi ^{3}{\hbar }v_{F}}{%
168\zeta \left( 3\right) T_{c}l}\right) $ in the clean limit and $m^{\ast
}=2m_{e}\frac{7\zeta \left( 3\right) {\hbar v}_{F}}{2\pi ^{3}T_{c}l}$ in the
dirty limit. Relation to the notations of Ref. \onlinecite{Blatter} (chapter
II in this reference) is the following: $U$ is $-\delta m_{ab}/m_{ab}$ and $%
P=\gamma _{m}/m_{ab}^{2}$. Note however that, in addition to the random
distribution of $l,$ disorder in $v_{F}$ and $T_{c}$ (the density of states
and interaction strength) can also affect $m^{\ast }.$
The other two parameters in the GL equations are $\alpha =\frac{12\pi ^{2}T_{c}}{7\zeta \left( 3\right) \epsilon _{F}}$ and $b^{\prime }=\frac{18\pi ^{2}}{7\zeta \left( 3\right) N\epsilon _{F}}\left( \frac{T_{c}}{\epsilon _{F}}\right) ^{2}$. The randomness of the coefficient of the quadratic term is called $\delta T_{c}$ disorder since it describes a local deviation of the critical temperature. We introduce a random component in the $\left\vert \psi \right\vert ^{2}$ term:
\begin{equation}
a^{\prime }\rightarrow a^{\prime }(1+W(x));\text{ }\overline{\text{\ }%
W(x)W(y)}=R\delta (x-y).
\end{equation}%
In notations of Ref. \onlinecite{Blatter} the random field $W(x)=-\delta
_{a}/a,$ $R=\gamma _{a}/a^{2}.$ When thermal fluctuations of the vortex
degrees of freedom can be neglected, these two random fields would be
sufficient (they control the two relevant scales $\xi $ and $\lambda $). The
reason is that one can set the coefficient of the third term $|\psi |^{4}$
to a constant by rescaling. However in the presence of thermal fluctuations
the coefficient of $|\psi |^{4}$ also should be considered as having a
random component. It cannot be \textquotedblleft rescaled
out\textquotedblright\ since it affects the Boltzmann weights. We will see
later that at least within the lowest Landau level approximation this term
is crucial in inducing certain glassy properties of the vortex matter state.
We therefore introduce its disorder via%
\begin{equation}
b^{\prime }\rightarrow b^{\prime }(1+V(x));\text{ }\overline{\text{\ }%
V(x)V(y)}=Q\delta (x-y).
\end{equation}
In unconventional superconductors, even without disorder, the phenomenological GL model has not been reliably derived microscopically. The coefficients and their inhomogeneities should therefore be considered as phenomenological parameters to be fitted to experiments. We assume that the disorder strengths $P,$ $R$ and $Q$ depend only weakly on field and temperature; this assumption, like that for any parameter in the GL approach, should in principle be derived from a microscopic theory assuming a random chemical potential, or be justified by fitting to experiments. For simplicity a white noise distribution is considered
\begin{equation*}
p[U,W,V]=\exp \left[ -\int_{x}\frac{U(x)^{2}}{2P}+\frac{W(x)^{2}}{2R}+\frac{%
V(x)^{2}}{2Q}\right]
\end{equation*}%
for the random components. The free energy of the superconductor after averaging over the disorder is
\begin{eqnarray}
\overline{F} &=&-\frac{T}{norm}\int_{U,W,V}p[U,W,V]\log \left[ \text{ }%
\int_{\psi }\text{exp}[-g[\psi ]-f_{dis}[U,W,V,\psi ]]\right] ; \label{21}
\\
g &=&G/T;\text{ \ }f_{dis}[U,W,V,\psi ]=\frac{1}{T}\int_{x}\frac{{\hbar }^{2}%
}{2m_{\perp }^{\ast }}U(x)\left\vert \overrightarrow{D}\psi \right\vert
^{2}+a^{\prime }W(x)|\psi |^{2}+\frac{1}{2}V(x)|\psi |^{4},
\end{eqnarray}%
where $norm\equiv \int_{U,W,V}p[U,W,V]$ is a normalization factor.
To make the physical picture clear, we rescale the coordinates as $x\rightarrow \xi x,$ $y\rightarrow \xi y,$ $z\rightarrow \frac{\xi z}{\gamma }$, with the anisotropy parameter defined by $\gamma =\left( m_{{\tiny ||}}^{\ast }/m_{\perp }^{\ast }\right) ^{1/2}$. The order parameter is scaled as $\psi ^{2}\rightarrow \frac{2\alpha T_{c}}{b^{\prime }}\psi ^{2}$. The dimensionless free energy then takes a simpler form:
\begin{equation}
g[\psi ]=G/T=\frac{1}{\omega }\int_{x}\frac{{1}}{2}\left\vert \partial
_{z}\psi \right\vert ^{2}+\frac{{1}}{2}\left\vert \overrightarrow{D}\psi
\right\vert ^{2}+\frac{t-1}{2}|\psi |^{2}+\frac{1}{2}|\psi |^{4}+\frac{%
\kappa ^{2}\left( b-h\right) ^{2}}{4},
\end{equation}%
where $\omega =\sqrt{2Gi}\pi ^{2}t$. The Ginzburg number is $Gi\equiv 32\left[ \pi \lambda ^{2}T_{c}\gamma /(\Phi _{0}^{2}\xi )\right] ^{2}$, where $\lambda $ is the magnetic penetration depth. The last term can be ignored in calculating $\overline{F}$, since $\kappa $ is very large in high $T_{c}$ superconductors and this term is of order $\frac{1}{\kappa ^{2}}$. Similarly, the random component and the distribution become:
\begin{eqnarray}
\text{\ }f_{dis}[U,W,V,\psi ] &=&\frac{1}{\omega }\int_{x}\left\{ -\frac{{1}%
}{2}U(x)\psi ^{\ast }D^{2}\psi +\frac{t-1}{2}W(x)\left\vert \psi \right\vert
^{2}+\frac{1}{2}V(x)\left\vert \psi \right\vert ^{4}\right\} \\
p[U,W,V] &=&\exp \left[ -\frac{\xi ^{3}}{\gamma }\int_{x}\left( \frac{%
U(x)^{2}}{2P}+\frac{W(x)^{2}}{2R}+\frac{V(x)^{2}}{2Q}\right) \right] .
\end{eqnarray}%
The model however is highly nontrivial even without disorder, and to make
progress, further approximation is needed.
\subsection{Lowest Landau level approximation}
The lowest Landau level (LLL) approximation \cite{Thouless} is based on the constraint $-D^{2}\psi =b\psi $. Over the years this model has been studied by various methods, analytic and numerical. \cite{Ruggeri,Menon94,LiLiq} The
(effective) LLL model is applicable in a surprisingly wide range of fields
and temperatures determined by the condition that the relevant excitation
energy $\varepsilon $ is much smaller than the gap between Landau levels $%
2\hbar eB/(cm_{\perp })$.\cite{Lidisorder}
The free energy after further rescaling $x\rightarrow x/\sqrt{b}%
,y\rightarrow y/\sqrt{b},z\rightarrow z\left( \frac{2^{5/2}\pi }{b\omega }%
\right) ^{1/3},\psi ^{2}\rightarrow \left( \frac{2^{5/2}\pi }{b\omega }%
\right) ^{2/3}\psi ^{2}$, simplifies within the LLL approximation to:
\begin{equation}
f_{LLL}=\frac{1}{2^{5/2}\pi }\int d^{3}x\left[ \frac{1}{2}|\partial _{z}\psi
|^{2}+a_{T}|\psi |^{2}+\frac{1}{2}|\psi |^{4}\right] . \label{GLscal}
\end{equation}%
Not surprisingly the number of independent constants in LLL is one less than
in the general model. This fact leads to the \textquotedblleft LLL
scaling\textquotedblright\ relations (of course the disorder terms will
break LLL scaling). As a result the simplified model without disorder has
just one parameter -- the (dimensionless) scaled temperature:
\begin{subequations}
\begin{equation}
a_{T}=-\left( \frac{2\pi }{b\omega }\right) ^{2/3}\left( 1-t-b\right) .
\label{aT3D}
\end{equation}%
The disorder term becomes:
\end{subequations}
\begin{equation}
f_{LLL}^{dis}=\frac{1}{2^{5/2}\pi }\int d^{3}x\left\{ \Omega (x)|\psi |^{2}+%
\frac{1}{2}V(x)|\psi |^{4}\right\} ,
\end{equation}%
in which $W$ and $U$ enter only through the combination $\Omega (x)=\frac{1}{2}\left[ 2\left( t-1\right) \left( \frac{2\pi }{b\omega }\right) ^{2/3}W(x)-bU(x)\right] $. Its distribution is still gaussian%
\begin{equation}
\overline{p}(\Omega ,V)=\exp \left[ -\int_{x}\frac{\Omega (x)^{2}}{2r^{\prime }}+\frac{V(x)^{2}}{2q^{\prime }}\right]
\end{equation}%
with the two variances%
\begin{eqnarray}
r^{\prime } &=&\frac{\gamma }{4\sqrt{2}\xi ^{3}}\left\{ \frac{8\pi }{\omega }%
\left( 1-t\right) ^{2}R+\left( \frac{b^{2}\omega }{2\pi }\right)
^{1/3}b^{2}P\right\} \\
\text{\ \ \ }q^{\prime } &=&\frac{\gamma }{\sqrt{2}\xi ^{3}}\left( \frac{%
b^{2}\omega }{2\pi }\right) ^{1/3}Q. \notag
\end{eqnarray}%
To treat both the thermal fluctuations and the disorder we will use the replica method to integrate over the impurity distribution, followed by the gaussian approximation.
\bigskip
\section{Replica trick and gaussian approximation}
\subsection{Replica trick}
We will use the replica trick to evaluate the disorder averages. The replica
method is widely used to study disordered electrons in the theory of spin
glasses, \cite{Dotsenko} disordered metals and was applied to vortex matter
in the London limit. \cite{Giamarchi,Korshunov} Applying a simple
mathematical identity to the disorder average of the free energy one
obtains:
\begin{equation}
\overline{F}=\overline{-T\underset{n\rightarrow 0}{\lim }\frac{1}{n}(Z^{n}-1)%
}. \label{23}
\end{equation}%
The average of $Z^{n}$ is the statistical sum over $n$ identical "replica"
fields $\psi _{a}$ , $a=1,...,n$:
\begin{equation}
\overline{Z^{n}}=\frac{1}{norm}\int_{\Omega ,V}p[\Omega
,V]\prod_{a}\int_{\psi _{a}}\text{exp}\left\{ -f[\psi _{a}]-f_{dis}[\Omega
,V,\psi _{a}]\right\} .
\end{equation}%
The integral over the disorder potential is gaussian and results in:
\begin{eqnarray}
\overline{Z^{n}} &=&\int_{\psi _{a}}\exp \left[ -\sum_{a}f(\psi _{a})+\frac{1%
}{2\left( 2^{5/2}\pi \right) ^{2}}\sum_{a,b}f_{ab}\right] \\
f_{ab} &=&r^{\prime }\left\vert \psi _{a}\right\vert ^{2}\left\vert \psi
_{b}\right\vert ^{2}+\frac{q^{\prime }}{4}(\psi _{a}^{\ast }\psi
_{a})^{2}(\psi _{b}^{\ast }\psi _{b})^{2}. \notag
\end{eqnarray}%
This model is a type of scalar field theory, and the simplest nonperturbative scheme commonly used to treat such a model is the gaussian approximation. Its validity and precision can be checked only by calculating corrections.
\subsection{\protect\bigskip Gaussian approximation}
We have assumed that the order parameter is constrained to the LLL and
therefore can be expanded in a basis of the standard LLL eigenfunctions in
Landau gauge:
\begin{equation}
\psi _{a}(x)=norm\int_{k_{z},k}e^{i\left( zk_{z}+xk\right) }\exp \left\{ -%
\frac{1}{2}(y+k)^{2}\right\} \widetilde{\psi }_{a}(k).
\end{equation}%
We now apply the gaussian approximation, which has been used for disorder in the elastic medium approach, \cite{Korshunov,Giamarchi} following its use in polymer physics. \cite{Mezard} The gaussian approximation was applied to the vortex liquid within the GL approach in Refs. \onlinecite{Thouless,Ruggeri}. The gaussian effective free energy is expressed via a variational parameter \cite{Mezard,LiLiq} $\mu _{ab}$, which in the present case is a matrix in replica space. The correlator is parametrized as%
\begin{equation}
\left\langle \psi _{a}^{\ast }(k,k_{z})\psi _{b}(-k,-k_{z})\right\rangle
=G_{ab}(k_{z})=\frac{2^{5/2}\pi }{\frac{k_{z}^{2}}{2}\delta _{ab}+\mu
_{ab}^{2}}
\end{equation}%
The bubble integral appearing in the free energy is very simple:%
\begin{equation*}
\left\langle \psi _{a}^{\ast }(x,y,z)\psi _{b}(x,y,z)\right\rangle =\frac{%
\sqrt{2}}{\pi }\int_{k_{z}}\frac{1}{\frac{k_{z}^{2}}{2}\delta _{ab}+\mu
_{ab}^{2}}=2\mu _{ab}^{-1}\equiv 2m_{ab}.
\end{equation*}%
As a result the gaussian effective free energy can be written in a form:%
\begin{eqnarray}
n\text{ }f_{eff} &=&\sum_{a}\left\{ \frac{2^{5/2}\pi }{\left( 2\pi \right)
^{3}}\int_{k_{z}}\left[ LogG^{-1}(k_{z})+\left( \frac{k_{z}^{2}}{2}%
+a_{T}\right) G(k_{z})-I\right] _{aa}+4\left( m_{aa}\right) ^{2}\right\}
\notag \\
&&-\sum_{a,b}\left\{ \frac{1}{2^{3/2}\pi }r^{\prime }\left\vert
m_{ab}\right\vert ^{2}+\frac{\sqrt{2}}{\pi }q^{\prime }\left( \left\vert
m_{ab}\right\vert ^{4}+4m_{aa}m_{bb}\left\vert m_{ab}\right\vert ^{2}\right)
\right\} \\
&=&2\sum_{a}\left\{ \mu _{aa}+a_{T}m_{aa}+2\left( m_{aa}\right) ^{2}\right\}
\notag \\
&&-2\sum_{a,b}\left\{ r\left\vert m_{ab}\right\vert ^{2}+q\left( \frac{1}{4}%
\left\vert m_{ab}\right\vert ^{4}+m_{aa}m_{bb}\left\vert m_{ab}\right\vert
^{2}\right) \right\} , \notag
\end{eqnarray}%
where we discarded an (ultraviolet divergent) constant and renormalization
of $a_{T}$ and rescaled the disorder strength: $r=\frac{1}{2^{5/2}\pi }%
r^{\prime },$ \ $q=\frac{2^{5/2}}{\pi }q^{\prime }.$
We start with a simple case in which only the $\left\vert \psi \right\vert ^{2}$ type of disorder is present. More precisely, we take $q=0$ here and return to the general case in Section VI. This model has already been discussed, using a different method (the Sompolinsky dynamic approach), in the unpinned phase in Ref. \onlinecite{Dorsey}.
\section{Nonzero Edwards - Anderson order parameter and absence of the
replica symmetry breaking when only the $\left\vert \protect\psi \right\vert
^{2}$ disorder is present}
\subsection{Hierarchical matrices and impossibility of the continuous
replica symmetry breaking}
In this section we neglect the $\left\vert \psi \right\vert ^{4}$ disorder term. It is convenient to introduce a real (not necessarily symmetric) matrix $Q_{ab},$ which is in one-to-one linear correspondence with the Hermitian (generally complex) matrix $m_{ab}$ via%
\begin{equation}
Q_{ab}=\text{re}[m_{ab}]+\text{im}[m_{ab}]. \label{Qdef}
\end{equation}%
Unlike $m_{ab}$, all the matrix elements of $Q_{ab}$ are independent. In
terms of this matrix the free energy can be written as
\begin{equation}
\frac{n}{2}f_{eff}=\sum_{a}\left\{ \left( m^{-1}\right)
_{aa}+a_{T}Q_{aa}+2\left( Q_{aa}\right) ^{2}\right\} -r\sum_{a,b}Q_{ab}^{2}.
\label{fR}
\end{equation}%
Taking derivative with respect to $Q_{ab}$ gives the saddle point equation
for this matrix element:
\begin{equation}
\frac{n}{2}\frac{\delta f}{\delta Q_{ab}}=-\frac{1}{2}\left[ \left(
1-i\right) \left( m^{-2}\right) _{ab}+c.c.\right] +a_{T}\delta
_{ab}+4Q_{aa}\delta _{ab}-2rQ_{ab}=0. \label{mateqR}
\end{equation}%
Since the electric charge (or the superconducting phase) $U(1)$ symmetry is
assumed, we consider only solutions with real $m_{ab}.$ In this case $%
m_{ab}=Q_{ab}$ is a real symmetric matrix. General hierarchical matrices $m$ are parametrized using the diagonal elements $\widetilde{m}$ and Parisi's (monotonically increasing) function $m_{x}$ specifying the off-diagonal elements, with $0<x<1$. \cite{Mezard} Physically, different $x$ represent time scales in the glass phase. In particular, the Edwards - Anderson (EA) order parameter is $m_{x=1}=M>0$.
A nonzero value of this order parameter signals that the annealed and the quenched averages are different. The dynamic properties of such a phase are generally quite different from those of the nonglassy $M=0$ phase. In particular, it is expected to exhibit infinite conductivity. \cite{Fisher,Dorsey} We will refer to this phase as the "ergodic pinned liquid" (EPL), distinguished from the "nonergodic pinned liquid" (NPL) in which, in addition, ergodicity is broken.
However, in the present model RSB does not occur. In terms of the Parisi parameters $\widetilde{m}$ and $m_{x}$ the matrix equation eq.(\ref{mateqR}) takes the form:
\begin{equation}
-\widetilde{m^{-2}}+a_{T}+\left( 4-2r\right) \widetilde{m}=0 \label{spR}
\end{equation}%
\begin{equation*}
\left( m^{-2}\right) _{x}+2rm_{x}=0.
\end{equation*}%
Dynamically, if $m_{x}$ is a constant, pinning does not result in a multitude of time scales. Certain time-scale-sensitive phenomena, like various memory effects \cite{Andrei} and the responses to \textquotedblleft shaking\textquotedblright\ \cite{Zeldov01}, are expected to be different from the case when $m_{x}$ takes multiple values. If $m_{x}$ takes a finite number $n$ of different values, we call this $(n-1)$-step RSB. On the other hand, if $m_{x}$ is continuous, continuous replica symmetry breaking (RSB) occurs.
In order to show that $m_{x}$ is a constant, it is convenient to rewrite the
second equation via the matrix $\mu ,$ the matrix inverse to $m$:
\begin{equation}
\left( \mu ^{2}\right) _{x}+2r(\mu ^{-1})_{x}=0.
\end{equation}%
Differentiating this equation with respect to $x$ one obtains:
\begin{equation}
2\left[ \left\{ \mu \right\} _{x}-r\left( \left\{ \mu \right\} _{x}\right)
^{-2}\right] x\frac{d\mu _{x}}{dx}=0, \label{proof1}
\end{equation}%
where we used a set of standard notations in the spin glass theory: \cite%
{Mezard}
\begin{equation}
\left\{ \mu \right\} _{x}\equiv \widetilde{\mu }-\left\langle \mu
_{x}\right\rangle -[\mu ]_{x};\text{ \ \ }\left\langle \mu _{x}\right\rangle
\equiv \int_{0}^{1}dx\mu _{x};\text{ \ \ }[\mu ]_{x}=\int_{0}^{x}dy\left(
\mu _{x}-\mu _{y}\right) .
\end{equation}%
If one is interested in a continuous monotonic part with $\frac{d\mu _{x}}{dx}\neq 0,$ the only solution of eq.(\ref{proof1}) is%
\begin{equation}
\left\{ \mu \right\} _{x}=r^{1/3}.
\end{equation}%
Differentiating this once more, and again dropping the nonzero derivative $\frac{d\mu _{x}}{dx}$, one arrives at a contradiction: $\frac{d\mu _{x}}{dx}=0$. This proves that there are no such monotonically increasing continuous segments. One can therefore generally have either the replica symmetric solutions, namely $m_{x}=M$, or look for steplike RSB solutions. \cite{Dotsenko} We can show that the constant $m_{x}$ solution is stable. Therefore, if a steplike RSB solution exists, it could only be an additional local minimum. We explicitly looked for a one-step solution and found that there is none.
\subsection{Two replica symmetric solutions\ and the third order transition
between them}
\subsubsection{The unpinned liquid and the \textquotedblleft ergodic
glass\textquotedblright\ replica symmetric solutions}
\bigskip
Restricting to RS solutions, $m_{x}=M,$ the saddle point equations eq.(\ref%
{spR}) simplify:%
\begin{eqnarray}
-\varepsilon ^{-2}+\left( a_{T}+4\widetilde{m}\right) -2r\varepsilon &=&0;
\label{Req} \\
M\left( \varepsilon ^{-3}-r\right) &=&0, \notag
\end{eqnarray}%
where $\varepsilon \equiv \widetilde{m}-M$. The energy of such a solution is
given by
\begin{equation}
\frac{f_{eff}}{2}=2\varepsilon ^{-1}-2-\varepsilon ^{-2}M+2a_{T}\widetilde{m}%
+4\widetilde{m}^{2}-2r\left( \varepsilon ^{2}+2\varepsilon M\right) .
\end{equation}%
The second equation of eq.(\ref{Req}) has a replica-index-independent (diagonal) solution $M=0$. In addition there is a non-diagonal one. It turns out that there is a third order transition between them.
For the diagonal solution $\varepsilon =\widetilde{m}$ and the first
equation is just a cubic equation:
\begin{equation}
-\widetilde{m}^{-2}+\left( a_{T}+4\widetilde{m}\right) -2r\widetilde{m}=0.
\label{diageq}
\end{equation}%
For the non-diagonal solution the second equation gives $\varepsilon =r^{-1/3}$, which, when plugged into the first equation, gives:%
\begin{equation}
\widetilde{m}=\frac{1}{4}\left( 3r^{2/3}-a_{T}\right) ;\text{ \ }M=\frac{1}{4%
}\left( 3r^{2/3}-a_{T}\right) -r^{-1/3}.
\end{equation}%
The matrix $m$ therefore is
\begin{equation}
m_{ab}=r^{-1/3}\delta _{ab}+M,
\end{equation}%
which results in the following value of the free energy: $f=6r^{1/3}-\frac{1%
}{4}\left( 3r^{2/3}-a_{T}\right) ^{2}$.
The two solutions coincide for%
\begin{equation}
a_{T}=r^{-1/3}\left( 3r-4\right) . \label{dynlineq}
\end{equation}%
Since, in addition to the energy, the first and second derivatives of the energy, $\frac{df}{da_{T}}=2r^{-1/3}$ and $\frac{d^{2}f}{da_{T}^{2}}=-\frac{1}{2}$ respectively, coincide (while the third derivatives differ), the transition is a third order one.
\subsubsection{Stability domains of the two solutions}
In order to prove that a solution is stable beyond the set of replica symmetric matrices $m$, one has to calculate the second derivative of the free energy (called the Hessian in Refs. \onlinecite{Alemeida,Dotsenko}) with respect to an arbitrary real matrix $Q_{ab}$ defined in eq.(\ref{Qdef}):
\begin{eqnarray}
H_{(ab)(cd)} &\equiv &\frac{n}{2}\frac{\delta ^{2}f_{eff}}{\delta
Q_{ab}\delta Q_{cd}} \\
&=&\frac{1}{2}\left[ \left( m^{-2}\right) _{ac}\left( m^{-1}\right)
_{db}-i\left( m^{-2}\right) _{ad}\left( m^{-1}\right) _{cb}\right] + \notag
\\
&&\frac{1}{2}\left[ \left( m^{-1}\right) _{ac}\left( m^{-2}\right)
_{db}-i\left( m^{-1}\right) _{ad}\left( m^{-2}\right) _{cb}\right] +cc
\notag \\
&&+4\delta _{ac}\delta _{bd}\delta _{ab}-2r\delta _{ac}\delta _{bd}. \notag
\end{eqnarray}%
We will use a simplified notation for the product of Kronecker delta functions with more than two indices: $\delta _{ac}\delta _{bd}\delta _{ab}\equiv \delta _{abcd}.$ For the diagonal solution the Hessian is a very simple operator on the space of real symmetric matrices:
\begin{equation}
H_{(ab)(cd)}=c_{I}I_{abcd}+c_{J}J_{abcd},
\end{equation}%
where the operators $I$ (the identity in this space) and $J$ are defined as
\begin{equation}
I\equiv \delta _{ac}\delta _{bd};\text{ \ \ \ \ }J=\delta _{abcd}
\end{equation}%
and their coefficients in the diagonal phase are:
\begin{equation}
c_{I}=2\left( \widetilde{m}^{-3}-r\right) ,\text{ \ \ }c_{J}=4
\end{equation}%
with $\widetilde{m}$ being a solution of eq.(\ref{diageq}). The
corresponding eigenvectors in the space of symmetric matrices are $v_{(cd)}\equiv A\delta _{cd}+B.$ To find the eigenvalues $\lambda $ of $H$ we apply the Hessian to $v.$ The result is (dropping terms vanishing in the limit $n\rightarrow 0$):%
\begin{equation}
H_{(ab)(cd)}v_{cd}=A\left( c_{I}+c_{J}\right) \delta _{ab}+B\left(
c_{I}+c_{J}\delta _{ab}\right) =\lambda \left( A\delta _{ab}+B\right)
\end{equation}%
There are two eigenvalues: $\lambda ^{(1)}=c_{I}$ and $\lambda ^{(2)}=c_{I}+c_{J}$. Since $c_{J}=4>0,$ a sufficient condition for stability is:
\begin{equation}
c_{I}=2\left( \widetilde{m}^{-3}-r\right) >0.
\end{equation}%
It is satisfied everywhere below the transition line of eq.(\ref{dynlineq}), see Fig.~1 (the $a_{T}$--$r$ phase diagram). The stability analysis of the non-diagonal solution is slightly more complicated. The Hessian for the non-diagonal solution is:
\begin{equation}
H_{(ab)(cd)}=c_{V}V+c_{U}U+c_{J}J,
\end{equation}%
where new operators are%
\begin{equation}
V_{(ab)(cd)}=\delta _{ac}+\delta _{bd};U_{(ab)(cd)}=1
\end{equation}%
and coefficients are
\begin{equation}
c_{V}=-3Mr^{2/3};\text{ \ \ }c_{U}=4M^{2}r^{1/3};c_{J}=4
\end{equation}%
In the present case, one obtains three different eigenvalues, \cite{Alemeida,Dotsenko} $\lambda ^{(1,2)}=2\left( 1\pm \sqrt{1-4Mr^{2/3}}\right) $ and $\lambda ^{(3)}=0.$ Note that the eigenvalues of the Hessian on the antisymmetric matrices are degenerate with the eigenvalue $\lambda ^{(1)}$ in this case (we will come back to this eigenvalue later). For $M<0$ the solution is unstable due to the negative $\lambda ^{(2)}.$ For $M>0,$ both eigenvalues are positive and the solution is stable. The line $M=0$ coincides with the third order transition line; hence the non-diagonal solution is stable when the diagonal one is unstable and \textit{vice versa}. We conclude that there is no glass state in the vortex liquid without the $\left\vert \psi \right\vert ^{4}$ disorder term. The transition does not correspond to RSB. Despite this, in the phase with a nonzero EA (NEA) order parameter $M$ there are Goldstone bosons corresponding to $\lambda ^{(3)}$ in the replica limit $n\rightarrow 0$. The criticality and the zero modes due to disorder (pinning) in this phase might lead to a great variety of interesting phenomena in statics and dynamics. These have not been explored yet. However, as we show in the next section, the random component of the quartic term changes the character of the transition line: the replica symmetry is broken on one side of the line. For simplicity, in the next section we first consider a case with a random component of $\left\vert \psi \right\vert ^{4}$ and no random component of $\left\vert \psi \right\vert ^{2}$, and return to the general case in Section VI.
\section{\protect\bigskip The glass transition for the $\left\vert \protect%
\psi \right\vert ^{4}$ disorder}
\subsection{\protect\bigskip Continuous replica symmetry breaking solutions}
In this section we neglect the $\left\vert \psi \right\vert ^{2}$ disorder term. Although it is always present, as we have seen in the previous section it does not cause replica symmetry breaking, at least within the gaussian approximation. Therefore one expects that, although it certainly influences properties of the vortex matter, for example by shifting the melting transition line to lower fields and temperatures, \cite{Lidisorder} its role in the qualitative understanding of RSB effects is minor. The only other disordered term within the LLL approximation considered in this paper is the $\left\vert \psi \right\vert ^{4}$ disorder term. As was discussed in Section II, at least within the BCS theory, it is expected to be smaller than the $\left\vert \psi \right\vert ^{2}$ disorder, $q\ll r$. Even though it may be very small, however, as we show here it leads to qualitatively new phenomena in the vortex matter. The $r=0$ free energy after integration over $k_{z}$ becomes:
\begin{equation}
\frac{n}{2}f_{eff}=\sum_{a}\left\{ \left( m^{-1}\right)
_{aa}+a_{T}m_{aa}+2\left( m_{aa}\right) ^{2}\right\} -q\sum_{a,b}\left(
\frac{1}{4}\left\vert m_{ab}\right\vert ^{4}+m_{aa}m_{bb}\left\vert
m_{ab}\right\vert ^{2}\right) \label{fQ}
\end{equation}%
In terms of the real matrix $Q_{ab}$ defined in eq.(\ref{Qdef}) the free
energy can be written as
\begin{eqnarray}
\frac{n}{2}f_{eff} &=&\sum_{a}\left\{ \left( m^{-1}\right)
_{aa}+a_{T}Q_{aa}+2\left( Q_{aa}\right) ^{2}\right\} \\
&&-q\sum_{a,b}\left( \frac{1}{8}Q_{ab}^{4}+\frac{1}{8}%
Q_{ab}^{2}Q_{ba}^{2}+Q_{aa}Q_{bb}Q_{ab}^{2}\right)
\end{eqnarray}%
Taking a derivative with respect to $Q_{ab}$ gives the saddle point equation
for this matrix:
\begin{equation}
\frac{n}{2}\frac{\delta f}{\delta Q_{ab}}=\left[
\begin{array}{c}
-\frac{1}{2}\left[ \left( 1-i\right) \left( m^{-2}\right) _{ab}+cc\right]
+a_{T}\delta _{ab}+4Q_{aa}\delta _{ab} \\
-q\left( \frac{1}{2}Q_{ab}^{3}+\frac{1}{2}%
Q_{ab}Q_{ba}^{2}+2Q_{ab}Q_{aa}Q_{bb}+\delta _{ab}\sum_{e}Q_{ee}\left(
Q_{ae}^{2}+Q_{ea}^{2}\right) \right)%
\end{array}%
\right] =0
\end{equation}%
Using the hierarchical symmetric matrix parametrization of its symmetric
part (the antisymmetric will not be important for most of our purposes), it
takes a form
\begin{equation}
-\widetilde{m^{-2}}+a_{T}+4\widetilde{m}-q\left( 3\widetilde{m}^{3}+2%
\widetilde{m^{2}}\widetilde{m}\right) =0  \label{tildQ}
\end{equation}%
\begin{equation}
\left( m^{-2}\right) _{x}+q\left( m_{x}^{3}+2\widetilde{m}^{2}m_{x}\right)
=0. \label{xQ}
\end{equation}
As in the previous section, it is convenient to rewrite the second equation
in terms of $\mu ,$ the inverse matrix of $m$:
\begin{equation}
\left( \mu ^{2}\right) _{x}+q\left[ \left( (\mu ^{-1})_{x}\right) ^{3}+2%
\widetilde{m}^{2}(\mu ^{-1})_{x}\right] =0.
\end{equation}%
Differentiation of this equation with respect to $x$ leads to:
\begin{equation}
\left\{ 2\left\{ \mu \right\} _{x}-q\left[ 3\left( (\mu ^{-1})_{x}\right)
^{2}+2\widetilde{m}^{2}\right] \left( \left\{ \mu \right\} _{x}\right)
^{-2}\right\} x\frac{d\mu _{x}}{dx}=0. \label{eqq}
\end{equation}%
For a continuous segment $\frac{d\mu _{x}}{dx}\neq 0$ one solves eq.(\ref%
{eqq}) for $(\mu ^{-1})_{x}$ in terms of $\left\{ \mu \right\} _{x}$ getting
now a more complicated result:
\begin{equation}
(\mu ^{-1})_{x}=\sqrt{\frac{2}{3}\left[ q^{-1}\left( \left\{ \mu \right\}
_{x}\right) ^{3}-\widetilde{m}^{2}\right] }
\end{equation}%
Differentiating this equation with respect to $x$ again one obtains:
\begin{equation}
\frac{1}{\left( \left\{ \mu \right\} _{x}\right) ^{2}}=\sqrt{\frac{2}{3q%
\left[ \left( \left\{ \mu \right\} _{x}\right) ^{3}-q\widetilde{m}^{2}\right]
}}\left( \left\{ \mu \right\} _{x}\right) ^{2}x.
\end{equation}%
Instead of solving this for $\left\{ \mu \right\} _{x}$ we present $x$ as
function of $\left\{ \mu \right\} _{x}$:
\begin{equation}
x=\sqrt{\frac{3q\left[ \left( \left\{ \mu \right\} _{x}\right) ^{3}-q%
\widetilde{m}^{2}\right] }{2\left( \left\{ \mu \right\} _{x}\right) ^{8}}}.
\label{xq}
\end{equation}
Thus the solution is given by eq.(\ref{xq}) in the segments where $\frac{d\mu _{x}}{dx}\neq 0$ and by a constant $\mu _{x}$ in the other segments. In principle this allows for a numerical solution. One could actually solve the equation near the transition line using the method of Ref. \onlinecite{Parisi}. The situation is completely different from that of the $\left\vert \psi \right\vert ^{2}$ disorder: in the present case a stable RSB solution exists. We first turn, however, to the replica symmetric solutions and determine their region of stability. In the unstable region of the replica symmetric solutions, the RSB solution of eq.(\ref{xq}) will be the relevant one.
\subsection{Two replica symmetric solutions\protect\bigskip\ }
\subsubsection{Solutions}
Here we briefly repeat the steps that led to the RS solutions for the $\left\vert \psi \right\vert ^{2}$ disorder, omitting details. The saddle point equations (the analogues of eq.(\ref{spR})) for the RS matrices $m_{x}=M$ are:
\begin{equation}
-\varepsilon ^{-2}+\left( a_{T}+4\widetilde{m}\right) -q\left[ 5\widetilde{m}%
^{3}-M^{3}-2M^{2}\widetilde{m}-2\widetilde{m}^{2}M\right] =0 \label{eq1Q}
\end{equation}
\begin{equation}
M\left[ 2\varepsilon ^{-3}-q\left( M^{2}+2\widetilde{m}^{2}\right) \right] =0
\label{eq2Q}
\end{equation}%
where $\varepsilon \equiv \widetilde{m}-M$. The energy of such a solution is
given by
\begin{equation}
\frac{f}{2}=\varepsilon ^{-1}-\varepsilon ^{-2}M+a_{T}\widetilde{m}+2%
\widetilde{m}^{2}-\frac{q}{4}\left( 5\widetilde{m}^{2}-M^{2}\right) \left(
\widetilde{m}^{2}+M^{2}\right) .
\end{equation}%
For the diagonal solution $M=0,$ $\varepsilon =\widetilde{m}$ and the first
equation takes a form:
\begin{equation}
-\widetilde{m}^{-2}+a_{T}+4\widetilde{m}-5q\widetilde{m}^{3}=0.
\end{equation}%
The non-diagonal solution in the present case is more complicated, but the condition determining the transition line between the two (equivalently, the appearance of the nonvanishing EA order parameter) is still very simple: $\varepsilon =q^{-1/5}$, since $M=0$ on the line. Along the line the scaled temperature is:%
\begin{equation}
a_{T}^{d}=2(3q^{2/5}-2q^{-1/5}). \label{lowerl}
\end{equation}%
It is still a third order transition line, similar to the $\left\vert \psi \right\vert ^{2}$ disorder case, and one has zero modes in the NEA sector, while no such modes exist in the $M=0$ phase. The stability analysis of the replica symmetric configurations, however, gives completely different results compared to the $\left\vert \psi \right\vert ^{2}$ disorder case.
\subsubsection{Stability region of the RS solutions.}
The Hessian now has several additional terms
\begin{eqnarray}
H_{(ab)(cd)} &\equiv &\frac{n}{2}\frac{\delta ^{2}f}{\delta m_{ab}\delta
m_{cd}}=\left( m^{-2}\right) _{ac}\left( m^{-1}\right) _{db}+\left(
m^{-1}\right) _{ac}\left( m^{-2}\right) _{db}+4\delta _{abcd} \\
&&-q\left(
\begin{array}{c}
\frac{3}{2}\delta _{ac}\delta _{bd}Q_{ab}^{2}+\frac{1}{2}\left( \delta
_{ac}\delta _{bd}Q_{ba}^{2}+2\delta _{ad}\delta _{bc}Q_{ab}Q_{ba}\right) \\
2(\delta _{ac}\delta _{bd}Q_{aa}Q_{bb}+\delta _{bcd}Q_{ab}Q_{aa}+\delta
_{acd}Q_{ab}Q_{bb}) \\
+\delta _{ab}\left[ \delta _{cd}\left( Q_{ac}^{2}+Q_{ca}^{2}\right) +2\left(
\delta _{ac}Q_{ad}Q_{dd}+\delta _{ad}Q_{ca}Q_{cc}\right) \right]%
\end{array}%
\right) \notag
\end{eqnarray}
For replica symmetric solutions $Q_{ab}=m_{ab}=\varepsilon \delta _{ab}+M$
the Hessian can be represented as
\begin{equation}
H=c_{+}I_{+}+c_{-}I_{-}+c_{U}U+c_{V}V+c_{J}J+c_{K}K+c_{N}N,
\label{operators}
\end{equation}%
where new operators $I_{\pm }$, $K,N$ are defined as
\begin{equation}
I_{\pm }\equiv \frac{1}{2}\left( \delta _{ac}\delta _{bd}\pm \delta
_{ad}\delta _{bc}\right) ;\text{ }K\equiv \delta _{ab}\delta _{cd};\text{ \
\ }N=\delta _{abc}+\delta _{abd}+\delta _{acd}+\delta _{bcd}.
\end{equation}
The coefficients are
\begin{eqnarray}
c_{+} &=&2\varepsilon ^{-3}-q\left( 3M^{2}+2\widetilde{m}^{2}\right) ; \\
c_{-} &=&2\varepsilon ^{-3}-q\left( M^{2}+2\widetilde{m}^{2}\right) ; \notag
\\
c_{U} &=&4M^{2}\varepsilon ^{-5};c_{V}=-3M\varepsilon ^{-4};\text{ } \notag
\\
\text{\ }c_{J} &=&4-q\left[ 5\left( \widetilde{m}^{2}-M^{2}\right) +8%
\widetilde{m}\left( \widetilde{m}-M\right) \right] ; \notag \\
c_{K} &=&-2qM^{2};\text{ \ }c_{N}=-2q\widetilde{m}M. \notag
\end{eqnarray}%
Generally the Hessian has four different eigenvalues: \cite{Alemeida}
\begin{equation}
\lambda ^{(1,2)}=c_{+}+\frac{1}{2}\left( c_{J}+4c_{N}\pm \sqrt{c_{J}\left(
c_{J}+8c_{V}+8c_{N}\right) }\right) ;\text{ \ }\lambda ^{(3)}=c_{+},\lambda
^{(4)}=c_{-}
\end{equation}%
Note that new matrices like $I_{+},I_{-}$ appear when $q\neq 0$. In the case $q=0$, $c_{+}=c_{-}$, so that only the operator $I=I_{+}+I_{-}$ appears. There is also $\lambda ^{(4)}$, the eigenvalue of the Hessian on the antisymmetric matrices. However, $\lambda ^{(4)}=c_{-}\geq 0$ always holds on the RS solutions, so it can be ignored in determining the instabilities of those solutions. Since the stability analysis is quite complicated, we divide it into several stages of increasing complexity.
\subsubsection{\protect\bigskip Stability of the states on the diagonal -
off diagonal "transition" line}
The easiest way to see that the RS solutions can be unstable is to look
first at the transition line $a_{T}^{d}$, eq.(\ref{lowerl}). On the
transition line one has
\begin{equation}
c_{\pm }=c_{U}=c_{V}=c_{N}=0;\text{ \ \ }c_{J}=4-13q^{3/5};
\end{equation}%
and the eigenvalues simplify to%
\begin{equation}
\lambda ^{(3)}=0;\text{ \ \ \ }\lambda ^{(1,2)}=4-13q^{3/5}.
\end{equation}%
Therefore it is unstable for $q>q^{t}$
\begin{equation}
q^{t}=\left( \frac{4}{13}\right) ^{5/3},
\end{equation}%
marginally stable at a single point
\begin{equation}
a_{T}^{t}=-\frac{28}{13}\left( \frac{13}{4}\right) ^{1/3}\approx -3.2
\end{equation}%
and stable for $q<q^{t}$. We studied the stability numerically on both sides of this line, see Fig.~2. The diagonal (liquid) solution is stable below the line $a_{T}^{d}=2(3q^{2/5}-2q^{-1/5})$ for $q>q^{t}$. When $q>q^{t}$ the phase transition line (liquid to glass) changes to a different line, which is discussed in the next subsection.
\subsubsection{\protect\bigskip Stability of the diagonal solution}
The equation for $\widetilde{m}$, the coefficients of the Hessian, and the eigenvalues are:
\begin{equation}
-\widetilde{m}^{-2}+a_{T}+4\widetilde{m}-5q\widetilde{m}^{3}=0;
\end{equation}
\begin{eqnarray}
c_{+} &=&2\widetilde{m}^{-3}-2q\widetilde{m}^{2};\text{ \ \ }c_{J}=4-13q%
\widetilde{m}^{2}; \\
\lambda ^{(1,3)} &=&c_{+};\text{ \ \ }\lambda ^{(2)}=c_{+}+c_{J}=4+2%
\widetilde{m}^{-3}-15q\widetilde{m}^{2}. \notag
\end{eqnarray}%
While $\lambda ^{(1)}$ is positive, $\lambda ^{(2)}$ is positive only below the line defined parametrically via%
\begin{equation}
a_{T}=\frac{5-8\widetilde{m}^{3}}{3\widetilde{m}^{2}};\text{ \ \ \ }q=\frac{4\widetilde{m}^{3}+2}{15\widetilde{m}^{5}}, \label{upperl}
\end{equation}%
marked by the dotted line in Fig.~2; this line is the phase transition line (liquid to glass) when $q>q^{t}$. The former diagonal - off-diagonal line above the tricritical point is not a phase transition and is left as a light dashed line to show that the slope of the line below the tricritical point differs from that of the real transition line. It turns out that the line of eq.(\ref{upperl}) is a transition line into an RSB state, namely the irreversibility line.
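A short numerical check (for illustration only) confirms that the parametric line of eq.(\ref{upperl}) passes through the point $(q^{t},a_{T}^{t})$, i.e. that it joins the line of eq.(\ref{lowerl}) exactly at the tricritical point:
\begin{verbatim}
def upper_line(m):
    # Parametric line of eq. (upperl): (q, a_T) as functions of m-tilde.
    q = (4.0 * m ** 3 + 2.0) / (15.0 * m ** 5)
    a_T = (5.0 - 8.0 * m ** 3) / (3.0 * m ** 2)
    return q, a_T

q_t = (4.0 / 13.0) ** (5.0 / 3.0)
m_t = (13.0 / 4.0) ** (1.0 / 3.0)          # m-tilde at the tricritical point
print(upper_line(m_t))                     # reproduces (q_t, a_T^t ~ -3.2)
print(q_t, 2.0 * (3.0 * q_t ** 0.4 - 2.0 * q_t ** -0.2))
\end{verbatim}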
\subsubsection{Stability of the off diagonal solution}
The equations take the form:%
\begin{equation}
-\varepsilon ^{-2}+a_{T}+4\widetilde{m}-q\left[ 5\widetilde{m}%
^{3}-M^{3}-2M^{2}\widetilde{m}-2\widetilde{m}^{2}M\right] =0
\end{equation}
\begin{equation}
2\varepsilon ^{-3}-q\left( M^{2}+2\widetilde{m}^{2}\right) =0.
\end{equation}%
The coefficients in the expansion of the Hessian are as given above. Unlike
the case of the $\left\vert \psi \right\vert ^{2}$ disorder, $\lambda ^{(1)}<0$
for every such solution. Therefore the diagonal state goes over directly into
an RSB glass state. It does so, however, along two lines: eq.(\ref{upperl})
above the tricritical point and eq.(\ref{lowerl}) below it, see Fig.~2.
\section{RSB in the general case}
\subsection{General hierarchical Gaussian variational Ansatz\protect\bigskip}
The free energy
\begin{equation}
\frac{f}{2n}=\sum_{a}\left\{ \left( m^{-1}\right) _{aa}+a_{T}m_{aa}+2\left(
m_{aa}\right) ^{2}\right\} -\sum_{a,b}\left\{ 2rm_{ab}^{2}-\frac{q}{4}%
(m_{ab}^{4}+4m_{aa}m_{bb}m_{ab}^{2})\right\}
\end{equation}%
leads in the replica symmetric sector to the following equations:%
\begin{equation}
-\varepsilon ^{-2}+a_{T}+4\widetilde{m}-2r\varepsilon -q\left[ 5\widetilde{m}%
^{3}-M^{3}-2M^{2}\widetilde{m}-2\widetilde{m}^{2}M\right] =0,
\end{equation}
\begin{equation*}
M\left[ 2\varepsilon ^{-3}-2r-q\left( M^{2}+2\widetilde{m}^{2}\right) \right]
=0.
\end{equation*}%
For the diagonal solution $M=0$ one obtains%
\begin{equation}
-\widetilde{m}^{-2}+a_{T}+4\widetilde{m}-2r\widetilde{m}-5q\widetilde{m}%
^{3}=0. \label{qrdiageq}
\end{equation}%
The off diagonal solution on the bifurcation line obeys%
\begin{equation}
\widetilde{m}^{-3}-r-q\widetilde{m}^{2}=0. \label{qroffdiageq}
\end{equation}
The Hessian for the general RS solution takes the form of eq.(\ref{operators})
with coefficients
\begin{eqnarray}
c_{+} &=&2\varepsilon ^{-3}-2r-q\left( 3M^{2}+2\widetilde{m}^{2}\right) ;%
\text{ \ }c_{-}=2\varepsilon ^{-3}-2r-q\left( M^{2}+2\widetilde{m}%
^{2}\right) ; \\
c_{U} &=&4M^{2}\varepsilon ^{-5};\text{ \ \ \ }c_{V}=-3M\varepsilon ^{-4};%
\text{ \ \ }c_{K}=-2qM^{2}; \notag \\
c_{N} &=&-2q\widetilde{m}M;\text{ \ \ \ \ }c_{J}=4-q\left[ 5\left(
\widetilde{m}^{2}-M^{2}\right) +8\widetilde{m}\left( \widetilde{m}-M\right) %
\right] . \notag
\end{eqnarray}%
On the bifurcation line this simplifies to:%
\begin{equation}
c_{\pm }=2\widetilde{m}^{-3}-2r-2q\widetilde{m}%
^{2};c_{U}=c_{V}=c_{K}=c_{N}=0;c_{J}=4-13q\widetilde{m}^{2}.
\end{equation}%
The eigenvalues are
\begin{equation}
\lambda ^{\left( 1,2\right) }=c_{+};\text{ \ \ \ \ }\lambda
^{(3)}=c_{+}+c_{J}.
\end{equation}%
Therefore the Hessian vanishes, $\lambda ^{\left( 1,2\right) }=\lambda
^{(3)}=0$, at the tricritical (branch) point defined by
\begin{equation}
r=\left( \frac{13q}{4}\right) ^{3/2}-\frac{4}{13}. \label{tricritical}
\end{equation}
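This relation follows from imposing $c_{+}=0$ and $c_{J}=0$ simultaneously: the second condition gives $q\widetilde{m}^{2}=4/13$, and inserting it into the first, $\widetilde{m}^{-3}=r+q\widetilde{m}^{2}$, yields
\begin{equation*}
\widetilde{m}^{-3}=r+\frac{4}{13},\text{ \ \ \ }q^{t}(r)=\frac{4}{13}\left(
r+\frac{4}{13}\right) ^{2/3},
\end{equation*}%
which reduces to $q^{t}=\left( 4/13\right) ^{5/3}$ at $r=0$, in agreement with the result of the previous section.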
\subparagraph{Stability of the diagonal solution}
\bigskip The Hessian coefficients and the eigenvalues in this case are:
\begin{eqnarray}
c_{_{\pm }} &=&2\widetilde{m}^{-3}-2r-2q\widetilde{m}^{2};c_{J}=4-13q%
\widetilde{m}^{2}; \label{hessianrq} \\
\lambda ^{\left( 1,2\right) } &=&c_{\pm };\text{ \ \ \ }\lambda ^{\left(
3\right) }=c_{_{\pm }}+c_{J}=4+2\widetilde{m}^{-3}-2r-15q\widetilde{m}^{2}.
\notag
\end{eqnarray}%
Below the tricritical point we solve the equation $\lambda ^{(1)}=0$
perturbatively in $q$:%
\begin{equation}
\widetilde{m}=r^{-1/3}\left( 1-\frac{q}{3}r^{5/3}\right) +O(q^{2})
\end{equation}%
and substitute $\widetilde{m}$ into eq.(\ref{qrdiageq}) to determine the
"weak disorder" part of the glass transition line:%
\begin{equation}
a_{T}^{g1}=-\frac{4-r}{r^{1/3}}+\left( \frac{5}{r}+\frac{4r^{4/3}}{3}\right)
q+O(q^{2}).
\end{equation}%
Above the tricritical point, namely for larger disorder, one solves the
equation $\lambda ^{(3)}=0$ perturbatively in $q$ around the tricritical
point of eq.(\ref{tricritical}), $q^{t}(r)=\frac{4}{13}\left( r+\frac{4}{13}%
\right) ^{2/3}:$
\begin{equation}
\widetilde{m}=\left( r+\frac{4}{13}\right) ^{-1/3}\left( 1-\frac{10}{24+13r}%
\ \Delta \right) +O(\Delta ^{2});\text{ \ \ \ \ \ }\Delta =\frac{q}{q^{t}(r)}%
-1.
\end{equation}%
The NEA RS solution is unstable everywhere as $c_{+}<0$ ($c_{+}<c_{-}=0$).
We therefore obtain the glass transition line with RSB in the general case.
To compare it with experiment one has to specify phenomenologically the
precise dependence of the GL model parameters on temperature. The next section
is devoted to this.
\section{Comparison of the theoretical and the experimental phase diagrams}
\subsection{General picture\protect\bigskip\ and comparison with other
theories}
As was discussed in Introduction, the interplay between disorder and thermal
fluctuations makes the phase diagram of high $T_{c}$ superconductors very
complicated. As a result of the present investigation together with the
preceding one \cite{Lidisorder} regarding the order - disorder transition
the following qualitative picture of the $T-H$ phase diagram of a 3D
superconductor at temperatures not very far from $H_{c2}(T)$ (so that the
LLL approximation is valid) emerges, see schematic diagram Fig. 3. There are
two independent transition lines.
\bigskip \textit{1. The positional order-disorder line.}
The unified first order universal order - disorder line comprising
the melting and the second peak sections separates the homogeneous
and the crystalline (Bragg) phases. The transition is therefore
defined by the translation and rotation symmetry only, and the
intensity of the first Bragg peak can be taken as order parameter.
The broken symmetry is not directly related to pinning; however,
the location of the line is sensitive to disorder. One
sees in Fig.~3 that as the disorder strength $n$ increases the
melting line (solid line) curves down at a lower point, merging with
the second peak segment. The effect of disorder is quite minor in
the high temperature region, in which the thermal fluctuations
dominate, but becomes dominant at low temperatures. The line makes
a wiggle near the experimentally claimed critical point. The
\textquotedblleft critical point\textquotedblright\ is
reinterpreted \cite{Lidisorder} as a (noncritical) Kauzmann point
in which the latent heat vanishes and the line is parallel to the
$T$ axis in the low temperature region. The surprising
\textquotedblleft wiggle\textquotedblright , which appears in 3D
only, has actually been observed in some experiments
\cite{Grover}. It is located precisely in the region in which the
thermal fluctuations and the disorder compete. One might expect
that the line simply has a maximum as in $BSCCO$, but the
situation might be more complex. Thermal fluctuations, on the one
hand, make the disorder less effective, which favors the solid; on
the other hand, they themselves melt the solid. The theoretical magnetization, the
entropy and the specific heat discontinuities at the melting line
\cite{Lidisorder} compare well with experiments
\cite{Bouquet,Nishizaki}. The low temperature segment of the
disorder dominated second peak line is weakly temperature
dependent. Its field strongly decreases as the disorder strength
increases.
\bigskip \textit{2. The glass transition line.}
The second line is the glass transition line, the dashed line in Fig.~3. We
assume very small $q$ and draw in Fig.~3 the glass lines for three different
values of $n$. One observes, as expected, that as the disorder strength
increases the line moves towards higher temperatures. We have not
calculated the glass line in the crystalline state, but anticipate that it
depends little on the crystalline order. This is consistent with
observations made in Ref. \onlinecite{Herbut}, in which it was noticed (in a
somewhat different context of layered materials and columnar defects) that
lateral modulation makes a very small difference to the glass line, although
it is obviously very important for the location of the order - disorder
line. So we simply continue the glass line in the homogeneous phase into the
crystalline side (still marking it by dotted lines). If the glass lines on
the liquid side and the solid side join into a single glass line, then the glass
line must cross the ODT line right at the Kauzmann point. The theoretically
calculated intersection point in Fig.~3, marked by a black blob, does not appear
at the Kauzmann point. We attribute this to the different approximations made
to draw the ODT and GT lines. That the glass line crosses the ODT line right
at the Kauzmann point is indeed supported by some evidence from an
experiment in $BSCCO$ \cite{Zeldov98}, though whether the two lines cross exactly at
the Kauzmann point is still an open question both experimentally and
theoretically.
Consequently there are four different phases, see Fig. 4: pinned solid (=
Bragg glass), pinned liquid (=vortex glass), unpinned solid (or simply
solid) and unpinned liquid (or simply liquid). The four phases of the vortex
matter are also expected to be present in the case of layered quasi 2D
superconductors, and it was shown for $BSCCO$ in \onlinecite{Zeldov98} that
the conductivity and the magnetization both indicate the glass line crossing
the ODT line near the maximum (Kauzmann point). There is evidence of the
crossing of the ODT and GT lines also in $LaSCCO$ \cite{Forgan} and $YBCO$
\cite{Nishizaki,Taylor}, but there the line seems to lie very close to the melting
line (the high temperature segment of the ODT line). In Ref. %
\onlinecite{Taylor} great care was taken to distinguish the two lines, and
it was found that at low fields the GT line crosses the melting line again, so
that the lower field segment of the melting line again separates two
glassy or pinned states.
Now we compare the results on the phase diagram with other theories. The
only known theory providing both the ODT and the glass lines is the density
functional model. The picture advocated in Ref. \onlinecite{Menon02} on the
basis of the replica density functional calculation in the framework of
thermodynamics of pinned line objects is qualitatively different from the
present one. In this theory the glass line does not intersect the ODT line
and therefore there is no unpinned solid phase. The comparison of the theory
studied in this paper with the replica density functional theory is
complicated by the fact that the applicability ranges of the two theories are
different. In addition there are other phenomenological approaches, most
notably that of ref. \cite{Mikitik}, based on the Lindemann criterion to map
the ODT line.
\subsection{Application to optimally doped YBCO\protect\bigskip}
We plot the ODT and the glass transition line in Fig. 5 along with the
experimental data on optimally doped $YBCO$ of Ref. \onlinecite{Nishizaki}.
The theoretical lines use the fitting parameters of ref. %
\onlinecite{Lidisorder}, which fitted the ODT line, and assume the $\left\vert
\psi \right\vert ^{4}$ disorder strength $q$ is small. We believe the value
of this parameter should be measured directly using a replica symmetry
breaking dynamical phenomenon rather than fitted in thermodynamics, though
it might be possible to adjust $q$ so that the curve fits the glass
transition line better than in the case of $q=0$. As was argued in %
\onlinecite{Lidisorder} (and found to be consistent with experiments on the
ODT line and the magnetization, entropy and specific heat discontinuities at
the melting line), the general dependence of the disorder strength on
temperature near $T_{c}$ is $r(t)=n(1-t)^{2}/t$ with $n=0.12$. The formula
interpolates the one used at lower temperatures in ref. \onlinecite{Blatter}
with our dimensionless pinning parameter $n$ (proportional to the pinning
centers density) related to the \textquotedblleft pinning
strength\textquotedblright\ of Blatter \textit{et al.} \cite{Blatter} by $n=\gamma
\gamma _{T}^{0}/\pi Gi^{1/2}\xi ^{3}$. Note that experimentally the glass
transition line at lower fields is not measured precisely. Different
experiments locate it at various places using different criteria. As
mentioned above experimentally the GT line often crosses the melting line
again at very low fields. This question perhaps cannot be addressed within
the lowest Landau level approximation, valid at fields above $1kG$. The ODT
line in the melting region is very well established experimentally by a great
variety of techniques. However, the \textquotedblleft second peak\textquotedblright\ segment is only poorly
determined due to the difficulty of defining it in the essentially dynamic
analysis of magnetization loops. Recently developed muon spin rotation and
neutron scattering methods might be very helpful in that respect.
\bigskip
\section{\protect\bigskip Summary\protect\bigskip}
To summarize we considered the effects of both thermal fluctuations and
disorder in the framework of the GL approach using the replica formalism.
The flux line lattice in type II superconductors undergoes a transition
into three \textquotedblleft disordered\textquotedblright\ phases: vortex
liquid (not pinned), homogeneous vortex glass (the pinned liquid or the
vortex glass) and the Bragg glass (pinned solid) due to both thermal
fluctuations and random quenched disorder. We show that the disordered
Ginzburg -- Landau model (valid not very far from $H_{c2}$) in which only
the coefficient of a term quadratic in order parameter $\psi $ is random,
first considered by Dorsey, Fisher and Huang, leads to a state with nonzero
Edwards - Anderson order parameter, but this state is still replica
symmetric. Namely, there is no ergodicity breaking, and no multiple time
scales are expected in the dynamics. However, when the coefficient of the
quartic term in $\psi $ in the\ GL free energy also has a random component,
replica symmetry breaking effects appear (with ergodicity breaking). The
location of the glass transition line in 3D materials is determined and
compared to experiments. The line is clearly different from both the melting
line and the second peak line describing the translational and rotational
symmetry breaking at high and low temperatures respectively. The phase
diagram is therefore separated by two lines into four phases mentioned
above. In principle we could obtain the RSB solution near the phase
transition line by expanding the equations around the phase transition line
as in the spin glass theory, see, for example ref. \onlinecite{Parisi}, and
we found that the RSB is continuous. Thus RSB states involve multiple time
scales in relaxation phenomena.
It is natural to expect and is confirmed that the glass (irreversibility)
line crosses the \textquotedblleft order - disorder\textquotedblright\ line
not very far from its Kauzmann point. We are not sure whether the crossing is
right at the Kauzmann point. If the two GT lines (on the liquid side and the
solid side) join, the crossing must be right at the Kauzmann point. We believe
(speculate) that the glass line should cross the \textquotedblleft order -
disorder\textquotedblright\ line right at the Kauzmann point if the
experiments can be done accurately, and that the theory will confirm it if the
model can be solved exactly. It would be of great interest to solve a tractable toy
model to test this idea. The Kauzmann point is the point at which the
magnetization and entropy differences between the solid and liquid phases
change sign. In this region the positive slope, disorder dominated second
peak segment joins the thermal fluctuation dominated, negative slope melting
segment. This is the region in which the effects of the disorder and of the
thermal fluctuations are roughly of the same strength.
The replica symmetry breaking solution found here can be used to
calculate the detailed properties inside the glass state. This
however requires a generalization of the theory to include dynamics,
since most irreversible phenomena are time dependent. In
particular it would be interesting to estimate the time scales
associated with quenched disorder. This is left for a future work.
Also we have considered only the three dimensional GL model here.
It can be applied to superconductors with rather small anisotropy.
It would be interesting to generalize the calculation to the
Lawrence - Doniach model and to the two dimensional case
describing thermal fluctuations and disorder in more anisotropic
layered superconductors and thin films.
\acknowledgments We are grateful to E.H. Brandt, B. Shapiro, A. Grover, E.
Andrei, F. Lin, J.-Y. Lin and G. Bel for numerous discussions, T. Nishizaki,
E. Zeldov, Y. Yeshurun and L. Paulus for providing details of their
experiments, sometimes prior to publication. The work of BR was supported by NSC of Taiwan grant
NSC\#93-2112-M-009-023, and the work of DL was supported by the Ministry of
Science and Technology of China (G1999064602) and the National Nature Science
Foundation (\#10274030). B.R. is very grateful to the Weizmann Institute of
Science for its warm hospitality during a sabbatical leave.
\section{Introduction}
\label{intro}
Gaussian processes (GPs) \citep{rasmussen2006} provide a flexible framework for modelling unknown functions: they are robust to overfitting, offer good predictive uncertainty estimates and allow us to incorporate prior assumptions into the model. Given a dataset with some inputs $\mat{X} \in \Reals^{N \times d}$ and outputs $\vy \in \Reals^N$, a GP regression model assumes $y_i = f(\vx_i)+\varepsilon_i$ where $f$ is a GP over $\Reals^d$ and where the $\varepsilon_i$ are normal random variables accounting for observation noise. The model predictions at $\vx \in \Reals^d$ are then given by the posterior distribution $f \given \vy$. However, computing the posterior distribution usually scales as $\BigO(N^3)$, because it requires solving a system involving an $N \times N$ matrix.
A range of sparse GP methods (see \citet{quinonero2005unifying} for an overview) have been developed to improve on this $\BigO(N^3)$ scaling. Among them, variational inference is a practical approach allowing regression \citep{titsias2009}, classification \citep{hensman2015scalable}, minibatching \citep{hensman2013} and structured models including latent variables, time-series and depth \citep{titsias2010bayesian, frigola2014variational, hensman2014nested, salimbeni2017doubly}. Variational inference in GPs works by approximating the exact (but intractable) posterior $f \given \vy$. This approximate posterior process is constructed from a conditional distribution based on $M$ pseudo observations of $u_i = f(\vz_i)$ at locations $\mat{Z} = \{\vz_i\}_{i=1}^M$. The approximation is optimised by minimising the Kullback-Leibler divergence between the approximate posterior and the exact one.
The resulting complexity is $\BigO(M^3 + M^2N)$, so choosing $M \ll N$ enables significant speedups compared to vanilla GP models.
However, when using common stationary kernels such as Mat\'ern, Squared Exponential (SE) or rational quadratic, the influence of pseudo-observations is only local and limited to the neighbourhoods of the inducing points $\mat{Z}$, so a large number of inducing points $M$ may be required to cover the input space. This is especially problematic for higher dimensional data as a result of the curse of dimensionality. %
\begin{figure}[t!]
\centering
\includegraphics[trim=2.2cm 1.8cm 1.1cm 2.3cm, clip, width=6cm]{figures/mapping.pdf}
\caption{Illustration of the mapping between a 2D dataset (grey dots) embedded into a 3D space and its projection (orange dots) onto the unit half-circle using a linear mapping.}
\vspace{-.2cm}
\label{fig:mapping}
\end{figure}
Variational Fourier Features~\citep[VFF]{hensman2017variational} inducing variables have been proposed to overcome this limitation. In VFF the inducing variables based on pseudo observations are replaced with inducing variables obtained by projecting the GP onto the Fourier basis. This results in inducing variables that have global influence on the predictions.
For one-dimensional inputs, this construction leads to covariance matrices between inducing variables that are almost diagonal, and this can be exploited to reduce the complexity to $\BigO(N + M^2 N)$.
However, for $d$-dimensional input spaces VFF requires the construction of a new basis given by the outer product of one-dimensional Fourier bases. This implies that the number of inducing variables grows exponentially with the dimensionality, which limits the use of VFF to just one or two dimensions. Furthermore, VFF is restricted to kernels of the Mat\'ern family.
In this work we improve upon VFF in multiple directions. Rather than using a sine and cosine basis, we use a basis of spherical harmonics to define a new interdomain sparse approximation. As we will show, spherical harmonics are the eigenfunctions of stationary kernels on the hypersphere, which allows us to exploit the Mercer representation of the kernel for defining the inducing variables. In arbitrary dimensions, our method leads to \emph{diagonal} covariance matrices, which makes it faster than VFF as we fully bypass the need to compute expensive matrix inverses. Compared to both sparse GPs and VFF, our approximation scheme suffers less from the curse of dimensionality. As in VFF, each spherical harmonic inducing function has a global influence, and there is a natural ordering of the spherical harmonics that guarantees that the best features are picked given an overall budget $M$ for the number of inducing variables. Moreover, our method works for any stationary kernel on the sphere.
Following the illustration in \cref{fig:mapping}, we outline the different steps of our method. We start by concatenating the data with a constant input (bias) and project it linearly onto the unit hypersphere $\ensuremath{\sphere^{d-1}}$. We then learn a sparse GP on the sphere based on the projected data. We can do this extremely efficiently by making use of our spherical harmonic inducing variables, shown in \cref{fig:harmonics}. Finally, the linear mapping between the sphere and the sub-space containing the data can then be used to map the predictions of the GP on the sphere back to the original $(\vx, y)$ space.
This paper is organised as follows. In \cref{sec:background} we give the necessary background on sparse GPs and VFF.
In \cref{sec:vish} we highlight that every stage of the proposed method can be elegantly justified. For example, the linear mapping between the data and the sphere is a property of some specific covariance functions such as the arc-cosine kernel, from which we can expect good generalisation properties (\cref{sec:mapping}). Another example is that the spherical harmonics are coupled with the structure of the Reproducing Kernel Hilbert Space (RKHS) norm for stationary kernels on the sphere, which makes them very natural candidates for basis functions for sparse GPs (\cref{sec:rkhs}). In \cref{sec:definition-sh-inducing-variables} we define our spherical harmonic inducing variables and compute the necessary components to do efficient GP modelling. \Cref{sec:experiments} is dedicated to the experimental evaluation.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/harmonics.png}
\vspace{-.5cm}
\caption{The first four levels of spherical harmonic functions in $\Reals^3$. The domain of the spherical harmonics is the surface of the unit sphere $\ensuremath{\mathbb{S}}^2$. The function value is given by the color and radius.}
\label{fig:harmonics}
\end{figure}
\section{Background}
\label{sec:background}
\subsection{GPs and Sparse Variational Inference}
GPs are stochastic processes such that the distribution of any finite dimensional marginal is multivariate normal. The distribution of a GP is fully determined by its mean $\mu(\cdot)$ and covariance function (kernel) $k(\cdot, \cdot)$. GP models typically consist of combining a latent (unobserved) GP $f\sim \GP(0, k(\cdot, \cdot))$ with a likelihood that factorises over observation points $p(\vy \given f) = \prod_n p(y_n \given f(\vx_n))$. When the observation model is $y_n \given f(\vx_n) \sim \NormDist{f(\vx_n), \tau^2}$, the posterior distributions of $f$ and $y$ given some data are still Gaussian and can be computed analytically. Let $\varepsilon_i \sim \NormDist{0, \tau^2}$ represent the observation noise at $\vx_i$, then the GP posterior distribution is $f \given \vy \sim \GP(m, v)$ with
\begin{align*}
m(\vx) &= \vec{k}_{\vf}(\vx) (\MK_{\vf \vf} + \tau^2 I)\inv \vy,\\
v(\vx, \vx') &= k(\vx, \vx') - \vec{k}_{\vf}(\vx)\transpose (\MK_{\vf \vf} + \tau^2 I)\inv \vec{k}_{\vf} (\vx'),
\end{align*}
where $\MK_{\vf \vf} =\left[ k(\vx_i, \vx_j)\right]_{i,j = 1}^N$ and $\vec{k}_{\vf}(\vx) = [k(\vx_n, \vx)]_{n=1}^N$.
Computing this exact posterior requires inverting the $N \times N$ matrix $\MK_{\vf \vf}$, which has a $\BigO(N^3)$ computational complexity and a $\BigO(N^2)$ memory footprint. Given a typical current hardware specification, this limits the dataset size to the order of a few thousand observations. Furthermore, there is no known analytical expression for the posterior distribution when the likelihood is not conjugate, as encountered in classification for instance.
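As a concrete illustration of this cost, the following minimal NumPy sketch (the \texttt{kernel} callable is a placeholder, not part of our method) computes the exact posterior mean and covariance; the Cholesky factorisation of the $N \times N$ matrix is the $\BigO(N^3)$ step.
\begin{verbatim}
import numpy as np

def gp_posterior(X, y, Xs, kernel, noise_var):
    # Building and factorising the N x N matrix is the O(N^3) bottleneck.
    Kff = kernel(X, X) + noise_var * np.eye(len(X))
    Kfs = kernel(X, Xs)                       # N x Ns cross-covariances
    L = np.linalg.cholesky(Kff)               # O(N^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    A = np.linalg.solve(L, Kfs)
    mean = Kfs.T @ alpha                      # m(x)
    cov = kernel(Xs, Xs) - A.T @ A            # v(x, x')
    return mean, cov
\end{verbatim}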
Sparse GPs combined with variational inference provide an elegant way to address these two shortcomings \citep{titsias2009, hensman2013, hensman2015scalable}.
It consists of introducing a distribution $q(f)$ that depends on some parameters, and finding the values of these parameters such that $q(f)$ gives the best possible approximation of the exact posterior $p(f \given \vy )$. Sparse GPs introduce $M$ pseudo inputs $\mat{Z} \in \Reals^{M \times d}$ corresponding to $M$ inducing variables $\vu \sim \NormDist{\vm, \mat{S}}$, and choose to write the approximating distribution as $q(f) = q(\vu)p(f\given f(\mat{Z}) \shorteq \vu)$. This results in a distribution $q(f)$ that is parametrised by the variational parameters $\vm \in \Reals^M$, and $\mat{S} \in \Reals^{M\times M}$, which are learned by minimising the Kullback–Leibler (KL) divergence $\KL{q(f)}{p(f \given \vy )}$.
At prediction time, the conjugacy between $q(\vu)$ and the conditioned posterior $f \given f(\mat{Z}) \shorteq \vu$ implies that $q(f)$, where $\vu$ is marginalised out, is a GP with a mean $\mu(\vx)$ and a covariance function $ \nu(\vx, \vx')$ that can be computed in closed form:
\begin{align}
\mu(\vx) &= \vec{k}_\vu(\vx) \MK_{\vu \vu}^{-1} \vm \label{eq:qf}
\\
\nu(\vx, \vx') &= k(\vx, \vx') + \vec{k}_\vu(\vx)\transpose \MK_{\vu \vu}^{-1}(\mat{S} - \MK_{\vu \vu}) \MK_{\vu \vu}^{-1} \vec{k}_\vu(\vx'), \nonumber
\end{align}
where $\left[ \MK_{\vu \vu} \right]_{m,m'} = \ExpSymb[f(\vz_m)\, f(\vz_{m'})]$, and $\left[ \vec{k}_\vu(\vx) \right]_{m} = \ExpSymb[f(\vz_m)\,f(\vx)]$.
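To make these quantities concrete, here is a minimal NumPy sketch of the predictive equations in \cref{eq:qf}; the kernel matrices and the variational parameters \texttt{m}, \texttt{S} are assumed to be given, and this is only an illustration, not a full SVGP implementation.
\begin{verbatim}
import numpy as np

def svgp_predict(Kuu, Kux, kxx_diag, m, S):
    # Kuu: M x M, Kux: M x Ns, kxx_diag: Ns prior variances at test points.
    L = np.linalg.cholesky(Kuu)                          # O(M^3), done once
    A = np.linalg.solve(L.T, np.linalg.solve(L, Kux))    # Kuu^{-1} k_u(x)
    mean = A.T @ m                                       # mu(x)
    var = kxx_diag + np.einsum("mn,mk,kn->n", A, S - Kuu, A)  # nu(x, x)
    return mean, var
\end{verbatim}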
Sparse GPs result in $\BigO(M^2 N + M^3)$ and $\BigO(M^3)$ computational complexity at training (minimising the KL) and prediction respectively. Picking $M \ll N$ can thus result in a drastic improvement, but the lower $M$ is, the less accurate the approximation, as recently shown by \citet{shi2019sparse}. Typical kernels---such as Mat\'ern or Squared Exponential (SE)---depend on a lengthscale parameter that controls how quickly the correlation between two evaluations of $f$ drops for two inputs that move away from one another. For short lengthscales, this correlation drops quickly and two observations can be almost independent even for two input points that are close by in the input space. For a sparse GP model, this implies that $\mu(\vx)$ and $\nu(\vx, \vx)$ will rapidly revert to the prior mean and variance when $\vx$ is not in the immediate neighbourhood of an inducing point $\vz_i$. A similar effect can be observed when the input space is high-dimensional: because inducing variables only have a local influence, the number of inducing points required to cover the space grows exponentially with the input space dimensionality. In practice, it may thus be required to pick large values for $M$ to obtain accurate approximate posteriors, but this defeats the original intent of sparse methods.
This behaviour, where the features $\vec{k}_\vu (\cdot)$ of the approximate distribution are given by the kernel functions $k(\vz_i, \cdot)$, can be addressed using interdomain inducing variables.
\subsection{Inter-domain GPs and Variational Fourier Features}
Inter-domain Gaussian processes (see \citet{lazaro2009inter} and \citet{GPflow2020multioutput} for a modern exposition) use alternative forms of inducing variables such that the resulting sparse GP models result in more informative features.
In interdomain GPs the inducing variables are obtained by integrating the GP $f$ with an inducing function:
\begin{equation*}
u_m = \int f(\vx)\,g_m(\vx)\,\calcd{\vx}\,.
\end{equation*}
This redefinition of $\vu$ implies that the expressions of $\MK_{\vu \vu}$ and $\vec{k}_\vu$ change, but the inference scheme of interdomain GPs and the mathematical expressions for the posterior mean and variance are exactly the same as classic sparse GPs.
Depending on the choice of $g_m(\cdot)$, interdomain GPs can result in various feature vectors $\vec{k}_\vu(\cdot)$. These feature vectors can alleviate the classic sparse GP limitation of inducing variables having only a local influence.%
VFF~\citep{hensman2017variational} is an interdomain method where the inducing variables are given by a Mat\'ern RKHS inner product between the GP and elements of the Fourier basis:
\begin{equation*}
u_m = \langle f, \psi_m \rangle_\ensuremath{{\mathcal{H}}},
\end{equation*}
where $\psi_0 = 1$, $\psi_{2i-1}=\cos(i x)$ and $\psi_{2i}=\sin(i x)$ if the input space is $[0, 2 \pi]$. This leads to
\begin{equation*}
\MK_{\vu \vu} = \left[\langle \psi_i , \psi_{j} \rangle_\ensuremath{{\mathcal{H}}}^{} \right]_{i, j = 0}^{M-1} \text{ and } \vec{k}_\vu (x) = \left[ \psi_i(x)\right]_{i = 0}^{M-1}\, .
\end{equation*}
This results in several advantages. First, the features $\vec{k}_\vu(x)$ are exactly the elements of the Fourier basis, which are independent of the kernel parameters and can be precomputed. Second, the matrix $\MK_{\vu \vu}$ is the sum of a diagonal matrix plus low rank matrices. This structure can be used to drastically reduce the computational complexity, and the experiments showed one or two orders of magnitude speed-ups compared to classic sparse GPs. Finally, the variance of the inducing variables typically decays quickly with increasing frequencies, which means that by selecting the first $M$ elements of the Fourier basis we pick the features that carry the most signal.
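As a small illustration (not the original VFF code), the one-dimensional feature vector $\vec{k}_\vu(x)$ is simply the truncated Fourier basis evaluated at $x$, independent of the kernel hyperparameters:
\begin{verbatim}
import numpy as np

def fourier_features(x, num_frequencies):
    # Feature vector [1, cos(x), sin(x), cos(2x), sin(2x), ...] on [0, 2*pi].
    feats = [np.ones_like(x)]
    for i in range(1, num_frequencies + 1):
        feats.append(np.cos(i * x))
        feats.append(np.sin(i * x))
    return np.stack(feats)    # shape (2 * num_frequencies + 1, N)
\end{verbatim}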
\begin{figure}[t!]
\centering
\vspace{.3cm}
\includegraphics[trim=0 1.2cm 0 1.25cm, height=3.5cm]{figures/fig_VFF_decay.pdf}
\caption{Illustration of the variance of the inducing variables when using VFF with a two dimensional input space. Each pair $(i, j)$ corresponds to an inducing function $\psi_i(x_1)\psi_j(x_2)$, and the area of each square is proportional to the variance of the associated inducing variable. For a given number of inducing variables, say $M=49$ as highlighted with the red dashed line, VFF selects features that do not carry signal (north-east quarter of the red dashed square) whereas it ignores features that are expected to carry signal (along the axes $i=0$ and $j=0$).}
\label{fig:variance_decay_VFF}
\end{figure}
The main flaw of VFF comes from the way it generalises to multidimensional input spaces. The approach in~\citet{hensman2017variational} for getting a set of $d$-dimensional inducing functions consists of taking the outer product of $d$ univariate bases and considering separable kernels, so that the elements of $\MK_{\vu \vu}$ are given by the product of the inner products in each dimension. For example, in dimension 2, a set of $M^2$ inducing functions is given by $\{(x_1, x_2) \mapsto \psi_i(x_1)\psi_j(x_2)\}_{0 \leq i, j \leq M-1}$, and entries of $\MK_{\vu \vu}$ are $\langle \psi_i\psi_j, \psi_k\psi_l \rangle_{\mathcal{H}_{k_1 k_2}^{}}^{} = \langle \psi_i, \psi_k \rangle_{\mathcal{H}_{k_1}^{}}^{} \langle \psi_j, \psi_l \rangle_{\mathcal{H}_{k_2}^{}}^{}$. This construction scales poorly with the dimension: for example, choosing a univariate basis as simple as $\{1, \cos, \sin\}$ for an eight-dimensional problem already results in more than 6,500 inducing functions. Additionally, this construction is very inefficient in terms of captured variance, as we illustrate in \Cref{fig:variance_decay_VFF} for a 2D input space. The figure shows that the prior variance associated with the inducing function $\psi_i(x_1)\psi_j(x_2)$ vanishes quickly when both $i$ and $j$ increase. This means that most of the inducing functions on which the variational posterior is built are irrelevant, whereas important ones such as $\psi_i(x_1)\psi_0(x_2)$ or $\psi_0(x_1)\psi_j(x_2)$ for $i, j \geq \sqrt{M}$ are ignored. Although we used a 2D example to illustrate this poor behaviour, it is important to bear in mind that the issue gets exacerbated for higher dimensional input spaces. As detailed in \cref{fig:variance_decay_SH} and discussed later, our proposed approach does not suffer from such behaviour.
\section{Variational Inference with Spherical Harmonics}
\label{sec:vish}
In this section we describe the three key ingredients of the proposed approach: the mapping of the dataset to the hypersphere, the definition of GPs on this sphere, and the use of spherical harmonics as inducing functions for such GPs.
\subsection{Linear Mapping to the Hypersphere}
\label{sec:mapping}
As illustrated in \cref{fig:mapping}, for a 1D dataset, the first step of our approach involves appending the dataset with a dummy input value $b$ that we will refer to as the \emph{bias} (grey dots in \cref{fig:mapping}). %
For convenience we will overload the notation, and refer to the augmented inputs $(\vx, b)$ as $\vx$ from now on, denote its dimension by $d$, and assume $x_d=b$. %
The next step is to project the augmented data onto the unit hypersphere $\ensuremath{\sphere^{d-1}}$ according to a linear mapping: $(\vx, y) \mapsto (\vx/\norm{\vx}, y/\norm{\vx})$ as depicted by the orange dots. Given the projected data, we learn a GP model on the sphere (orange function), and the model predictions can be mapped back to the hyperplane (blue curve) using the inverse linear mapping (i.e. following the fine grey lines starting from the origin).
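A minimal sketch of this augmentation and projection is given below; the bias value and the data arrays are placeholders, and the stored norms are what allow the sphere predictions to be mapped back to the original space.
\begin{verbatim}
import numpy as np

def project_to_sphere(X, y, bias):
    # Append the constant bias column, then rescale each row to unit norm.
    Xb = np.concatenate([X, np.full((len(X), 1), bias)], axis=1)
    norms = np.linalg.norm(Xb, axis=1)
    return Xb / norms[:, None], y / norms, norms

def map_back(f_sphere, norms):
    # Rescale predictions on the sphere to recover values on the
    # original hyperplane x_d = b (the inverse of the linear mapping).
    return f_sphere * norms
\end{verbatim}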
Although this setup may seem very arbitrary, it is inspired by the important works on the limits of neural networks as Gaussian processes, especially the \emph{arc-sine} kernel \citep{williams1998computation} and the \emph{arc-cosine} kernels \citep{cho2009kernel}. An arc-cosine kernel corresponds to the infinitely-wide limit of a single-layer ReLU-activated network with Gaussian weights. Let $\vx, \vx' \in \Reals^{d}$, such that $x_d = x'_d = b$, be augmented input vectors whose last entry corresponds to the bias. Then the arc-cosine kernel can be written as
\begin{multline*}
k_{ac}(\vx, \vx') = \frac{1}{\pi} \norm{\vx}\,\norm{\vx'}\,\big( \sin \theta + (\pi - \theta) \cos \theta \big), \\
\text{where}\quad\theta = \arccos\left(\frac{\vx\transpose\vx'}{\norm{\vx}\norm{\vx'}}\right).
\end{multline*}
$\theta$ is the geodesic distance (or the great-circle distance) between the projection of $\vx$ and $\vx'$ on the unit hypersphere.
We observe that $k_{ac}$ is separable in polar coordinates, and the dependence on the radius is just the linear kernel. Since the linear kernel is degenerate, this means that there is a one-to-one mapping between the GP samples restricted to the unit sphere $\ensuremath{\sphere^{d-1}}$ and the samples over $\Reals^d$, and that this mapping is exactly the linear transformation we introduced previously. %
One insight provided by the link between our settings and the arc-cosine kernel is that once mapped back to the original space, our GP samples will exhibit linear asymptotic behaviour, which can be compared to the way ReLU-activated neural networks operate. Although a proper demonstration would require further work, this correspondence suggests that our models may inherit the desirable generalisation properties of neural networks.
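For reference, a direct NumPy implementation of the arc-cosine kernel above (acting on augmented inputs whose last entry is the bias) could look as follows:
\begin{verbatim}
import numpy as np

def arc_cosine_kernel(x, x2):
    # First-order arc-cosine kernel of Cho & Saul (2009), for two vectors.
    nx, nx2 = np.linalg.norm(x), np.linalg.norm(x2)
    cos_theta = np.clip(x @ x2 / (nx * nx2), -1.0, 1.0)
    theta = np.arccos(cos_theta)   # geodesic distance on the unit sphere
    return nx * nx2 * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi
\end{verbatim}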
Having mapped the data to the sphere, we must now introduce GPs on the sphere and their covariance. The following section provides some theory that allows us to work with various kernels on the sphere, including the arc-cosine kernel as a special case.
\subsection{Mercer's Theorem for Zonal Kernels on the Sphere}
\label{sec:rkhs}
Stationary kernels, i.e. translation invariant covariances, are ubiquitous in machine learning when the input space is Euclidean. When working on the hypersphere, their spherical counterparts are zonal kernels, which are invariant to rotations. More precisely, a kernel $k: \ensuremath{\sphere^{d-1}} \times \ensuremath{\sphere^{d-1}} \mapsto \Reals$ is called zonal if there exists a shape function $s$ such that $k(\vx, \vx') = s(\vx\transpose\vx')$. From here on, we focus on zonal kernels. Since stationary kernels are functions of $\vx - \vx'$ they have the property that $\Delta_\vx k(\vx, \vx') = \Delta_{\vx'} k(\vx, \vx')$, where $\Delta_\vx = \sum_{i=1}^d \partial^2 / \partial^2 x_i$ is the Laplacian operator. This property is also satisfied by zonal kernels: $\Delta^{\ensuremath{\sphere^{d-1}}}_\vx k(\vx, \vx') = \Delta^{\ensuremath{\sphere^{d-1}}}_{\vx'} k(\vx, \vx')$, where we denote by $\Delta_\vx^{\ensuremath{\sphere^{d-1}}}$ the Laplace-Beltrami operator with respect to the variable $\vx$. Combined with an integration by parts, this property can be used to show that the kernel operator $\mathcal{K}$ of a zonal covariance and the Laplace-Beltrami operator commute:
\begin{align*}
\mathcal{K} \left[\Delta^{\ensuremath{\sphere^{d-1}}} g\right] &= \int_{\ensuremath{\sphere^{d-1}}} k(\vx, \cdot) \left[\Delta^{\ensuremath{\sphere^{d-1}}}_\vx g(\vx)\right] \calcd{\vx}\\
&= \int_{\ensuremath{\sphere^{d-1}}} g(\vx) \Delta^{\ensuremath{\sphere^{d-1}}}_\vx k(\vx, \cdot) \calcd{\vx}\\
&= \int_{\ensuremath{\sphere^{d-1}}} g(\vx) \Delta^{\ensuremath{\sphere^{d-1}}} k(\vx, \cdot) \calcd{\vx} = \Delta^{\ensuremath{\sphere^{d-1}}} \mathcal{K} g
\end{align*}
which in turn implies that these two operators share the same eigenfunctions. This result is of particular relevance to us since there is a huge body of literature on the diagonalisation of the Laplace-Beltrami operator on $\ensuremath{\sphere^{d-1}}$, and it is well known that its eigenfunctions are given by the spherical harmonics. This reasoning can be summarised by the following theorem:
\begin{theorem}[Mercer representation]
Any zonal kernel $k$ on the hypersphere can be decomposed as
\begin{equation}
\label{eq:kernel-form}
k(\vx, \vx') = \sum_{\ell=0}^{\infty} \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \widehat{a}_{\ell, k} \ensuremath{\phi}_{\ell, k}(\vx) \ensuremath{\phi}_{\ell, k}(\vx'),
\end{equation}
where $\vx,\vx' \in \ensuremath{\sphere^{d-1}}$ and $\widehat{a}_{\ell, k}$ are positive coefficients, $\ensuremath{\phi}_{\ell,k}$ denote the elements of the spherical harmonic basis in $\ensuremath{\sphere^{d-1}}$, and $N_\ell^d$ corresponds to the number of spherical harmonics for a given level $\ell$.
\end{theorem}
Although it is typically stated without a proof, this theorem is already known in some communities (see \citet{wendland2005} for a functional analysis exposition, or \citet{peacock1999cosmological} for its use in cosmology).
Given the Mercer representation of a zonal kernel, its RKHS can be characterised by
\begin{equation*}
\ensuremath{{\mathcal{H}}} = \left\{
g =
\sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \widehat{g}_{\ell,k} \ensuremath{\phi}_{\ell, k} :
\sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \frac{|\widehat{g}_{\ell,k}|^2}{\widehat{a}_{\ell, k}} < \infty
\right\}
\end{equation*}
with the inner product between two functions $g(\vx) = \sum_{\ell, k} \widehat{g}_{\ell, k} \ensuremath{\phi}_{\ell, k}(\vx)$ and $h(\vx) = \sum_{\ell, k} \widehat{h}_{\ell, k} \ensuremath{\phi}_{\ell, k}(\vx)$ defined as
\begin{equation}
\langle g, h \rangle_{\ensuremath{{\mathcal{H}}}} =
\sum_{\ell=0}^\infty
\sum_{k=1}^{\ensuremath{N^d_\ell}\xspace}
\frac{\widehat{g}_{\ell, k} \widehat{h}_{\ell, k}}%
{\widehat{a}_{\ell, k}}.
\end{equation}
It is straightforward to show that this inner product satisfies the reproducing property (see Appendix \cref{appendix:proof:reproducing}).
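Indeed, by \cref{eq:kernel-form} the coefficients of $k(\vx, \cdot)$ in the spherical harmonic basis are $\widehat{a}_{\ell, k}\,\ensuremath{\phi}_{\ell, k}(\vx)$, so that for any $h \in \ensuremath{{\mathcal{H}}}$
\begin{equation*}
\langle k(\vx, \cdot), h \rangle_{\ensuremath{{\mathcal{H}}}} =
\sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace}
\frac{\widehat{a}_{\ell, k}\, \ensuremath{\phi}_{\ell, k}(\vx)\, \widehat{h}_{\ell, k}}{\widehat{a}_{\ell, k}}
= \sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \widehat{h}_{\ell, k}\, \ensuremath{\phi}_{\ell, k}(\vx) = h(\vx).
\end{equation*}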
In order to make these results practical, we need to compute the coefficients $\widehat{a}_{\ell, k}$ for some kernels of interest. For a given value of $\vx' \in \ensuremath{\sphere^{d-1}}$ we can see $\vx \mapsto k(\vx, \vx')$ as a function from $\ensuremath{\sphere^{d-1}}$ to $\Reals$, and represent it in the basis of spherical harmonics:
\begin{equation}
\label{eq:kernel-funk}
k(\vx, \vx') = \sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \langle k(\vx', \cdot), \ensuremath{\phi}_{\ell, k} \rangle_{L^2(\ensuremath{\sphere^{d-1}})} \ensuremath{\phi}_{\ell, k}(\vx).
\end{equation}
Combining equations (\ref{eq:kernel-form}) and (\ref{eq:kernel-funk}) gives the following expression for the coefficients we are interested in: $\widehat{a}_{\ell, k} = \langle k(\vx', \cdot), \ensuremath{\phi}_{\ell, k} \rangle_{L^2(\ensuremath{\sphere^{d-1}})}/ \ensuremath{\phi}_{\ell, k}(\vx')$.
Although this expression involves a $(d-1)$ dimensional integral on the hypersphere, our hypothesis that $k$ is zonal means we can make use of the Funk-Hecke formula (see \cref{appendix:theorem:funk} in the supplementary) and rewrite it as a simpler 1D integral over the shape function of $k$. Following this procedure finally leads to
\begin{equation}
\label{eq:eigenvalues}
\widehat{a}_{\ell, k}
= \frac{\omega_d}{C_{\ell}^{(\alpha)}(1)} \int_{-1}^1\!\!s(t)\,C_\ell^{(\alpha)}(t) (1 - t^2)^{\frac{d-3}{2}} \calcd{t},
\end{equation}
where $\alpha = \frac{d-2}{2}$, $C_\ell^{(\alpha)}$ is the Gegenbauer polynomial of degree $\ell$ and $s(\cdot)$ is the kernel's shape function. The constants $\omega_d$ and $C_{\ell}^{(\alpha)}(1)$ are given in \cref{appendix:theorem:funk} in the supplementary. %
Using \cref{eq:eigenvalues} we are able to compute the Fourier coefficients of any zonal kernel. The details of the calculations for the arc-cosine kernel restricted to the sphere is given in \cref{sec:appendix:arc-cosine} in the supplementary material.
Alternatively, a key result from \citet[eq. 20]{solin2014hilbert} shows that the coefficients $\widehat{a}_{\ell, k}$ have a simple expression that depends on the kernel spectral density $S$ and the eigenvalues of the Laplace-Beltrami operator. For GPs on $\ensuremath{\sphere^{d-1}}$, the coefficients boil down to $\widehat{a}_{\ell, k } = S(\sqrt{\ell (\ell + d - 2)})$. This is the expression we used in our experiments to define Mat\'ern and SE covariances on the hypersphere. More details on this method are given in \cref{appendix:sec:materns} in the supplementary.
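As a rough sketch of how this expression is used in practice, the snippet below evaluates the coefficients from a user-supplied spectral density; the squared exponential density shown is one common convention, up to a normalising constant.
\begin{verbatim}
import numpy as np

def harmonic_coefficients(S, max_level, d):
    # a_hat[l] = S(sqrt(l * (l + d - 2))), shared by all harmonics of level l.
    levels = np.arange(max_level + 1)
    return S(np.sqrt(levels * (levels + d - 2)))

def se_spectral_density(omega, lengthscale=1.0, variance=1.0):
    # Unnormalised squared exponential spectral density (assumed convention).
    return variance * np.exp(-0.5 * (lengthscale * omega) ** 2)

a_hat = harmonic_coefficients(se_spectral_density, max_level=10, d=9)
\end{verbatim}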
As one may have noticed, the values of $\widehat{a}_{\ell,k}$ do not depend on the second index $k$ (i.e. the eigenvalues only depend on the degree of the spherical harmonic, but not on its orientation). This is a remarkable property of zonal kernels which allows us to use the \emph{addition theorem} (see supplementary material \cref{appendix:theorem:addition}) for spherical harmonics to simplify \cref{eq:kernel-form}:
\begin{equation*}
k(\vx, \vx') = \sum_{\ell=0}^{\infty} \widehat{a}_{\ell}\, \frac{\ell + \alpha}{\alpha}\, C_\ell^{(\alpha)}\left(\vx\transpose\vx'\right).
\end{equation*}
This representation is cheaper to evaluate than \cref{eq:kernel-form} but it still requires truncation for practical use.
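A truncated version of this sum is straightforward to evaluate numerically; the sketch below (purely illustrative, reusing the \texttt{a\_hat} coefficients from above and SciPy's Gegenbauer polynomials) follows the expression directly.
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer

def zonal_kernel(t, a_hat, d):
    # t = x^T x' for unit-norm inputs; the sum is truncated at len(a_hat) levels.
    alpha = (d - 2) / 2.0
    value = 0.0
    for ell, a in enumerate(a_hat):
        value += a * (ell + alpha) / alpha * eval_gegenbauer(ell, alpha, t)
    return value
\end{verbatim}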
\subsection{Spherical Harmonics as Inducing Features}
\label{sec:definition-sh-inducing-variables}
We can now build on the results from the previous section to propose powerful and efficient sparse GP models. We want features $\vec{k}_\vu(\vx)$ that exhibit non-local influence for expressiveness, and inducing variables that induce sparse structure in $\MK_{\vu \vu}$ for efficiency. We achieve this by defining the inducing variables $u_m$ to be the inner product between the GP\footnote{Although $f$ does not belong to $\ensuremath{{\mathcal{H}}}$ \citep{kanagawa2018gaussian}, such an expression is well defined since the regularity of $\ensuremath{\phi}_{m}$ can be used to extend the domain of definition of the first argument of the inner product to a larger class of functions. See \citet{hensman2017variational} for a detailed discussion.} and spherical harmonics:\footnote{Note that in the context of inducing variables, we switch to a single integer $m$ to index the spherical harmonics and order them first by increasing level $\ell$, and then by increasing $k$ within a level.}
\begin{equation}
u_m = \langle f, \ensuremath{\phi}_m\rangle_{\ensuremath{{\mathcal{H}}}}.
\end{equation}
To leverage these new inducing variables we need to compute two quantities: 1) the covariance between $u_m$ and $f$ for $\vec{k}_\vu (\vx)$, and 2) the covariance between the inducing variables themselves for the $\MK_{\vu \vu}$ matrix. See \citet{GPflow2020multioutput} for an in-depth discussion of these concepts.
The covariance of the inducing variables with $f$ are
\begin{equation*}
\left[\vec{k}_\vu (\vx)\right]_m = \ExpSymb[f(\vx)\,u_m] %
= \langle k(\vx, \cdot), \ensuremath{\phi}_m \rangle_{\ensuremath{{\mathcal{H}}}}
= \ensuremath{\phi}_{m}(\vx),
\end{equation*}
where we relied on the linearity of both expectation and inner product and where we used the reproducing property of $\ensuremath{{\mathcal{H}}}$.
The covariance between the inducing variables is given by
\begin{equation*}
\left[\MK_{\vu \vu} \right]_{m, m'} = \ExpSymb \left[u_m\,u_{m'} \right]
= \langle \ensuremath{\phi}_{m}, \ensuremath{\phi}_{m'} \rangle_{\ensuremath{{\mathcal{H}}}}
= \frac{\delta_{mm'}}{\widehat{a}_m}\, ,
\end{equation*}
where $\delta_{mm'}$ is the Kronecker delta. Crucially, this means that $\MK_{\vu \vu}$ is a diagonal matrix with elements $1/(\widehat{a}_m)$.
Substituting $\MK_{\vu \vu}$ and $\vec{k}_\vu (\vx)$ into the sparse variational approximation (\cref{eq:qf}), leads to the following form for $q(f)$
\begin{equation*}
\GP\left(\tilde{\bm{\Phi}}^\top(\vx) \vm;\ k(\vx, \vx') + \tilde{\bm{\Phi}}^\top(\vx) (\mat{S} - \MK_{\vu \vu}) \tilde{\bm{\Phi}}(\vx') \right),
\end{equation*}
with $\tilde{\bm{\Phi}}(\vx) = [\widehat{a}_m \phi_m(\vx)]_{m=1}^M$.
This sparse approximation has two main differences compared to a SVGP model with standard inducing points. First, the spherical harmonic inducing variables lead to features $k_\vu(\vx)$ with non-local structure. Second, the approximation $q(f)$ does not require any inverses anymore. The computational bottleneck of this model is now simply the matrix multiplication in the variance calculation, which has a $\BigO(N_{\text{batchsize}} M^2)$ cost. Compared to the $\BigO(M^3 + N_{\text{batchsize}} M^2)$ cost of inducing point SVGPs, this gives a significant speedup -- as we show in the experiments.
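To make this speedup concrete, here is a hedged NumPy sketch of the predictive computation; \texttt{Phi} stands for the matrix of spherical harmonics $[\ensuremath{\phi}_m(\vx_n)]$ evaluated at the test inputs and \texttt{a\_hat} for the Mercer coefficients, both assumed given.
\begin{verbatim}
import numpy as np

def vish_predict(Phi, a_hat, kxx_diag, m, S):
    # Phi: M x Ns harmonics at test inputs, a_hat: M Mercer coefficients.
    # Kuu is diagonal with entries 1 / a_hat, so no matrix inverse is needed.
    Phi_tilde = a_hat[:, None] * Phi          # Kuu^{-1} k_u(x)
    mean = Phi_tilde.T @ m
    Kuu = np.diag(1.0 / a_hat)
    var = kxx_diag + np.einsum("mn,mk,kn->n", Phi_tilde, S - Kuu, Phi_tilde)
    return mean, var
\end{verbatim}
The dominant cost is the matrix product in the variance term, in line with the $\BigO(N_{\text{batchsize}} M^2)$ complexity stated above.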
As is usual in sparse GP methods, the number of inducing variables $M$ is constrained by the computational budget available to the user. Given that we ordered the spherical harmonics by increasing $\ell$, choosing the first $M$ elements means we first select features with low angular frequency. Provided that the kernel spectral density is a decreasing function (this will be true for classic covariances, but not for quasi-periodic ones), this means that the selected features correspond to the ones carrying the most signal according to the prior. In other words, the decomposition of the kernel can be compared to an infinite dimensional principal component analysis, and our choice of the inducing functions is optimal since we pick the ones with the largest variance. This is illustrated in \cref{fig:variance_decay_SH}, which shows the analogue of \cref{fig:variance_decay_VFF} for spherical harmonic inducing functions.
\begin{figure}[t!]
\centering
\vspace{.3cm}
\includegraphics[trim=0 1.5cm 0 1cm, height=3.5cm]{figures/fig_VSH_decay.pdf}
\caption{Illustration of the variance of the \sc{VSH} inducing variables for a 2D input space. Settings are the same as in \cref{fig:variance_decay_VFF} but a spherical harmonic feature $\ensuremath{\phi}_{\ell, k}$ is associated to each pair $(\ell, k)$. For a given number $M$ of inducing variables, the truncation pattern (see red dashed triangle for $M=49$) is optimal since it selects the most influential features.}%
\label{fig:variance_decay_SH}
\end{figure}
\section{Experiments}
\label{sec:experiments}
We evaluate our method Variational Inference with Spherical Harmonics (\sc{VSH}) on regression and classification problems and show the following properties of our method: 1) \sc{VSH} performs competitively in terms of accuracy and uncertainty quantification on a range of problems from the UCI dataset repository. 2) \sc{VSH} is extremely fast and accurate on large-scale conjugate problems (approximately 6 million 8D entries in less than 2 minutes). 3) Compared to VFF, \sc{VSH} can be applied to multi-dimensional datasets and preserves its computational efficiency. 4) On problems with a non-conjugate likelihood our method does not suffer from some of the issues encountered by VFF.
We begin with a toy experiment in which we show that the approximation becomes more accurate as we increase the number of basis functions.
\begin{figure*}[t]
\centering
\includegraphics[width=.9\textwidth]{figures/banana_VISH_M2.pdf}
\vspace{-.5cm}
\caption{Classification of the 2D banana dataset with growing number of spherical harmonic basis functions. The right plot shows the convergence of the ELBO with respect to increasing numbers of basis functions.}
\label{fig:banana}
\end{figure*}
\subsection{Toy Experiment: Banana Classification}
The banana dataset is a 2D binary classification problem \citep{hensman2015scalable}. In \cref{fig:banana} we show three different fits of \sc{VSH} with $M\in \{9,\ 225,\ 784\}$ spherical harmonics, which correspond respectively to maximum levels of $2$, $14$, and $27$ for our inducing functions. Since the variational framework provides a guarantee that more inducing variables must be monotonically better \citep{titsias2009}, we expect that increasing the number of inducing functions will provide improved approximations. This is indeed the case as we show in the rightmost panel: with increasing $M$ the ELBO converges and the fit becomes tighter.
While this is expected behaviour for SVGP methods, it is not guaranteed by VFF. Given the Kronecker structure used by VFF for this 2D experiment, \citet{hensman2017variational} report that using a full-rank covariance matrix for the variational distribution was intolerably slow. They also show that enforcing the Kronecker structure on the posterior results in an ELBO that {\em decreased} as frequencies were added, and they finally propose a sum-of-two-Kroneckers structure, but provide no guarantee that this would converge to the exact posterior in the limit of larger $M$. In \sc{VSH} we do not need to impose any structure on the approximate covariance matrix $\mat{S}$, so we retain the guarantee that adding more basis functions will move us closer to the posterior process. The method remains fast despite optimising over full covariance matrices: fitting the models displayed in \cref{fig:banana} only takes a few seconds on a standard desktop.
\subsection{Regression on UCI Benchmarks}
\begin{table*}[tb]
\centering
\resizebox{.95\textwidth}{!}{\input{tables/uci_cr} }
\caption{Predictive mean squared errors (MSEs) and negative log predictive densities (NLPDs) with one standard deviation based on 5 splits on 5 UCI regression datasets. Lower is better. All models assume a Gaussian noise model, use a Mat\'ern-3/2 kernel and use the L-BFGS optimiser for the hyper- and variational parameters. \sc{VSH} and SVGP are configured with the same number of inducing points $M$. A-VFF and A-GPR assume an Additive structure over the inputs (see text). For A-VFF and \sc{VSH} the optimal posterior distribution for the inducing variables is set following \citet{titsias2009}.}
\label{tab:uci}
\end{table*}
We use five UCI regression datasets to compare the performance of our method against other GP approaches. We measure accuracy
of the predictive mean with Mean Squared Error (MSE) and uncertainty quantification with mean Negative Log Predictive Density (NLPD). For each dataset we randomly select 90\% of the data for training and 10\% for testing and repeat this 5 times to get error bars. We normalise the inputs and the targets to be centered unit Gaussian. We report the MSE and NLPD of the normalised data. In \Cref{tab:uci} we report the performance of \sc{VSH}, Additive-VFF (A-VFF) \citep{hensman2017variational}, SVGP \citep{hensman2013}, Additive-GPR (A-GPR) and GPR.
We start by comparing \sc{VSH} against SVGP, and notice that for Energy, Concrete and Power the change in inductive bias and expressiveness of spherical harmonic inducing variables as opposed to standard inducing points improves performance. Also, while the sparse methods (A-VFF, \sc{VSH} and SVGP) are necessary for scalability (as highlighted by the next experiment), they remain inferior to the exact GPR model -- which should be seen as the optimal baseline.
For VFF we have to resort to an additive model (A-VFF) in order to deal with the dimensionality of the data, as a vanilla VFF model can only be used in one or two dimensions. Following \citep[eq. 78]{hensman2017variational}, we assume a different function for each input dimension
\begin{equation}
\label{eq:vff-additive}
f(x) = \sum_{d=1}^D f_d(x_d),\ \text{with}\ f_d \sim \GP\left(0, k_d\right),
\end{equation}
and approximate this process by a mean-field approximate posterior over the processes $q(f_1, \ldots, f_D) = \prod_{d} q(f_d)$, where each process is an SVGP $q(f_d) = \int p(f_d \given \vu_d) q(\vu_d) \calcd{\vu_d}$. We used $M=30$ frequencies per input dimension. As an extra baseline we added an exact GPR model which makes the same additive assumption (A-GPR). As expected, not having to impose this structure improves performance; we see that \sc{VSH} beats A-VFF on every dataset.
\paragraph{Limitations}
Our current implementation of \sc{VSH} only supports datasets up to 8 dimensions (9 dimensions when the bias is concatenated). This is not caused by a theoretical limitation because our approach leads to a diagonal $\MK_{\vu \vu}$ matrix in any dimension. The problem stems from the fact that there are no libraries available providing stable spherical harmonic implementations in high dimensions (needed for $k_{\vu}(\cdot)$). Our implementation based on \citet[Theorem~5.1]{dai2013} is stable up to 9 dimensions and future work will focus on scaling this up. Furthermore, \sc{VSH} does not solve the curse of dimensionality for GP models but does drastically improve over the scaling of VFF.
\subsection{Large-Scale Regression on Airline Delay}
\begin{table*}[tb]
\centering
\resizebox{\textwidth}{!}{\input{tables/airline_cr.tex} }
\caption{Predictive mean squared errors (MSEs), negative log predictive densities (NLPDs) and wall-clock time in seconds with one standard deviation based on 10 random splits on the airline arrival delays experiment. Total dataset size is given by $N$ and in each split we randomly select 2/3 and 1/3 for training and testing.}
\label{tab:airline}
\end{table*}
This experiment illustrates three core capabilities of \sc{VSH}: 1) it can deal with large datasets, 2) it is computationally and time efficient, and 3) it improves performance in terms of NLPD.
We use the 2008 U.S. airline delay dataset to assess these capabilities. The goal of this problem is to predict the amount of delay $y$ given eight characteristics $\vx$ of a flight, such as
the age of the aircraft (number of years since deployment), route distance, airtime, etc. We follow the exact same experiment setup as \citet{hensman2017variational}\footnote{\url{https://github.com/jameshensman/VFF}} and evaluate the performance on 4 datasets of size 10,000, 100,000, 1,000,000, and 5,929,413 (complete dataset), created by subsampling the original one. For each dataset we use two thirds of the data for training and one third for testing. Every split is repeated 10 times and we report the mean and one standard deviation of the MSE and NLPD. For every run the outputs are normalized to be a centered unit Gaussian. The inputs are normalized to [0, 1] for VFF and SVGP. For \sc{VSH} we normalize the inputs so that each column falls within $[-v_d, v_d]$. The hyperparameter $v_d$ corresponds to the prior variance of the weights of an infinite-width fully-connected neural net layer (see \citet{cho2009kernel}). We can optimise for this weight-variance by back-propagation through $k_u(x)$ w.r.t. the ELBO. This is similar to the lengthscale hyperparameters of stationary kernels.
\Cref{tab:airline} shows the outcome of the experiment. The results for VFF and SVGP are from \citet{hensman2017variational}. We observe that \sc{VSH} improves on the other methods in terms of NLPD and is within error bars in terms of MSE. Given the variability in the data the GP models improve when more data is available during training.
Given the dimensionality of the dataset, a full-VFF model is completely infeasible. As an example, using just four frequencies per dimension would already lead to $M = 4^8 = 65,536$ inducing variables. So VFF has to resort to an additive model with a prior covariance structure given as a sum of Mat\'ern-3/2 kernels for each input dimension, as in \cref{eq:vff-additive}. Each of the functions $f_d$ is approximated using 30 frequencies.
We report two variants of \sc{VSH}: one using all spherical harmonics up to degree 3 ($M$=210) and another up to degree 4 ($M$=660). As expected, the more inducing variables, the better the fit.
We also report the wall-clock time for the experiments (training and evaluation) for $N=10,000$ and $N=5,929,413$. All these experiments were run on a single consumer-grade GPU (Nvidia GTX 1070). On the complete dataset of almost 6 million records, \sc{VSH} took $41\pm0.81$ seconds on average. A-VFF required $75.61\pm0.75$ seconds and the SVGP method needed approximately 15 minutes to fit and predict. This shows that \sc{VSH} is more than an order of magnitude faster than SVGP. A-VFF comes close to \sc{VSH} but has to impose additive structure to keep its computational advantage.
\subsection{SUSY Classification}
In the last experiment we tackle a large-scale classification problem. We are tasked with distinguishing between a signal process which produces super-symmetric (SUSY) particles and a background process which does not. The inputs consist of eight kinematic properties measured by the particle detectors in the accelerator. The dataset contains 5 million records of which we use the last 10\% for testing. We are interested in obtaining a calibrated classifier and measure the AuC of the ROC curve.
For SVGP and \sc{VSH} we first used a subset of 20,000 points to train the variational parameters and hyper-parameters of the model with L-BFGS. We then applied Adam to the whole dataset. A similar approach was used to fine-tune the NN baselines by \citet{baldi2014searching}.
\Cref{tab:susy} lists the performance of \sc{VSH} and compares it to a boosted decision tree (BDT), 5-layer neural network (NN), and a SVGP. We observe the competitive performance of \sc{VSH}, which is a single-layer GP method, compared to a 5-layer neural net with 300 hidden units per layer and extensive hyper-parameter optimisation \citep{baldi2014searching}. We also note the improvement over a SVGP with SE kernel.
\begin{table}[tb]
\centering
\scalebox{.9}{\input{tables/susy.tex}}
\caption{Performance comparison for the SUSY benchmark.
The mean AuC is reported with one standard deviation, computed by training five models with different initialisations. Larger is better. Results for BDT and NN are from \citet{baldi2014searching}.}
\vspace{-.3cm}
\label{tab:susy}
\end{table}
\section{Conclusion}
We introduced a framework for performing variational inference in Gaussian processes using spherical harmonics. Our general setup is closely related to VFF, and we inherit several of its advantages such as a considerable speed-up compared to classic sparse GP models, and having features with a global influence on the approximation. By projecting the data onto the hypersphere and using dedicated GP models on this manifold our approach succeeds where other sparse GP methods fail. First, \sc{VSH} provides good scaling properties as the dimension of the input space increases. Second, we showed that under some relatively weak hypotheses we are able to select the optimal features to include in the approximation. This is due to the intricate link between ``stationary'' covariances on the sphere and the Laplace-Beltrami operator. Third, the Mercer representation of the kernel means that the matrices to be inverted at training time are exactly diagonal -- resulting in a very cheap-to-compute sparse approximate GP.
We showed on a wide range of regression and classification problems that our method performs at or close to the state of the art while being extremely fast. The good predictive performance may be inherited from the connection between infinitely wide neural networks and the way we map the predictions on the sphere back to the original space. Future work will explore this hypothesis.
\subsection*{Acknowledgements}
Thanks to Fergus Simpson for pointing us in the direction of spherical harmonics in the early days of this work. Also many thanks to Arno Solin for sharing his implementation of the Mat\'ern kernels' power spectrum.
\section{Spherical Harmonics on \ensuremath{\sphere^{d-1}}}
This section gives a brief overview of some of the useful properties of spherical harmonics. We refer the interested reader to \citet{dai2013, frye2014} for an in-depth overview.
Spherical harmonics are special functions defined on a hypersphere and originate from solving Laplace's equation in the spherical domain. They form a complete set of orthogonal functions, and any sufficiently regular function defined on the sphere can be written as a sum of these spherical harmonics, similar to the Fourier series with sines and cosines. Spherical harmonics have a natural ordering by increasing angular frequency. In the next paragraphs we introduce these concepts more formally.
We adopt the usual $L_2$ inner product for functions $f: \ensuremath{\sphere^{d-1}} \rightarrow \Reals$ and $g: \ensuremath{\sphere^{d-1}} \rightarrow \Reals$ restricted to the sphere
\begin{equation}
\langle f, g\rangle_{L_{2}(\ensuremath{\sphere^{d-1}})} = \frac{1}{\ensuremath{\area_{d-1}}} \int_{\ensuremath{\sphere^{d-1}}} f(x)\,g(x) \, \calcd{\omega(x)},
\end{equation}
where $\calcd{\omega(x)}$ is the surface area measure and $\ensuremath{\area_{d-1}}$ denotes the surface area of $\ensuremath{\sphere^{d-1}}$
\begin{equation}
\ensuremath{\area_{d-1}} = \int_{\ensuremath{\sphere^{d-1}}} \calcd{\omega(x)} = \frac{2 \pi ^ {d/2}}{\Gamma(d/2)}.
\end{equation}
\begin{definition}
Spherical harmonics of degree (or level) $\ell$, denoted $\ensuremath{\phi}_\ell$, are defined as the restrictions to the unit hypersphere $\ensuremath{\sphere^{d-1}}$ of the harmonic homogeneous polynomials in $d$ variables of degree $\ell$. That is, $\ensuremath{\phi}_\ell : \ensuremath{\sphere^{d-1}} \rightarrow \Reals$ is the restriction of a homogeneous polynomial of degree $\ell$ satisfying $\Delta \ensuremath{\phi}_\ell = 0$.
\end{definition}
For a specific dimension $d$ and degree $\ell$ there exist
$$
\ensuremath{N^d_\ell}\xspace := (2 \ell + d - 2) \frac{\Gamma(\ell + d -2)}{\Gamma(\ell + 1)\Gamma(d - 1)}
$$
different linearly independent spherical harmonics on $\ensuremath{\sphere^{d-1}}$. We refer to them as the set $\{ \ensuremath{\phi}_{\ell, k}^d \}_{k=1}^{\ensuremath{N^d_\ell}\xspace}$, but in what follows we drop the dependence on dimension $d$. The set is ortho-normal:
\begin{equation}
\left\langle \ensuremath{\phi}_{\ell, k}, \ensuremath{\phi}_{\ell', k'}\right\rangle_{L_2(\ensuremath{\sphere^{d-1}})}
=\delta_{\ell \ell'} \delta_{k k'}.
\end{equation}
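For concreteness, the dimension formula can be checked with a few lines of Python. The snippet below is an illustrative sketch (the helper name \texttt{num\_harmonics} is ours and it is not part of our implementation); it only assumes SciPy's gamma function. For $d=3$ it recovers the familiar $2\ell+1$ harmonics per level, and for $d=9$ the cumulative count up to degree 3 is $1 + 9 + 44 + 156 = 210$, matching the $M=210$ configuration used in the airline experiment.
\begin{verbatim}
# Illustrative sketch (not part of our implementation): evaluate N_ell^d.
from scipy.special import gamma

def num_harmonics(d, ell):
    """Number of linearly independent spherical harmonics of degree ell on S^{d-1}."""
    return int(round((2 * ell + d - 2) * gamma(ell + d - 2)
                     / (gamma(ell + 1) * gamma(d - 1))))

assert [num_harmonics(3, ell) for ell in range(5)] == [1, 3, 5, 7, 9]
assert sum(num_harmonics(9, ell) for ell in range(4)) == 210
\end{verbatim}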
\begin{theorem}
Since the spherical harmonics form an ortho-normal basis, every function $f: \ensuremath{\sphere^{d-1}} \rightarrow \Reals$ can be decomposed as
\begin{equation}
f = \sum_{\ell=0}^{\infty} \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \widehat{f}_{\ell, k} \ensuremath{\phi}_{\ell, k},\ \text{with}\ \widehat{f}_{\ell, k} = \langle f, \ensuremath{\phi}_{\ell, k} \rangle_{L_2(\ensuremath{\sphere^{d-1}})}.
\end{equation}
\end{theorem}
This can be seen as the spherical analogue of the Fourier decomposition of a periodic function on $\Reals$ onto a basis of sines and cosines.
\begin{theorem}
\label{appendix:theorem:eigenvalues}
The spherical harmonics are the eigenfunctions of the Laplace-Beltrami operator with eigenvalues $\lambda_{\ell} = \ell (\ell + d - 2)$ so that
\begin{equation}
\Delta^{\ensuremath{\sphere^{d-1}}} \phi_{\ell, k} = \ell (\ell + d - 2) \phi_{\ell, k}.
\end{equation}
\end{theorem}
In the experiments we used \citet[Theorem~5.1]{dai2013} for an explicit expression of $\phi$. We note that while this expression gives us a general form of the spherical harmonics in any dimension, we found that it becomes numerically unstable for $d \ge 10$.
\subsection{Gegenbauer polynomials}
Gegenbauer polynomials $C_\ell^{(\alpha)}: [-1, 1] \rightarrow \Reals$ are orthogonal polynomials with respect to the weight function $(1 - z^2)^{\alpha - 1/2}$.
A variety of characterizations of the Gegenbauer polynomials are available. We use the polynomial characterisation for its numerical stability. It is given by
\begin{equation}
\label{appendix:eq:gegenbauer}
C_{\ell}^{({\alpha})}(z)=\sum _{{k=0}}^{{\lfloor \ell/2\rfloor }}{\frac {(-1)^{k}\, \Gamma (\ell-k+\alpha )}{\Gamma (\alpha )\Gamma({k+1})\Gamma{(\ell-2k + 1)}}}(2z)^{{\ell-2k}}.
\end{equation}
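As a sanity check (ours, not taken from any reference implementation), the explicit sum in \cref{appendix:eq:gegenbauer} can be evaluated directly and compared against SciPy's Gegenbauer routine:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, eval_gegenbauer

def gegenbauer_explicit(ell, alpha, z):
    """Evaluate C_ell^{(alpha)}(z) from the explicit polynomial sum above."""
    z = np.asarray(z, dtype=float)
    total = np.zeros_like(z)
    for k in range(ell // 2 + 1):
        coef = ((-1) ** k * gamma(ell - k + alpha)
                / (gamma(alpha) * gamma(k + 1) * gamma(ell - 2 * k + 1)))
        total += coef * (2 * z) ** (ell - 2 * k)
    return total

z = np.linspace(-1.0, 1.0, 7)
for ell in range(6):
    assert np.allclose(gegenbauer_explicit(ell, 2.5, z), eval_gegenbauer(ell, 2.5, z))
\end{verbatim}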
The polynomials normalise by
\begin{equation}
\int_{-1}^{1} \left[C_{\ell}^{({\alpha})}(z)\right]^2 (1 - z^2)^{\alpha - \frac{1}{2}} \calcd{z} = \frac{\Omega_{d-1}}{\Omega_{d-2}} \frac{\alpha}{\ell + \alpha} C_{\ell}^{({\alpha})}(1),
\end{equation}
with $C_{\ell}^{({\alpha})}(1) = \frac{\Gamma(2\alpha + \ell)}{\Gamma(\alpha)\,\ell!}$.
There exists a close relationship between Gegenbauer polynomials (also known as \emph{generalized Legendre polynomials}) and spherical harmonics, as we will show in the next theorems.
\begin{theorem}[Addition]
\label{appendix:theorem:addition}
Between the spherical harmonics of degree $\ell$ in dimension $d$ and the Gegenbauer polynomials of degree $\ell$ there exists the relation
\begin{equation}
\sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \ensuremath{\phi}_{\ell, k}(\vx) \ensuremath{\phi}_{\ell, k}(\vx') = \frac{\ell + \alpha}{\alpha}\,
C_\ell^{(\alpha)}(\vx\transpose\vx'),
\end{equation}
with $\alpha = \frac{d-2}{2}$.
\end{theorem}
As an illustrative example, this property is analogous to the trigonometric addition formula: $\sin(x)\sin(x') + \cos(x)\cos(x') = \cos(x - x')$.
\begin{theorem}[Funk-Hecke]
\label{appendix:theorem:funk}
Let $s(\cdot)$ be an integrable function such that $\int_{-1}^1 | s(t)| (1 - t^2)^{(d-3)/2} \calcd{t}$ is finite and $d \ge 2$. Then for every $\ensuremath{\phi}_{\ell,k}$
\begin{equation}
\frac{1}{\ensuremath{\area_{d-1}}} \int_{\ensuremath{\sphere^{d-1}}} s(\vx\transpose \vx')\,\ensuremath{\phi}_{\ell, k}(\vx')\, \calcd{\omega(\vx')} = \widehat{a}_{\ell}\,\ensuremath{\phi}_{\ell,k}(\vx),
\end{equation}
where $\widehat{a}_{\ell}$ is a constant defined by
\begin{equation}
\widehat{a}_{\ell} =
\frac{\omega_{d}}{C_\ell^{(\alpha)}(1)} \int_{-1}^1 s(t)\,C_\ell^{(\alpha)}(t)\,(1 - t^2)^{\frac{d-3}{2}} \calcd{t},
\end{equation}
with $\alpha = \frac{d-2}{2}$ and $\omega_d = \frac{\Omega_{d-2}}{\Omega_{d-1}}$.
\end{theorem}
Funk-Hecke simplifies a $(d-1)$-variate surface integral on $\ensuremath{\sphere^{d-1}}$ to a one-dimensional integral over $[-1, 1]$. This theorem gives us a practical way of computing the Fourier coefficients for any zonal kernel. In \cref{sec:appendix:arc-cosine}, we use it to compute the coefficients of the arc-cosine kernel. Notice how the Fourier coefficients $\widehat{a}_{\ell}$ only depend on the level $\ell$ (or degree) of the spherical harmonic and not the orientation (denoted by the $k$ index).
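The theorem translates directly into a numerical recipe. The snippet below is an illustrative sketch (ours): it evaluates $\widehat{a}_{\ell}$ for an arbitrary zonal shape function $s(t)$, $t = \vx^\top\vx'$, by one-dimensional quadrature, using exactly the constants defined above.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, eval_gegenbauer
from scipy.integrate import quad

def sphere_area(d):
    """Surface area Omega_{d-1} of the unit sphere S^{d-1}."""
    return 2.0 * np.pi ** (d / 2.0) / gamma(d / 2.0)

def funk_hecke_coefficient(s, ell, d):
    """Fourier coefficient a_hat_ell of the zonal function s(x^T x') on S^{d-1}."""
    alpha = (d - 2.0) / 2.0
    omega_d = sphere_area(d - 1) / sphere_area(d)      # Omega_{d-2} / Omega_{d-1}
    c = omega_d / eval_gegenbauer(ell, alpha, 1.0)
    integrand = lambda t: (s(t) * eval_gegenbauer(ell, alpha, t)
                           * (1.0 - t ** 2) ** ((d - 3.0) / 2.0))
    return c * quad(integrand, -1.0, 1.0)[0]

# Example: an exponential zonal function s(t) = exp(t) in d = 8 dimensions.
print([funk_hecke_coefficient(np.exp, ell, d=8) for ell in range(4)])
\end{verbatim}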
\section{Zonal kernels}
\label{appendix:sec:zonal-kernels}
\subsection{Mercer's decomposition}
Zonal kernels can be seen as the spherical counterpart of stationary kernels. Stationary kernels are a function of $\vx - \vx'$ and are thus invariant to translations in the input space \citep{rasmussen2006}. Zonal kernels (defined on $\ensuremath{\sphere^{d-1}} \times \ensuremath{\sphere^{d-1}}$) are a function of $\vx^\top \vx'$ and are thus invariant to rotations.
The spherical harmonics are the eigenfunctions of the Laplace-Beltrami operator \cite{dai2013, frye2014}. In the main paper we show the commutativity of the Laplace-Beltrami operator and the kernel operator of zonal kernels. This means that the spherical harmonics are also the eigenfunctions of zonal kernels, as commuting operators share the same eigenfunctions.
Mercer's theorem allows us to express the kernel in terms of its eigenvalues and eigenfunctions.
\begin{theorem}[Mercer representation]
\label{appendix:theorem:mercer}
Any zonal kernel $k$ on the hypersphere can be decomposed as
\begin{equation}
k(\vx, \vx') = \sum_{\ell=0}^{\infty} \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \widehat{a}_{\ell, k} \ensuremath{\phi}_{\ell, k}(\vx) \ensuremath{\phi}_{\ell, k}(\vx'),
\end{equation}
where $\vx,\vx' \in \ensuremath{\sphere^{d-1}}$ and $\widehat{a}_{\ell, k}$ are the positive Fourier coefficients, $\ensuremath{\phi}_{\ell,k}$ denote the elements of the spherical harmonic basis in $\ensuremath{\sphere^{d-1}}$, and $N_\ell^d$ corresponds to the number of spherical harmonics for a given level $\ell$.
\end{theorem}
For zonal kernels the Fourier coefficients within a level are equal: $\widehat{a}_{\ell, k} = \widehat{a}_{\ell}$ for $1 \le k \le N_\ell^d$. This allows us to simplify the Mercer decomposition of a zonal kernel using \cref{appendix:theorem:addition} to
\begin{equation}
k(\vx, \vx') = \sum_{\ell=0}^{\infty} \widehat{a}_{\ell}
\frac{\ell + \alpha}{\alpha}\,
C_\ell^{(\alpha)}(\vx\transpose\vx'),
\end{equation}
with $\alpha = \frac{d-2}{2}$.
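For illustration (ours, not part of our implementation), a truncated version of this sum can be used directly to build a Gram matrix on the sphere: any non-negative, summable choice of coefficients $\widehat{a}_\ell$ yields a valid (positive semi-definite) zonal kernel, here taken for simplicity as a geometric decay.
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer

def zonal_gram(X, Y, a_hat, d):
    """Truncated Mercer sum for a zonal kernel; X, Y contain unit-norm rows."""
    alpha = (d - 2.0) / 2.0
    T = X @ Y.T                                  # pairwise inner products in [-1, 1]
    K = np.zeros_like(T)
    for ell, a in enumerate(a_hat):
        K += a * (ell + alpha) / alpha * eval_gegenbauer(ell, alpha, T)
    return K

rng = np.random.default_rng(0)
d = 5
X = rng.normal(size=(50, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # project onto S^{d-1}
K = zonal_gram(X, X, a_hat=[0.7 ** ell for ell in range(20)], d=d)
assert np.linalg.eigvalsh(K).min() > -1e-8       # PSD up to round-off
\end{verbatim}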
\subsection{RKHS}
Given the Mercer representation of a zonal kernel, its RKHS can be characterised by
\begin{equation*}
\ensuremath{{\mathcal{H}}} = \left\{
g =
\sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \widehat{g}_{\ell,k} \ensuremath{\phi}_{\ell, k} :
\sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace} \frac{|\widehat{g}_{\ell,k}|^2}{\widehat{a}_{\ell}} < \infty
\right\}
\end{equation*}
with a reproducing inner product between two functions $g(\vx) = \sum_{\ell, k} \widehat{g}_{\ell, k} \ensuremath{\phi}_{\ell, k}(\vx)$ and $h(\vx) = \sum_{\ell, k} \widehat{h}_{\ell, k} \ensuremath{\phi}_{\ell, k}(\vx)$ defined as
\begin{equation}
\label{appendix:eq:inner-product}
\langle g, h \rangle_{\ensuremath{{\mathcal{H}}}} =
\sum_{\ell=0}^\infty
\sum_{k=1}^{\ensuremath{N^d_\ell}\xspace}
\frac{\widehat{g}_{\ell, k} \widehat{h}_{\ell, k}}%
{\widehat{a}_{\ell}}.
\end{equation}
\begin{proof}{(Reproducing property).}
\label{appendix:proof:reproducing}
The Fourier coefficients for $k(\vx, \cdot): \ensuremath{\sphere^{d-1}} \rightarrow \Reals$ and $f: \ensuremath{\sphere^{d-1}} \rightarrow \Reals$ are $\widehat{a}_{\ell, k} \ensuremath{\phi}_{\ell, k}(\vx)$ and $\widehat{f}_{\ell, k}$, respectively.
Substituting these coefficients in \cref{appendix:eq:inner-product} gives:
\begin{align}
\langle k(\vx, \cdot), f \rangle_{\ensuremath{{\mathcal{H}}}}
& = \sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace}
\frac{\widehat{a}_{\ell} \ensuremath{\phi}_{\ell, k}(\vx) \widehat{f}_{\ell,k}}{\widehat{a}_{\ell}}\\
&= \sum_{\ell=0}^\infty \sum_{k=1}^{\ensuremath{N^d_\ell}\xspace}
\widehat{f}_{\ell,k} \ensuremath{\phi}_{\ell, k}(\vx) = f(\vx)
\end{align}
which proves the reproducing property.
\end{proof}
In the next sections we address the computation of the Fourier coefficients (eigenvalues) of the kernels: \cref{sec:appendix:arc-cosine} covers the arc-cosine kernel and \cref{appendix:sec:materns} the Mat\'ern family.
\subsection{Fourier coefficients for the Arc-Cosine kernel}
\label{sec:appendix:arc-cosine}
The Fourier coefficients are computed using \cref{appendix:theorem:funk}, where the shape function of the Arc-Cosine kernel of the first order \citep{cho2009kernel} is given by:
\begin{equation}
s(x) = \sin x + (\pi - x) \cos x.
\end{equation}
Notice that we expressed the shape function as a function of the angle between the two inputs $s: [0, \pi] \mapsto \Reals$, rather than the great-circle distance, as it simplifies the subsequent computations.
Using a change of variables we also rewrite \cref{appendix:theorem:funk}
\begin{equation}
\widehat{a}_{\ell}
= c_{d, \ell} \int_{0}^\pi\!\!s(x)\,C_\ell^{\frac{d-2}{2}}(\cos x) \sin^{d-2} x\,\calcd{x},
\end{equation}
with $c_{d, \ell} = \frac{\omega_{d}}{C_\ell^{(\alpha)}(1)}$.
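Before turning to the closed form, the integral above is also easy to evaluate by numerical quadrature, which provides an independent cross-check of the expressions derived below. The following sketch (ours) does this for the first-order arc-cosine shape function:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, eval_gegenbauer
from scipy.integrate import quad

def arc_cosine_coefficient(ell, d):
    """a_hat_ell for the first-order arc-cosine kernel via the angle-form integral."""
    alpha = (d - 2.0) / 2.0
    # omega_d = Omega_{d-2} / Omega_{d-1}, with Omega_{d-1} = 2 pi^{d/2} / Gamma(d/2).
    omega_d = (2.0 * np.pi ** ((d - 1) / 2.0) / gamma((d - 1) / 2.0)) \
            / (2.0 * np.pi ** (d / 2.0) / gamma(d / 2.0))
    c = omega_d / eval_gegenbauer(ell, alpha, 1.0)
    s = lambda x: np.sin(x) + (np.pi - x) * np.cos(x)
    integrand = lambda x: (s(x) * eval_gegenbauer(ell, alpha, np.cos(x))
                           * np.sin(x) ** (d - 2))
    return c * quad(integrand, 0.0, np.pi)[0]

print([arc_cosine_coefficient(ell, d=3) for ell in range(5)])
\end{verbatim}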
This one dimensional integral can be solved in closed-form for any setting of $d$ and $\ell$. Filling in the definition for the Gegenbauer polynomial (\cref{appendix:eq:gegenbauer}), we observe that we need a general solution of the integral $\int_0^\pi \left[ \sin(x) + (\pi -x) \cos(x) \right] \cos^n(x) \sin^m(x) {d} x$ for $n, m \in \mathbb{N}$.
The first term can be computed with the well-known result:
\begin{gather}
\int_0^{\pi} \sin^n(x) \cos^m(x) {d} x \\ =
\begin{cases}
0 & \text{if}\ m\ \text{odd}\\
\frac{(n-1)!!~(m-1)!!}{(n+m)!!} 2 & \text{if}\ m\ \text{even and }n\ \text{odd},\\
\frac{(n-1)!!~(m-1)!!}{(n+m)!!} \pi & \text{if}\ n,m\ \text{even}.
\end{cases}
\end{gather}
The second term is harder:
\begin{equation}
I = \int_0^{\pi} (\pi - x) \sin^n(x) \cos^m(x) {d} x
\end{equation}
which we solved using integration by parts with $u = \pi - x$ and $dv = \sin^n(x) \cos^m(x) dx$, so that
\begin{equation}
I = u(\pi) v(\pi) - u(0) v(0) + \int_0^{\pi} v(x') {d} x',
\end{equation}
where $v(x') = \int_0^{x'} \sin^n(x) \cos^m(x) {d} x$. Since $v(0) = 0$ and $u(\pi) = 0$, the boundary terms vanish and $I = \int_0^\pi v(x') {d} x'$.
We first focus on $v(x')$:
for \underline{$n$ odd}, there exists an $n' \in \mathbb{N}$ such that $n = 2n' + 1$, resulting in
\begin{align}
v(x') &= \int_0^{x'} \sin^{2n'}(x) \cos^m(x) \sin(x) {d} x \\
&= -\int_1^{\cos(x')} (1 - u^2)^{n'} u^m {d} u
\end{align}
Here we used $\sin^2(x) + \cos^2(x) = 1$ and the substitution $u = \cos(x) \implies {d} u = - \sin(x) {d} x$.
Using the binomial expansion, we get
\begin{align}
v(x') &= -\int_1^{\cos(x')} \sum_{i=0}^{n'} \binom{n'}{i} (-u^2)^i u^m {d} u \\
&= \sum_{i=0}^{n'} (-1)^{i+1} \binom{n'}{i} \frac{\cos(x')^{2i+m+1} - 1}{2i+m+1}.
\end{align}
Similarly, for \underline{$m$ odd}, we have $m=2m' + 1$ and use the substitution $u = \sin(x)$, to get
\begin{equation}
v(x') = \sum_{i=0}^{m'} (-1)^{i} \binom{m'}{i} \frac{\sin(x')^{2i+n+1}}{2i+n+1}.
\end{equation}
For \underline{$n$ and $m$ even}, we write $n' = n/2$ and $m' = m/2$ and use the power-reduction identities to get
\begin{equation}
v(x') = \int_0^{x'} \left(\frac{1 - \cos(2x)}{2}\right)^{n'} \left(\frac{1 + \cos(2x)}{2}\right)^{m'} {d} x
\end{equation}
Making use of the binomial expansion twice, we get
\begin{equation}
v(x') = 2^{-(n' + m')} \sum_{i,j=0}^{n', m'} (-1)^{i} \binom{n'}{i} \binom{m'}{j}
\int_0^{x'} \cos(2x)^{i+j} {d} x.
\end{equation}
Returning to the original problem $I = \int_0^\pi v(x') {d} x'$: depending on the parity of $n$ and $m$ we need to evaluate
\begin{equation}
\int_0^\pi \cos(x')^p {d} x' =
\begin{cases}
\frac{(p-1)!!}{p!!} \pi & \text{if}\ p\ \text{even} \\
0 & \text{if}\ p\ \text{odd}
\end{cases}
\end{equation}
and
\begin{equation}
\int_0^\pi \sin(x')^p {d} x' =
\begin{cases}
\frac{(p-1)!!}{p!!} \pi & \text{if}\ p\ \text{even} \\
\frac{(p-1)!!}{p!!} 2 & \text{if}\ p\ \text{odd}.
\end{cases}
\end{equation}
For $m$ and $n$ even we need to solve the double integral
\begin{equation}
\int_0^\pi \int_0^{x'} \cos(2x)^p {d} x {d} x' =
\begin{cases}
\frac{(p-1)!!}{p!!} \frac{\pi^2}{2} & \text{if}\ p\ \text{even} \\
0 & \text{if}\ p\ \text{odd}.
\end{cases}
\end{equation}
Combining these results gives us the solution for the integral $\int_0^\pi s(x) \cos^n(x) \sin^m(x) {d} x$ for any $n, m \in \Naturals$, which is necessary to compute $\widehat{a}_\ell$ for the arc-cosine kernel.
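The parity-case formulas above are easy to spot-check numerically; the following sketch (ours, with our own helper names) compares them against direct quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad, dblquad

def dfact(n):                       # double factorial, with (-1)!! = 0!! = 1
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def sin_cos(n, m):                  # int_0^pi sin^n(x) cos^m(x) dx
    if m % 2 == 1:
        return 0.0
    c = np.pi if n % 2 == 0 else 2.0
    return dfact(n - 1) * dfact(m - 1) / dfact(n + m) * c

def cos_power(p):                   # int_0^pi cos^p(x') dx'
    return dfact(p - 1) / dfact(p) * np.pi if p % 2 == 0 else 0.0

def sin_power(p):                   # int_0^pi sin^p(x') dx'
    return dfact(p - 1) / dfact(p) * (np.pi if p % 2 == 0 else 2.0)

def cos2_double(p):                 # int_0^pi int_0^{x'} cos(2x)^p dx dx'
    return dfact(p - 1) / dfact(p) * np.pi ** 2 / 2.0 if p % 2 == 0 else 0.0

for p in range(6):
    assert np.isclose(quad(lambda x: np.cos(x) ** p, 0, np.pi)[0], cos_power(p))
    assert np.isclose(quad(lambda x: np.sin(x) ** p, 0, np.pi)[0], sin_power(p))
    num = dblquad(lambda x, xp: np.cos(2 * x) ** p, 0, np.pi,
                  lambda xp: 0.0, lambda xp: xp)[0]
    assert np.isclose(num, cos2_double(p), atol=1e-9)

for n in range(4):
    for m in range(4):
        num = quad(lambda x: np.sin(x) ** n * np.cos(x) ** m, 0, np.pi)[0]
        assert np.isclose(num, sin_cos(n, m), atol=1e-9)
\end{verbatim}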
\subsection{Fourier coefficients for the Mat\'ern family kernels}
\label{appendix:sec:materns}
The Mat\'ern covariance between two points $x, x'$ separated by a distance $r=|x-x'|$ is given by \citep{rasmussen2006}:
\begin{equation}
k_\nu(r)=\sigma ^{2}{\frac {2^{1-\nu }}{\Gamma (\nu )}}{\Bigg (}{\sqrt {2\nu }}{\frac {r}{\rho }}{\Bigg )}^{\nu }K_{\nu }{\Bigg (}{\sqrt {2\nu }}{\frac {r}{\rho }}{\Bigg )},
\end{equation}
where $\Gamma$ is the gamma function, $K_{\nu}$ is the modified Bessel function of the second kind, and $\rho$ (lengthscale) and $\nu$ (differentiability) are non-negative parameters of the covariance. The covariance has a spectral density defined on $\Reals^d$
\begin{equation}
\displaystyle S(\omega)={\frac {2^{d}\pi ^{\frac {d}{2}}\Gamma (\nu +{\frac {d}{2}})(2\nu )^{\nu }}{\Gamma (\nu )\rho ^{2\nu }}}\left({\frac {2\nu }{\rho ^{2}}}+4\pi ^{2}\omega^{2}\right)^{-\left(\nu +{\frac {d}{2}}\right)}.
\end{equation}
A key result from \citet[eq. 20]{solin2014hilbert} is to show that the coefficients $\widehat{a}_{\ell, k}$ have a simple expression that depends on the kernel spectral density $S$ and the eigenvalues of the Laplace-Beltrami operator (\cref{appendix:theorem:eigenvalues}). For GPs on $\ensuremath{\sphere^{d-1}}$ the coefficients boil down to
\begin{equation}
\widehat{a}_{\ell, k } = S\left(\sqrt{\ell (\ell + d - 2)}\right).
\end{equation}
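As an illustrative sketch (ours), the two expressions above translate into a few lines of code returning the Mat\'ern coefficients on $\ensuremath{\sphere^{d-1}}$ for a given smoothness $\nu$ and lengthscale $\rho$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def matern_spectral_density(omega, nu, rho, d, sigma2=1.0):
    """Spectral density S(omega) of the Matern-nu kernel on R^d."""
    const = (sigma2 * 2.0 ** d * np.pi ** (d / 2.0) * gamma(nu + d / 2.0)
             * (2.0 * nu) ** nu / (gamma(nu) * rho ** (2.0 * nu)))
    return const * (2.0 * nu / rho ** 2
                    + 4.0 * np.pi ** 2 * omega ** 2) ** (-(nu + d / 2.0))

def matern_sphere_coefficients(num_levels, nu, rho, d):
    """a_hat_ell = S(sqrt(ell (ell + d - 2))) for ell = 0, ..., num_levels - 1."""
    ells = np.arange(num_levels)
    return matern_spectral_density(np.sqrt(ells * (ells + d - 2.0)), nu, rho, d)

print(matern_sphere_coefficients(5, nu=1.5, rho=0.5, d=8))
\end{verbatim}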
\raggedbottom
|
\section{Introduction}
This article is motivated by the simple observation that the
transformation properties of the eight real components
of a complex Dirac spinor under a Lorentz
transformation may be alternatively
formulated without
any explicit reference
to complex-valued quantities.
This is accomplished by constructing a representation of the
Lorentz group in terms of $4 \times 4$ matrices
defined over the hyperbolic number system
\cite{cap}--\cite{yaglom}.
After studying how this new representation is related to
the familiar complex one, we establish
an automorphism symmetry of the complex Dirac spinor.
We also discuss natural embeddings of this new representation
into a maximal Lie algebra, which turns out to be
isomorphic to the algebra of generators of SO(3,3;${\bf R}$),
and thus distinct from the conformal group SO(2,4;${\bf R}$).
To begin, we revisit the familiar Lie algebra of the
Lorentz group O(1,3;${\bf R}$).
\section{The Lorentz Algebra}
\subsection{A Complex Representation}
Under Lorentz transformations, the complex Dirac 4-spinor $\Psi_{\bf C}$
transforms as follows \cite{ryder}:
\begin{equation}
\Psi_{\bf C} \rightarrow \left(
\begin{array}{cc}
e^{\frac{{\rm i}}{2}
\mbox{${\bf \sigma \cdot}$}
( \mbox{${\bf \theta}$} - {\rm i}
\mbox{${\bf \phi}$})}
& 0 \\
0 & e^{\frac{{\rm i}}{2}
\mbox{${\bf \sigma \cdot}$}
( \mbox{${\bf \theta}$} + {\rm i}
\mbox{${\bf \phi}$})}
\end{array}
\right) \cdot \Psi_{\bf C},
\label{lorentz1}
\end{equation}
where ${\bf \sigma} = (\sigma_{x},\sigma_{y},\sigma_{z})$ represents
the well known Pauli spin matrices:
\begin{equation}
\sigma_x = \left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right), \hspace{5mm}
\sigma_y = \left(
\begin{array}{cc}
0 & -{\rm i} \\
{\rm i} & 0
\end{array}
\right), \hspace{5mm}
\sigma_z = \left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right).
\end{equation}
The three real parameters ${\bf \theta} = (\theta_1,\theta_2,\theta_3)$
parameterize spatial rotations,
while ${\bf \phi} = (\phi_1,\phi_2,\phi_3)$ parameterizes
Lorentz boosts along each of the coordinate axes.
There are thus six real numbers parameterizing a given element
in the Lorentz group.
Let us now introduce the six matrices $E_i$ and $F_i$, $i=1,2,3$,
by writing
\begin{equation}
\begin{array}{ccc}
E_1 = \frac{1}{2} \left(
\begin{array}{cc}
\sigma_x & 0 \\
0 & -\sigma_x
\end{array}
\right)
&
E_2 = -\frac{{\rm i}}{2} \left(
\begin{array}{cc}
\sigma_y & 0 \\
0 & \sigma_y
\end{array}
\right)
& E_3 = \frac{1}{2} \left(
\begin{array}{cc}
\sigma_z & 0 \\
0 & -\sigma_z
\end{array}
\right) \\
F_1 = \frac{{\rm i}}{2} \left(
\begin{array}{cc}
\sigma_x & 0 \\
0 & \sigma_x
\end{array}
\right)
&
F_2 = \frac{1}{2} \left(
\begin{array}{cc}
\sigma_y & 0 \\
0 & -\sigma_y
\end{array}
\right)
&
F_3 = \frac{{\rm i}}{2} \left(
\begin{array}{cc}
\sigma_z & 0 \\
0 & \sigma_z
\end{array}
\right).
\end{array}
\label{ef}
\end{equation}
Then the Lorentz transformation (\ref{lorentz1}) may be written as follows:
\begin{equation}
\Psi_{\bf C} \rightarrow \exp{(\phi_1 E_1 - \theta_2 E_2 + \phi_3 E_3 +
\theta_1 F_1 + \phi_2 F_2 + \theta_3 F_3)} \cdot \Psi_{\bf C}.
\label{transformD}
\end{equation}
We remark that the transformation (\ref{transformD}) has the form
$\Psi_{\bf C} \rightarrow U \cdot \Psi_{\bf C}$, where $U$ may
be thought of as an element of the (fifteen dimensional)
conformal group SU(2,2;${\bf C}$). The Lorentz symmetry
is therefore a six dimensional subgroup of the conformal group.
At this point, it is sufficient to note that the matrices
$E_i$ and $F_i$ defined in (\ref{ef}) satisfy the following
commutation relations:
\begin{equation}
\begin{array}{llll}
[E_1,E_2] = E_3 & [F_1,F_2] = -E_3 & [E_1,F_2] = F_3 & [F_1,E_2]=F_3 \\
\mbox{}[E_2,E_3] = E_1 & [F_2,F_3] = -E_1 & [E_2,F_3] = F_1 & [F_2,E_3]=F_1 \\
\mbox{}[E_3,E_1] = -E_2 & [F_3,F_1] = E_2 & [E_3,F_1] = -F_2 & [F_3,E_1]=-F_2
\end{array}
\label{comm}
\end{equation}
All other commutators vanish. Abstractly, these relations define
the Lie algebra of the Lorentz group O(1,3;${\bf R}$), and the matrices
$E_i$ and $F_i$ defined by (\ref{ef}) correspond to a complex
representation of this algebra.
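These relations are straightforward to verify numerically. The following short sketch (illustrative only; not intended as part of the formal development) builds the matrices (\ref{ef}) and checks a representative subset of the commutators:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def blkdiag(a, b):
    return np.block([[a, Z], [Z, b]])

E = [0.5 * blkdiag(sx, -sx), -0.5j * blkdiag(sy, sy), 0.5 * blkdiag(sz, -sz)]
F = [0.5j * blkdiag(sx, sx), 0.5 * blkdiag(sy, -sy), 0.5j * blkdiag(sz, sz)]
comm = lambda a, b: a @ b - b @ a

assert np.allclose(comm(E[0], E[1]), E[2])        # [E1, E2] =  E3
assert np.allclose(comm(F[0], F[1]), -E[2])       # [F1, F2] = -E3
assert np.allclose(comm(E[0], F[1]), F[2])        # [E1, F2] =  F3
assert np.allclose(comm(E[2], E[0]), -E[1])       # [E3, E1] = -E2
assert np.allclose(comm(E[0], F[0]), 0 * E[0])    # same-index mixed commutators vanish
\end{verbatim}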
\subsection{A Hyperbolic Representation}
Our goal in this section is to present an explicit
representation of the Lorentz algebra (\ref{comm}) in terms
of $4 \times 4$ matrices
defined over the hyperbolic number system.
This number system will be briefly discussed next.
\subsubsection{The Hyperbolic Number System}
We consider numbers of the form
\begin{equation}
x+ {\rm j}y,
\end{equation}
where $x$ and $y$ are real numbers, and ${\rm j}$ is a commuting
element satisfying the relation
\begin{equation}
{\rm j}^2 = 1.
\end{equation}
The number system generated by this simple algebra has
a long history \cite{cap}--\cite{yaglom}, and is known as
the `hyperbolic number system'. The
symbol ${\bf D}$ will be used to denote the hyperbolic number
system, where `D' stands for `double' \cite{yaglom}.
In this article, we exploit very basic arithmetical properties
of this algebra. For example,
addition, subtraction, and multiplication are defined in the obvious way:
\begin{eqnarray}
(x_1+{\rm j}y_1) \pm (x_2+{\rm j}y_2) & = &
(x_1 \pm x_2) + {\rm j}(y_1 \pm y_2), \\
(x_1+{\rm j}y_1) \cdot (x_2+{\rm j}y_2) & = &
(x_1 x_2 + y_1 y_2) + {\rm j} (x_1 y_2 + y_1 x_2).
\end{eqnarray}
Moreover, given any hyperbolic number $w=x+{\rm j}y$, we define the
`${\bf D}$-conjugate of $w$', written ${\overline w}$, to be
\begin{equation}
{\overline w} = x - {\rm j}y.
\end{equation}
It is easy to check the following; for any $w_1,w_2 \in {\bf D}$,
we have
\begin{eqnarray}
{\overline{w_1+w_2}} & = & {\overline w_1} + {\overline w_2}, \\
{\overline{w_1\cdot w_2}} & = & {\overline w_1} \cdot {\overline w_2}.
\end{eqnarray}
We also have the identity
\begin{equation}
{\overline w} \cdot w = x^2 - y^2
\end{equation}
for any hyperbolic number $w=x+{\rm j}y$. Thus ${\overline w} \cdot w$
is always real, although unlike
the complex number system, it may take negative values.
At this point, it is convenient to define the `modulus squared'
of $w$, written $|w|^2$, as
\begin{equation}
|w|^2 = {\overline w} \cdot w.
\end{equation}
A nice consequence of these definitions is that for
any hyperbolic numbers $w_1,w_2 \in {\bf D}$, we have
\begin{equation}
|w_1 \cdot w_2|^2 = |w_1|^2 \cdot |w_2|^2.
\end{equation}
Now observe that if $|w|^2$ does not vanish, the quantity
\begin{equation}
w^{-1} = \frac{1}{|w|^2} \cdot {\overline{w}}
\end{equation}
is a well-defined unique inverse for $w$. So $w\in {\bf D}$ fails to
have an inverse if and only if $|w|^2 = x^2-y^2 =0$.
The hyperbolic number system is therefore a non-division algebra.
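The arithmetic described above is simple enough to capture in a few lines of code. The following sketch (illustrative only) implements hyperbolic numbers and checks the multiplicativity of the modulus and the stated form of the inverse:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Hyperbolic:
    x: float     # real part
    y: float     # coefficient of j, with j * j = +1

    def __mul__(self, w):
        return Hyperbolic(self.x * w.x + self.y * w.y,
                          self.x * w.y + self.y * w.x)

    def conj(self):                 # D-conjugate
        return Hyperbolic(self.x, -self.y)

    def mod2(self):                 # |w|^2 = conj(w) * w = x^2 - y^2 (can be negative)
        return self.x ** 2 - self.y ** 2

    def inverse(self):              # defined only when |w|^2 != 0
        m = self.mod2()
        return Hyperbolic(self.x / m, -self.y / m)

w1, w2 = Hyperbolic(2.0, 1.0), Hyperbolic(0.5, -3.0)
assert abs((w1 * w2).mod2() - w1.mod2() * w2.mod2()) < 1e-12
p = w1 * w1.inverse()
assert abs(p.x - 1.0) < 1e-12 and abs(p.y) < 1e-12
\end{verbatim}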
\subsubsection{The Hyperbolic Unitary Groups}
Suppose $H$ is an $n \times n$ matrix defined over ${\bf D}$.
Then $H^{\dagger}$ will denote the $n\times n$ matrix
which is obtained by transposing $H$, and then conjugating
each of the entries: $H^{\dagger} = \overline{H}^{T}$.
We say $H$ is Hermitian with respect to ${\bf D}$ if
$H^{\dagger} = H$, and anti-Hermitian if
$H^{\dagger} = -H$.
Note that if $H$ is an $n \times n$ Hermitian matrix over ${\bf D}$,
then $U = e^{{\rm j}H}$ has the property
\begin{equation}
U^{\dagger}\cdot U = U \cdot U^{\dagger} = 1.
\label{un}
\end{equation}
The set of all $n \times n$ matrices over ${\bf D}$ satisfying
the constraint (\ref{un}) forms a group, which we will
denote as U$(n,{\bf D})$, and call the `unitary group of
$n \times n$ matrices over ${\bf D}$', or
`hyperbolic unitary group'.
The `special unitary' subgroup SU($n,{\bf D}$) will be defined as all
elements $U \in$U($n,{\bf D}$) satisfying the additional constraint
\begin{equation}
\det{U} = 1.
\end{equation}
Note that the hyperbolic unitary groups we have defined above
may be isomorphic to well known non-compact groups
that are usually defined over the complex number field.
For example, the special unitary hyperbolic
group SU$(2,{\bf D})$ is isomorphic to the
complex group SU(1,1;${\bf C}$) by virtue of the
identification\footnote{To show that any $2 \times 2$ matrix
$U \in SU(2,{\bf D})$ has the form given in eqn (\ref{iso2DD}),
we use the facts $U^{\dagger} = U^{-1}$, and $\det U =1$.}
\begin{equation}
\left(
\begin{array}{cc}
a_1 + {\rm i}a_2 & b_1 + {\rm i}b_2 \\
b_1 - {\rm i}b_2 & a_1 - {\rm i} a_2
\end{array}
\right) \hspace{4mm}
\leftrightarrow \hspace{4mm}
\left(
\begin{array}{cc}
a_1 + {\rm j}b_1 & -a_2 + {\rm j}b_2 \\
a_2 + {\rm j}b_2 & a_1 - {\rm j} b_1
\end{array}
\right),
\label{iso2DD}
\end{equation}
where the four real parameters $a_1,a_2,b_1$ and $b_2$ satisfy
the constraint $a_1^2+a_2^2-b_1^2-b_2^2=1$.
Of course, one also has the isomorphism SU$(2,{\bf D}) \equiv SL(2;{\bf R})$
given by the group isomorphism
\begin{equation}
\left(
\begin{array}{cc}
a_1 + {\rm j}b_1 & -a_2 + {\rm j}b_2 \\
a_2 + {\rm j}b_2 & a_1 - {\rm j} b_1
\end{array}
\right) \hspace{4mm}
\leftrightarrow \hspace{4mm}
\left(
\begin{array}{cc}
a_1 + b_1 & -a_2 + b_2 \\
a_2 + b_2 & a_1 - b_1
\end{array}
\right),
\label{iso2D}
\end{equation}
where the real parameters $a_1,a_2,b_1$ and $b_2$ satisfy
the constraint $a_1^2+a_2^2-b_1^2-b_2^2=1$ as before.
Note that this correspondence was obtained by mapping the variable
${\rm j}$ to $+1$; a second, distinct isomorphism is obtained by
mapping ${\rm j}$ to $-1$.
Actually, this example suggests that we might be able
to identify the special unitary groups SU($n$;${\bf D}$)
with the special linear groups
SL($n$;${\bf R}$). An isomorphism was established for $n=2$,
but what can we say about $n > 2$? One approach is to consider
what happens near the identity. In this case, one may construct
the Lie algebra for SU($n$;${\bf D}$), which is generated by
$n^2 -1$ traceless anti-Hermitian $n \times n$
matrices over ${\bf D}$.
Any element sufficiently close to the
identity is therefore obtained by exponentiating
a unique real linear combination of these generators.
We then map such elements into SL(n;${\bf R}$)
by mapping the variable ${\rm j}$ to $+1$. The generators
are now real, traceless $n \times n$ matrices, and so form
the basis of the Lie algebra for SL(n;${\bf R}$). Thus, the groups
SU($n$;${\bf D}$) and SL($n$;${\bf R}$) possess isomorphic
Lie algebras.
\subsubsection{A Hyperbolic Representation}
As promised, we will give an explicit representation of
the Lorentz algebra (\ref{comm}) in terms of matrices
defined over ${\bf D}$. First, we define three $2 \times 2$ matrices
${\bf \tau} = (\tau_1,\tau_2,\tau_3)$ by writing
\begin{equation}
\tau_1 = \left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right),
\hspace{5mm}
\tau_2 = \left(
\begin{array}{cc}
0 & -{\rm j} \\
{\rm j} & 0
\end{array}
\right),
\hspace{5mm}
\tau_3 = \left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right).
\hspace{5mm}
\end{equation}
These matrices satisfy the following commutation relations:
\begin{equation}
[\tau_1,\tau_2]=2{\rm j} \tau_3 \hspace{6mm}
[\tau_2,\tau_3]=2{\rm j} \tau_1 \hspace{6mm}
[\tau_3,\tau_1]=-2{\rm j} \tau_2
\end{equation}
Now define the matrices ${\tilde E}_i$ and ${\tilde F}_i$,
$i=1,2,3,$ by setting
\begin{equation}
{\tilde E}_i = \frac{{\rm j}}{2}\left(
\begin{array}{cc}
\tau_i & 0 \\
0 & \tau_i
\end{array}
\right),
\hspace{6mm}
{\tilde F}_i = \frac{1}{2}\left(
\begin{array}{cc}
0 & \tau_i \\
-\tau_i & 0
\end{array}
\right), \hspace{5mm} i=1,2,3.
\label{semirep}
\end{equation}
The $4 \times 4$ matrices ${\tilde E}_i$ and ${\tilde F}_i$
defined above are anti-Hermitian with respect to ${\bf D}$,
and satisfy the Lorentz algebra (\ref{comm})
after making the substitutions $E_i \rightarrow {\tilde E}_i$
and $F_i \rightarrow {\tilde F}_i$, $i=1,2,3$. We may therefore
introduce a 4-component `hyperbolic' spinor
$\Psi_{\bf D} \in {\bf D}^4$
transforming as follows under Lorentz transformations:
\begin{equation}
\Psi_{\bf D} \rightarrow
\exp{(\phi_1 {\tilde E}_1 - \theta_2 {\tilde E}_2 +
\phi_3 {\tilde E}_3 +
\theta_1 {\tilde F}_1 + \phi_2 {\tilde F}_2 + \theta_3
{\tilde F}_3)} \cdot \Psi_{\bf D},
\label{transformII}
\end{equation}
which is evidently the analogue of transformation
(\ref{transformD}). Note that the transformation (\ref{transformII})
has the form $\Psi_{\bf D} \rightarrow U \cdot \Psi_{\bf D}$,
where $U \in$ SU($4,{\bf D}$), since the generators
${\tilde E}_i$ and ${\tilde F}_i$
are traceless and anti-Hermitian with respect to ${\bf D}$.
Thus the Lorentz group is a {\em subgroup} of the hyperbolic
special unitary group SU($4,{\bf D}$).
In the next section, we discuss a relation between the
complex Dirac spinor $\Psi_{\bf C}$, and the 4-component
hyperbolic spinor $\Psi_{\bf D}$ defined above.
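The claim that the generators (\ref{semirep}) satisfy the algebra (\ref{comm}) can also be verified numerically by representing the commuting unit ${\rm j}$ as the real $2 \times 2$ matrix $J$ with $J^2 = 1$, so that a matrix $A + {\rm j}B$ over ${\bf D}$ is mapped to the real matrix $A \otimes 1 + B \otimes J$. The following sketch (illustrative only) performs this check:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
J = np.array([[0., 1.], [1., 0.]])           # the unit j, with J @ J = I2

def hyper(A, B):                              # real representation of A + jB
    return np.kron(A, I2) + np.kron(B, J)

Z2 = np.zeros((2, 2))
tau = [hyper(np.array([[0., 1.], [1., 0.]]), Z2),          # tau_1
       hyper(Z2, np.array([[0., -1.], [1., 0.]])),         # tau_2 = j [[0,-1],[1,0]]
       hyper(np.array([[1., 0.], [0., -1.]]), Z2)]         # tau_3
jj = hyper(Z2, np.eye(2))                     # the scalar j acting on a 2x2 block

Z4 = np.zeros((4, 4))
E = [0.5 * np.block([[jj @ t, Z4], [Z4, jj @ t]]) for t in tau]
F = [0.5 * np.block([[Z4, t], [-t, Z4]]) for t in tau]
comm = lambda a, b: a @ b - b @ a

assert np.allclose(comm(E[0], E[1]), E[2])    # [E1, E2] =  E3
assert np.allclose(comm(F[0], F[1]), -E[2])   # [F1, F2] = -E3
assert np.allclose(comm(E[0], F[1]), F[2])    # [E1, F2] =  F3
assert np.allclose(comm(E[2], E[0]), -E[1])   # [E3, E1] = -E2
\end{verbatim}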
\section{Equivalences between Spinor Transformations}
\subsection{An Equivalence}
\label{isomorph}
Consider an {\em infinitesimal} Lorentz transformation of the
complex Dirac spinor,
\begin{equation}
\Psi_{\bf C} \rightarrow \exp{(\phi_1 E_1 - \theta_2 E_2 + \phi_3 E_3 +
\theta_1 F_1 + \phi_2 F_2 + \theta_3 F_3)} \cdot \Psi_{\bf C},
\label{transformsmall}
\end{equation}
where
\begin{equation}
\Psi_{\bf C} =
\left(
\begin{array}{c}
x_1+{\rm i}y_1 \\
x_2+{\rm i}y_2 \\
x_3+{\rm i}y_3 \\
x_4+{\rm i}y_4
\end{array}
\label{cd}
\right),
\end{equation}
and $E_i,F_i$ are specified by (\ref{ef}).
The eight variables
$x_i$ and $y_i$, $i=1,2,3,4$, are taken to be real.
Now consider the corresponding
infinitesimal Lorentz transformation of the hyperbolic spinor
$\Psi_{\bf D}$,
\begin{equation}
\Psi_{\bf D} \rightarrow
\exp{(\phi_1 {\tilde E}_1 - \theta_2 {\tilde E}_2 + \phi_3
{\tilde E}_3 +
\theta_1 {\tilde F}_1 + \phi_2 {\tilde F}_2 + \theta_3
{\tilde F}_3)} \cdot \Psi_{\bf D},
\label{transformsmallII}
\end{equation}
where
\begin{equation}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1+{\rm j}b_1 \\
a_2+{\rm j}b_2 \\
a_3+{\rm j}b_3 \\
a_4+{\rm j}b_4
\end{array}
\right).
\end{equation}
The matrices ${\tilde E}_i,{\tilde F}_i$ are given by
(\ref{semirep}), and
the eight variables $a_i$ and $b_i$, $i=1,2,3,4$, are real-valued.
It is now straightforward to check that the infinitesimal
transformations (\ref{transformsmall}) and
(\ref{transformsmallII})
induce {\em equivalent} transformations of the eight real
components of the corresponding spinors ($\Psi_{\bf C}$ and
$\Psi_{\bf D}$)
if we make the following identifications\footnote{The factor of
$1/\sqrt{2}$ is arbitrary, and introduced for later convenience.}:
\begin{equation}
\begin{array}{cccc}
a_1 \leftrightarrow \frac{1}{\sqrt{2}}(y_1+y_3) \hspace{4mm} &
a_2 \leftrightarrow \frac{1}{
\sqrt{2}}(y_2 + y_4) \hspace{4mm}&
a_3 \leftrightarrow \frac{1}{\sqrt{2}}(x_1 + x_3) \hspace{4mm} &
a_4 \leftrightarrow \frac{1}{\sqrt{2}}(x_2 + x_4) \\
b_1 \leftrightarrow \frac{1}{\sqrt{2}}(y_1-y_3) \hspace{4mm} &
b_2 \leftrightarrow \frac{1}{\sqrt{2}}(y_2-y_4) \hspace{4mm} &
b_3 \leftrightarrow \frac{1}{\sqrt{2}}(x_1 - x_3) \hspace{4mm}&
b_4 \leftrightarrow \frac{1}{\sqrt{2}}(x_2 - x_4)
\end{array}
\end{equation}
In particular, we have the identification
\begin{equation}
(I) \hspace{3mm} \Psi_{\bf C} =
\left(
\begin{array}{c}
x_1+{\rm i}y_1 \\
x_2+{\rm i}y_2 \\
x_3+{\rm i}y_3 \\
x_4+{\rm i}y_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi_{\bf D} =
\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
(y_1+y_3) +{\rm j}(y_1-y_3) \\
(y_2 + y_4) +{\rm j}(y_2-y_4) \\
(x_1 + x_3) +{\rm j}(x_1 - x_3) \\
(x_2 + x_4) +{\rm j}(x_2 - x_4)
\end{array}
\right),
\label{iso}
\end{equation}
which establishes an exact equivalence between a complex
Lorentz transformation [Eqn(\ref{transformD})]
acting on the Dirac 4-spinor
$\Psi_{\bf C}$, and the corresponding
Lorentz transformation [Eqn(\ref{transformII})]
acting on a hyperbolic 4-spinor $\Psi_{\bf D}$.
It turns out that
the equivalence specified by the identification (\ref{iso})
is not unique. There are additional identifications that
render the complex and hyperbolic Lorentz transformations
equivalent, and we list three more below:
\begin{equation}
(II) \hspace{3mm} \left(
\begin{array}{c}
x_1+{\rm i}y_1 \\
x_2+{\rm i}y_2 \\
x_3+{\rm i}y_3 \\
x_4+{\rm i}y_4
\end{array}
\right) \hspace{3mm} \leftrightarrow \hspace{3mm}
\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
-(y_2+y_4) +{\rm j}(y_2-y_4) \\
(y_1 + y_3) -{\rm j}(y_1-y_3) \\
(x_2 + x_4) -{\rm j}(x_2 - x_4) \\
-(x_1 + x_3) +{\rm j}(x_1 - x_3)
\end{array}
\right),
\end{equation}
\begin{equation}
(III) \hspace{3mm}
\left(
\begin{array}{c}
x_1+{\rm i}y_1 \\
x_2+{\rm i}y_2 \\
x_3+{\rm i}y_3 \\
x_4+{\rm i}y_4
\end{array}
\right) \hspace{3mm} \leftrightarrow \hspace{3mm}
\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
-(x_1+x_3) -{\rm j}(x_1-x_3) \\
-(x_2 + x_4) -{\rm j}(x_2-x_4) \\
(y_1 + y_3) +{\rm j}(y_1 - y_3) \\
(y_2 + y_4) +{\rm j}(y_2 - y_4)
\end{array}
\right),
\end{equation}
and
\begin{equation}
(IV) \hspace{3mm}
\left(
\begin{array}{c}
x_1+{\rm i}y_1 \\
x_2+{\rm i}y_2 \\
x_3+{\rm i}y_3 \\
x_4+{\rm i}y_4
\end{array}
\right) \hspace{3mm} \leftrightarrow \hspace{3mm}
\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
-(x_2+x_4) +{\rm j}(x_2-x_4) \\
(x_1 + x_3) -{\rm j}(x_1-x_3) \\
-(y_2 + y_4) +{\rm j}(y_2 - y_4) \\
(y_1 + y_3) -{\rm j}(y_1 - y_3)
\end{array}
\right).
\end{equation}
Four more identifications may be obtained
by a simple `reflection' procedure; simply multiply
each hyperbolic spinor appearing in identifications
(I),(II),(III) and (IV) above by the
variable ${\rm j}$. This has the effect of interchanging
the `real' and `imaginary' parts of each component
in the spinor. We have thus enumerated a total of eight
distinct identifications, and an open question is whether
there are additional (linearly independent) identifications
that can be made. We leave this question for future work.
\subsection{Parity}
Under parity, the Dirac 4-spinor $\Psi_{\bf C}$ transforms as follows
\cite{ryder}:
\begin{equation}
\Psi_{\bf C} \rightarrow
\left(
\begin{array}{cc}
0 & \mbox{{\bf 1}}_{2 \times 2} \\
\mbox{{\bf 1}}_{2 \times 2} & 0
\end{array}
\right)
\cdot
\Psi_{\bf C},
\end{equation}
or, in terms of the eight real components $x_i$ and $y_i$,
$i=1,2,3,4$, of the Dirac 4-spinor
$\Psi_{\bf C}$ specified by (\ref{cd}), we have
\begin{equation}
\begin{array}{cccc}
x_1 \rightarrow x_3 \hspace{5mm} &
x_2 \rightarrow x_4 \hspace{5mm} &
x_3 \rightarrow x_1 \hspace{5mm} &
x_4 \rightarrow x_2 \\
y_1 \rightarrow y_3 \hspace{5mm} &
y_2 \rightarrow y_4 \hspace{5mm} &
y_3 \rightarrow y_1 \hspace{5mm} &
y_4 \rightarrow y_2
\end{array}
\end{equation}
According to the identifications (I),(II),(III) and (IV)
of Section \ref{isomorph},
a parity transformation on $\Psi_{\bf C}$ corresponds
to ${\bf D}$-conjugation of each component of $\Psi_{\bf D}$. Thus,
$\Psi_{\bf D} \rightarrow \Psi_{\bf D}^{\ast}$ under parity\footnote{
$\Psi_{\bf D}^{\ast}$ denotes taking the ${\bf D}$-conjugate
of each component in $\Psi_{\bf D}$.} for these identifications.
The `reflected' forms of these identifications
induce the transformation $\Psi_{\bf D} \rightarrow {\rm j}
\Psi_{\bf D}^{\ast}$ under parity. Thus the mathematical
operation of ${\bf D}$-conjugation
is closely related to the parity symmetry operation.
\section{An Automorphism Algebra of the Dirac Spinor}
\label{auto}
The existence of distinct equivalences between the transformation
properties of complex (or Dirac)
and hyperbolic spinors
permits one to construct automorphisms
of the complex Dirac spinor that leave the transformation
properties of its eight real components intact under
Lorentz transformations.
In order to investigate the algebra underlying
the set of all possible automorphisms,
it is convenient to change our current basis to
the so-called `standard representation' of the
Lorentz group \cite{ryder}. The Dirac 4-spinor
$\Psi^{SR}_{\bf C}$
in the standard representation
is related to the original 4-spinor $\Psi_{\bf C}$
according to the relation
\begin{equation}
\Psi^{SR}_{\bf C}
=
\frac{1}{\sqrt{2}}
\left(
\begin{array}{cc}
1 & 1 \\
1 & -1
\end{array}
\right) \cdot \Psi_{\bf C}.
\end{equation}
The identifications (I)-(IV) stated in Section \ref{isomorph}
are now equivalent to the following identifications:
\begin{equation}
(I)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
a_3+{\rm i}a_1 \\
a_4+{\rm i}a_2 \\
b_3+{\rm i}b_1 \\
b_4+{\rm i}b_2
\end{array}
\right) \label{firstdash}
\end{equation}
\begin{equation}
(II)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
-a_4+{\rm i}a_2 \\
a_3-{\rm i}a_1 \\
b_4-{\rm i}b_2 \\
-b_3+{\rm i}b_1
\end{array}
\right)
\end{equation}
\begin{equation}
(III)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
-a_1+{\rm i}a_3 \\
-a_2+{\rm i}a_4 \\
-b_1+{\rm i}b_3 \\
-b_2+{\rm i}b_4
\end{array}
\right)
\end{equation}
\begin{equation}
(IV)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
a_2+{\rm i}a_4 \\
-a_1-{\rm i}a_3 \\
-b_2-{\rm i}b_4 \\
b_1+{\rm i}b_3
\end{array}
\right).
\end{equation}
In addition, we have four
more which correspond to the `reflected' form
of the above identifications, and are obtained by interchanging
the `real' and `imaginary' parts of the components of
$\Psi_{\bf D}$:
\begin{equation}
(V)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
b_3+{\rm i}b_1 \\
b_4+{\rm i}b_2 \\
a_3+{\rm i}a_1 \\
a_4+{\rm i}a_2
\end{array}
\right)
\end{equation}
\begin{equation}
(VI)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
-b_4+{\rm i}b_2 \\
b_3-{\rm i}b_1 \\
a_4-{\rm i}a_2 \\
-a_3+{\rm i}a_1
\end{array}
\right)
\end{equation}
\begin{equation}
(VII)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
-b_1+{\rm i}b_3 \\
-b_2+{\rm i}b_4 \\
-a_1+{\rm i}a_3 \\
-a_2+{\rm i}a_4
\end{array}
\right)
\end{equation}
\begin{equation}
(VIII)' \hspace{7mm}
\Psi_{\bf D} =
\left(
\begin{array}{c}
a_1 +{\rm j}b_1 \\
a_2 +{\rm j}b_2 \\
a_3 +{\rm j}b_3 \\
a_4 +{\rm j}b_4
\end{array}
\right) \hspace{5mm} \leftrightarrow \hspace{5mm}
\Psi^{SR}_{\bf C} =
\left(
\begin{array}{c}
b_2+{\rm i}b_4 \\
-b_1-{\rm i}b_3 \\
-a_2-{\rm i}a_4 \\
a_1+{\rm i}a_3
\end{array}
\right).
\end{equation}
Recall what these identifications mean; namely, under any
given Lorentz transformation [Eqn(\ref{transformII})] of
$\Psi_{\bf D}$, the eight real components $a_i$ and $b_i$
($i=1,2,3,4$) transform in exactly the same way as the
eight real components $a_i$ and $b_i$ that appear in
the (eight) complex spinors
$\Psi^{SR}_{\bf C}$ listed
above, after being acted on by the
corresponding complex Lorentz transformation\footnote{
We assume the $E_i$'s and $F_i$'s are now in the standard representation.}
[Eqn(\ref{transformD})].
We now define an operator $\rho_{II}$ which takes the complex spinor
$\Psi^{SR}_{\bf C}$ in the identification (I)' above
and maps it to the complex spinor $\Psi^{SR}_{\bf C}$
in the identification (II)'. Thus $\rho_{II}$ is defined by
\begin{equation}
\rho_{II}\cdot
\left(
\begin{array}{c}
x_1+{\rm i}y_1 \\
x_2+{\rm i}y_2 \\
x_3+{\rm i}y_3 \\
x_4+{\rm i}y_4
\end{array}
\right)
=
\left(
\begin{array}{c}
-x_2+{\rm i}y_2 \\
x_1-{\rm i}y_1 \\
x_4-{\rm i}y_4 \\
-x_3+{\rm i}y_3
\end{array}
\right),
\end{equation}
for any real variables $x_i$ and $y_i$.
Similarly, we may construct the operators
$\rho_{III}, \rho_{IV}, \dots , \rho_{VIII}$,
whose explicit form we omit for brevity.
If we let
${\cal V}(\Psi^{SR}_{\bf C})$ denote the eight-dimensional
vector space
formed by all {\em real} linear combinations of
complex 4-spinors, then the linear map $\rho_{II}$,
for example, is
an automorphism of ${\cal V}(\Psi^{SR}_{\bf C})$.
In particular, the transformation properties
of the eight real components of $\Psi^{SR}_{\bf C}$
under a Lorentz transformation are identical to the
transformation properties of the transformed spinor
$\rho_{II}(\Psi^{SR}_{\bf C})$ under the same Lorentz transformation.
One can show that the set of eight operators
\begin{equation}
\{1,\rho_{II},\rho_{III},\dots,\rho_{VIII}\}
\end{equation}
generate an eight-dimensional closed algebra over
the real numbers.
The subset $\{1,\rho_{II},\rho_{III},\rho_{IV}\}$, for
example, generates the algebra of quaternions.
One may also consider all commutators of the seven elements
$\rho_{II},\rho_{III},\dots,\rho_{VIII}$. These turn out to
generate a Lie algebra that is isomorphic to
$\mbox{SU(2)}\times\mbox{SU(2)}\times\mbox{U(1)}$.
The $\mbox{SU(2)}\times\mbox{SU(2)}$ part
is a Lorentz symmetry. The U(1) factor is
intriguing.
As we pointed out earlier, we have not established that
the algebra generated by the eight operators
$\{1,\rho_{II},\rho_{III},\dots,\rho_{VIII}\}$ is maximal;
additional independent automorphism operators could exist.
We leave this question for a future investigation.
\section{Discussion}
In this work, we constructed a representation of
the six-dimensional Lorentz group in terms of
$4 \times 4$ generating matrices
defined over the hyperbolic number system, ${\bf D}$.
The transformation properties
of the eight real components of the
corresponding `hyperbolic' 4-spinor under a Lorentz
transformation
was shown to be equivalent to the transformation
properties of the eight real components
of the familiar complex Dirac spinor, after
making an appropriate identification of components.
The non-uniqueness
of this identification led to an automorphism algebra
defined on the vector space of Dirac spinors.
These automorphisms have the property of preserving the
transformation properties of the eight real components
of a Dirac 4-spinor in any given Lorentz frame. Properties
of this algebra were studied, although we were unable to
prove that the algebra studied here was maximal.
It is interesting to note that the hyperbolic representation
of the Lorentz group turns out to be a subgroup of
the (fifteen dimensional) special unitary group SU(4,${\bf D}$).
A simple consequence is that $\Psi_{\bf D}^{\dagger} \Psi_{\bf D}$
is a Lorentz invariant scalar. Moreover, after identifying
$\Psi_{\bf D}$ with $\Psi^{SR}_{\bf C}$, as in equation
(\ref{firstdash}), for example, it becomes manifest that
the six-dimensional complex representation of the Lorentz
group is a subgroup of SU(2,2;${\bf C}$). This group
is also fifteen dimensional, and it is tempting to assume
that SU(4,${\bf D}$) and SU(2,2;${\bf C}$) are isomorphic.
This seems to be supported by the proven correspondence
SU$(2,{\bf D}) \cong \mbox{SU}(1,1;{\bf C})$.
However, from general arguments, we were able to assert that
SU$(n,{\bf D})$ and $\mbox{SL}(n,{\bf R})$
possess isomorphic Lie algebras for $n \geq 2$. But we also know
SL(4,${\bf R}$) $\cong$ SO(3,3;${\bf R}$) \cite{group}, and
so we conclude that the Lie algebra of SU$(4,{\bf D})$
is isomorphic to the Lie algebra of SO(3,3;${\bf R}$).
But this symmetry evidently
differs from the algebra of generators of the conformal
group SU($2,2;{\bf C}$), which is equivalent to the algebra for
SO(2,4;${\bf R}$). Thus SU$(4,{\bf D})$ and SU(2,2;${\bf C}$)
are inequivalent groups.
Thus, from the viewpoint of naturally
embedding the Lorentz symmetry into some larger group,
the hyperbolic and complex representations stand apart.
We leave the physics of SU(4,${\bf D}$) as an intriguing
topic yet to be studied.
\medskip
\begin{large}{\bf Acknowledgment}
\end{large}
I would like to thank the British Council for support
during the early stages of this work.
|
\subsubsection{The Teukolsky equation}\noindent
In the absence of sources, the spin-$2$ perturbed Weyl scalars satisfy the homogeneous Teukolsky equations \cite{Teukolsky:1972my,Teukolsky:1973ha,Chandrasekhar:1985kt} (see \cite{Pound:2021qin} for a review with conventions consistent with those used here), $\mathcal{O}\Psi_0 = 0 = \mathcal{O}' \Psi_4 \equiv \zeta^{-4} \mathcal{O} \zeta^{4} \Psi_4$. The Teukolsky equations admit a separation of variables: working with the Kinnersley tetrad and inserting the ansatz $\zeta^4 \Psi_4 = R_{-2}(r) S_{-2}(\theta) e^{-i \omega t + i m \phi}$ yields
\begin{equation}
\mathcal{O}' \Psi_4 = \zeta^{-4} \left[\Delta \mathcal{D}^\dagger_{-1} \mathcal{D} + \mathcal{L}_{-1} \mathcal{L}_2^\dagger - 6 i \omega \bar{\zeta} \right] (\zeta^4 \Psi_4) = 0 , \label{eq:Teuk}
\end{equation}
where the directional derivatives are $\mathcal{D} \equiv l_+^\mu \partial_\mu$, $\mathcal{D}^\dagger \equiv l_-^\mu \partial_\mu$, $\mathcal{L}^\dagger = m_+^\mu \partial_\mu$, $\mathcal{L} = m_-^\mu \partial_\mu$ with $\mathcal{D}_n = \mathcal{D} + n (\partial_r \Delta) / \Delta$ and $\mathcal{L}_n = \mathcal{L} + n \cot \theta$. The functions $R_{-2}(r)$ and $S_{-2}(\theta)$ therefore satisfy a set of decoupled ordinary differential equations. A similar result also holds for $\Psi_0$.
There is substantial {\it gauge freedom} in perturbation theory, linked to the freedom
to make an infinitesimal coordinate transformation $x^\mu \rightarrow x^\mu + \epsilon \, \xi^\mu$, where $\epsilon=1$ is an order-counting parameter. Under such a transformation, a tensor field $\mathrm{T} = T + \epsilon \, \delta T$ changes at perturbative order as
$
\mathrm{T} \rightarrow T + \epsilon \left( \delta T - \pounds_{\xi} T \right) + O(\epsilon^2)
$,
where $\pounds_{\xi}$ denotes the Lie derivative along the gauge vector $\xi^{\mu}$. Applying this rule to the perturbed metric $\mathrm{g}_{\mu \nu} = g_{\mu \nu} + \epsilon \, h_{\mu \nu}$ yields a transformation law for the metric perturbation $h_{\mu \nu}$ under a change of gauge, namely, $h_{\mu \nu} \rightarrow h_{\mu \nu} - 2 \xi_{(\mu ; \nu)}$, where a semi-colon denotes the covariant derivative and parentheses indicate symmetrization over the indices.
On a vacuum black hole background ($R_{\mu \nu} = 0$), the perturbed Ricci tensor $\delta R_{\mu \nu}$ is gauge-invariant at linear order (as $\pounds_{\xi} R_{\mu \nu} = 0$). Consequently, any pure-gauge metric perturbation $h_{\mu \nu} = - 2 \xi_{(\mu ; \nu)}$ satisfies the vacuum field equations; furthermore if the vector satisfies $\Box \xi^\mu = 0$ then $h_{\mu \nu}$ is in Lorenz gauge and the metric perturbation satisfies Eq.~\eqref{Lorenz-field-equation} with $T_{\mu \nu} = 0$.
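As a simple illustration of the last statement (ours; restricted to flat spacetime, where covariant derivatives reduce to partial derivatives and the Lorenz condition is taken to be the vanishing divergence of the trace-reversed perturbation), one can verify symbolically that a pure-gauge perturbation generated by a wave-like vector with $\Box \xi^\mu = 0$ satisfies the Lorenz condition:
\begin{verbatim}
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)                   # Minkowski metric (mostly plus)

# Gauge covector xi_mu depending only on the null phase t - x, so Box xi = 0.
phase = t - x
xi = [sp.sin(phase), 2 * sp.cos(phase), sp.sin(phase), sp.Integer(0)]

# Pure-gauge perturbation h_{mu nu} = -2 d_(mu xi_nu) and its trace reverse.
h = sp.Matrix(4, 4, lambda m, n: -(sp.diff(xi[m], coords[n]) + sp.diff(xi[n], coords[m])))
trace = sum(eta[m, m] * h[m, m] for m in range(4))           # eta is diagonal here
hbar = h - sp.Rational(1, 2) * eta * trace

# Lorenz condition: d^nu hbar_{mu nu} = 0 for every mu.
div = [sp.simplify(sum(eta[n, n] * sp.diff(hbar[m, n], coords[n]) for n in range(4)))
       for m in range(4)]
assert div == [0, 0, 0, 0]
\end{verbatim}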
In principle, given a vacuum metric perturbation $h_{\mu \nu}$, one may
apply a gauge transformation to transform it to Lorenz gauge, such that
\begin{equation}
h^{L}_{\mu \nu} \equiv h_{\mu \nu} - 2 \xi_{(\mu ; \nu)} \label{eq:hLorenz}
\end{equation}
satisfies Eq.~(\ref{eq:Lorenz-gauge}). It follows that the gauge vector $\xi^\mu$ must satisfy a sourced wave equation,
\begin{equation}
\Box \xi^\mu = \nabla_\nu \widehat{h}^{\mu \nu}. \label{eq:gauge-source}
\end{equation}
\textit{Reconstruction of Lorenz gauge solutions from scalar potentials.---}
Our main result is that one can construct solutions to the Lorenz gauge equations from separable solutions of the Teukolsky equation. These solutions are divided into scalar (spin-$0$), vector (spin-$1$), and tensor (spin-$2$) type, alongside ``completion'' pieces \cite{Merlin:2016boc,vanDeMeent:2017oet} associated in the Kerr case with infinitesimal changes in the mass and angular momentum of the black hole.
In the absence of sources, the spin-$0$ and spin-$1$ perturbations are pure-gauge modes. In the presence of sources, we anticipate that solutions of all types ($s=0$, $1$, $2$) will be required to construct a physical solution that is free from gauge discontinuities, as is found to be the case on Schwarzschild spacetime \cite{Berndtson:2007gsc}.
\textit{Spin-2 solutions.---}
To obtain Lorenz gauge solutions derived from spin-$2$ scalars, we start with the ingoing radiation-gauge solution of Chrzanowski (Ref.~\cite{Chrzanowski:1975wv}, Table I) and seek a transformation to Lorenz gauge. Chrzanowski's solution can be expressed in covariant form as \cite{Aksteiner:2016pjt}
\begin{equation}
h_{\mu \nu} = -\frac12 \nabla_\beta \left[ \zeta^{4} \nabla_\alpha \left( \zeta^{-4} \tensor{\mathcal{H}}{_{(\mu} ^\alpha _{\nu)} ^\beta} \right) \right] \label{eq:hrad}
\end{equation}
where
\begin{equation}
\mathcal{H}^{\mu \alpha \nu \beta} = 4 \psi \, l^{[\mu} m^{\alpha]} l^{[\nu} m^{\beta]},
\end{equation}
and where $\psi$ is a spin-weight $-2$ potential. In the absence of sources it satisfies a homogeneous $s=-2$ Teukolsky equation, $\mathcal{O} \psi = 0$.
The metric perturbation in Eq.~\eqref{eq:hrad} is manifestly trace-free ($h = 0$). The inclusion of $\zeta^4$ is required in order to satisfy the linearised Einstein equation but violates the Lorenz gauge condition; without it the metric perturbation would automatically satisfy the Lorenz gauge condition but not the linearised Einstein equation \cite{Stewart:1978tm}. Finally, in order to obtain a real metric perturbation that generates a physical Weyl tensor one typically adds the complex conjugate of this metric perturbation; for now we omit the complex conjugate and will return to it later.
We now seek to transform $h_{\mu \nu}$ to Lorenz gauge by solving Eq.~\eqref{eq:gauge-source}, while preserving the trace-free condition. That is, we seek a gauge vector $\xi^\mu$ satisfying
\begin{equation}
\Box \xi^\mu = -j^\mu \equiv \nabla_{\nu} h^{\mu \nu} , \quad \quad \nabla_\mu \xi^\mu = 0 .
\end{equation}
This we recognise as a well-formed electromagnetic field equation in (vector) Lorenz gauge. The effective four-current $j^\mu$ is divergence-free ($\nabla_\mu j^\mu = 0$) by virtue of the fact that $h^{\mu \nu}$ in Eq.~\eqref{eq:hrad} satisfies $\nabla_{\mu} \nabla_{\nu} h^{\mu \nu} = 0$.
The above becomes clearer when written in terms of forms:
\begin{equation}
\delta \mathrm{d} \xi = j , \quad \quad \delta \xi = 0, \quad \quad \delta j = 0 . \label{eq:forms1}
\end{equation}
Here $\mathrm{d}$ is the exterior derivative, $\delta = {}^\star \mathrm{d} {}^\star$ is the coderivative, ${}^\star$ is the Hodge dual operation, $\Box \xi = \mathrm{d} \delta \xi - \delta \mathrm{d} \xi$ on a Ricci-flat spacetime, and a key identity is $\mathrm{d} \mathrm{d} = 0 = \delta \delta$.
By Poincar\'e's lemma, a divergence-free vector is locally the coderivative of a (non-unique) two-form. A short calculation establishes that $j = \delta J$, that is, $j^\mu = \nabla_\nu J^{\mu \nu}$ with the two-form
\begin{equation}
J^{\mu \nu} = \nabla_\beta\left[ U_\alpha \mathcal{H}^{\beta\alpha\mu\nu}\right] = \frac{ \sqrt{2}}{\Sigma} l^{[\mu} m^{\nu]} \left[\mathcal{L}_2^\dagger - i a \sin \theta \mathcal{D} \right] \psi ,
\end{equation}
where $U_a = - \nabla_a \ln \zeta$.
Equation \eqref{eq:forms1} can be written as $\delta (\mathrm{d} \xi - J + {}^\star \mathrm{d} \varsigma ) = 0$, where $\varsigma$ is an arbitrary vector field (i.e.~a gauge vector of the third kind \cite{Cohen:1974cm}). The recent work of Green {\it et al.}~\cite{Green-talk-2021,Green-paper} suggests the ansatz
\begin{equation}
\xi = \zeta^2 \delta H - \mathrm{d} \chi, \label{eq:xi-ansatz}
\end{equation}
where $H$ is a two-form and $\chi$ is a scalar; and we choose the gauge vector of the third kind to be $\varsigma = - i \zeta^2 \delta H$ so that the field equation becomes \cite{Green-Toomani}
\begin{equation}
\delta \left( (1 - i {}^\star) \mathrm{d} \zeta^2 \delta H - J \right) = 0 .
\end{equation}
The operator $\mathrm{d} \zeta^2 \delta$ generates decoupled equations for the three anti-self-dual degrees of freedom in the two-form $H$ \cite{Green-talk-2021}; and the operator $(1 - i {}^\star)$ annihilates the self-dual components of the equation \cite{Mustafa:1987hertz,Green-talk-2021}. The ansatz
\begin{equation}
H^{\mu \nu} = \frac{\sqrt{2}}{\zeta} l^{[\mu} m^{\nu]} \alpha \label{eq:Hansatz}
\end{equation}
then leads to a single decoupled second-order equation,
\begin{equation}
\left( \Delta \mathcal{D}^\dagger \zeta^2 \mathcal{D} + \mathcal{L} \zeta^2 \mathcal{L}_1^\dagger \right) \alpha = - \zeta \left(\mathcal{L}_2^\dagger - i a \sin \theta \mathcal{D} \right) \psi . \label{eq:Heqn}
\end{equation}
Assuming harmonic time dependence $e^{- i \omega t}$ for $\psi$, and by application of the vacuum Teukolsky equation \eqref{eq:Teuk}, we find that Eq.~\eqref{eq:Heqn} has an elementary solution,
\begin{equation}
\alpha = - \frac{1}{6 i \omega \zeta} \mathcal{D} \mathcal{L}_2^\dagger \psi . \label{eq:Hsol}
\end{equation}
To obtain the gauge vector in Eq.~\eqref{eq:xi-ansatz} we must also solve $\delta \mathrm{d} \chi = \Box \chi = ( \nabla_{\mu} \zeta^2 ) \nabla_{\nu} H^{\mu \nu}$, that is,
\begin{equation}
\Box \chi = \frac{1}{\bar{\zeta}} \left( \mathcal{L}_1^\dagger - i a \sin \theta \mathcal{D} \right) \alpha .
\end{equation}
This also has an elementary solution,
\begin{equation}
\chi = \frac{1}{48 \omega^2} \mathcal{D} \mathcal{D} \mathcal{L}_1^\dagger \mathcal{L}_2^\dagger \psi . \label{eq:chisol}
\end{equation}
In summary, the gauge vector that transforms the radiation-gauge solution \eqref{eq:hrad} to Lorenz gauge via \eqref{eq:hLorenz} is
\begin{equation}
\xi^\mu = \zeta^2 \nabla_{\nu} H^{\mu \nu} - g^{\mu \nu} \nabla_\nu \chi , \label{eq:gauge-vector-sol}
\end{equation}
where the key ingredients are in Eqs.~\eqref{eq:Hansatz}, \eqref{eq:Hsol} and \eqref{eq:chisol}.
\textit{Reformulation in terms of GHP calculus.---}
We now rewrite the previous results using the Geroch-Held-Penrose (GHP) formalism \cite{Geroch:1973am} (see Sec.~4.1.1 of Ref.~\cite{Pound:2021qin} for a review). This allows us to: reformulate the results in a compact and coordinate-independent way; eliminate the need for a mode ansatz; and extend the results to the full Kerr-NUT class of Petrov type-D spacetimes. It also allows us to obtain a similar result for the gauge transformation from {\it outgoing} radiation gauge by applying the GHP prime operator along with the identifications $\psi' = \psi^{\rm ORG}$, $\chi' = \chi^{\rm ORG}$ and $H'_{\mu\nu} = H^{\rm ORG}_{\mu\nu}$.
Translating the key ingredients in the gauge transformation to GHP expressions and introducing the Lie derivative, $\pounds_{T}$, along the time-translation Killing vector, $T^\mu$, we get
\begin{subequations}
\begin{align}
\label{eq:chi-GHP}
\pounds_{T}^{2} \chi &= -\frac{1}{24} \hbox{\ec\char'360}^2 \bar{\zeta^2} \hbox{\ec\char'336}^2 \psi, \\
\label{eq:H-GHP}
\pounds_{T} H_{\mu \nu} &= l_{[\mu} m_{\nu]} \frac{1}{3 \zeta^2} \hbox{\ec\char'360} \bar{\zeta} \hbox{\ec\char'336} \psi.
\end{align}
\end{subequations}
\textit{Metric perturbation from Weyl scalars.---} We now seek to express the Lorenz-gauge metric perturbation $h_{\mu\nu}^L$ in terms of the Weyl tensor that it generates. In particular, we consider projections $\Psi_0 = C_{lmlm}$ and $\Psi_4 = C_{n{\bar{m}} n {\bar{m}}}$ which are invariant under gauge and infinitesimal tetrad transformations. For the metric perturbation \eqref{eq:hrad} or its conjugate, prime, or prime conjugate, one finds after imposing the Teukolsky equation that, respectively, (see e.g.~\cite{Pound:2021qin})
\begin{subequations}
\begin{align}
\Psi_0 &=
\frac14 \{0,\, \hbox{\ec\char'336}^4 \bar{\psi},\, 3 M \pounds_{T} \zeta^{-4} \psi',\, \hbox{\ec\char'360}^4 \bar{\psi}'\}, \\
\Psi_4 &=
\frac14 \{-3 M \pounds_{T} \zeta^{-4} \psi, \, \hbox{\ec\char'360}'^4 \bar{\psi},\, 0,\, \hbox{\ec\char'336}'^4 \bar{\psi}'\}.
\end{align}
\end{subequations}
If we work with a metric perturbation $h_{\mu \nu} + \bar{h}_{\mu \nu}$ or $h'_{\mu \nu} + \bar{h}'_{\mu \nu}$ alone, then we recover the standard radiation gauge relations between the Hertz potentials and the Weyl scalars \cite{Pound:2021qin}. Alternatively, we can choose the ``antisymmetric'' combination
$h^-_{\mu \nu} = \frac12[h'_{\mu \nu} + \bar{h}'_{\mu \nu} - (h_{\mu \nu} + \bar{h}_{\mu \nu})]$. After imposing the Teukolsky-Starobinsky identities, this leads to the remarkably simple relations \cite{Aksteiner:2016pjt,Aksteiner:2016mol}
\begin{subequations}
\begin{align}
\Psi_0 = \frac{3M}{4} \pounds_{T} \zeta^{-4} \psi', \quad
\Psi_4 = \frac{3M}{4} \pounds_{T} \zeta^{-4} \psi.
\end{align}
\end{subequations}
Note in particular that $\psi$ and $\psi'$ are {\it not} the same as the radiation gauge potentials, and similarly the $h_{\mu\nu}$ and $h'_{\mu\nu}$ appearing in $h^-_{\mu\nu}$ are also different to the radiation gauge metric perturbations.
We can thus reinterpret this as
\begin{equation}
M \pounds_{T} h^-_{\mu \nu} = - \frac{1}{3} \nabla_\beta [\zeta^4 \nabla_\alpha \mathcal{C}_{(\mu}{}^\alpha{}_{\nu)}{}^\beta] + \text{c.c.}
\end{equation}
where $\mathcal{C}^{\mu\alpha\nu\beta} = 4 (\Psi_0 \, n^{[\mu} {\bar{m}}^{\alpha]} n^{[\nu} {\bar{m}}^{\beta]} - \Psi_4 \, l^{[\mu} m^{\alpha]} l^{[\nu} m^{\beta]})$
is the spin-2 part of the self-dual Weyl tensor with the sign of $\Psi_4$ flipped.
Since $\Psi_0$ and $\Psi_4$ are gauge invariant, these relations also hold after transforming to Lorenz gauge using \eqref{eq:gauge-vector-sol} (or its prime, conjugate, or prime conjugate).
In all three cases, imposing the Teukolsky-Starobinsky identities and the Teukolsky equation reduces four components of the Lorenz-gauge metric perturbation to second-order operators acting on $\Psi_0$ and $\Psi_4$,
\begin{subequations}
\label{eq:h-components-Weyl}
\begin{align}
\label{eq:hll-LG}
\pounds_{T}^{2} h^L_{ll} &= -\frac{1}{3} \big[\bar{\zeta}^{-2} \hbox{\ec\char'360}^2( \bar{\zeta}^4 \bar{\Psi}_0) + \zeta^{-2} \hbox{\ec\char'360}'^2( \zeta^4 \Psi_0)\big], \\
\label{eq:hnn-LG}
\pounds_{T}^{2} h^L_{nn} &= -\frac{1}{3} \big[\bar{\zeta}^{-2} \hbox{\ec\char'360}'^2( \bar{\zeta}^4 \bar{\Psi}_4) + \zeta^{-2} \hbox{\ec\char'360}^2( \zeta^4 \Psi_4)\big], \\
\label{eq:hmm-LG}
\pounds_{T}^{2} h^L_{mm} &= -\frac{1}{3} [\bar{\zeta}^{-2} \hbox{\ec\char'336}^2( \bar{\zeta}^4 \bar{\Psi}_4) + \zeta^{-2} \hbox{\ec\char'336}'^2( \zeta^4 \Psi_0)], \\
\label{eq:hmbmb-LG}
\pounds_{T}^{2} h^L_{{\bar{m}}\mb} &= -\frac{1}{3} [\bar{\zeta}^{-2} \hbox{\ec\char'336}'^2( \bar{\zeta}^4 \bar{\Psi}_0) + \zeta^{-2} \hbox{\ec\char'336}^2( \zeta^4 \Psi_4)].
\end{align}
A fifth component is obtained from the fact that this metric perturbation is traceless,
\begin{equation}
\label{eq:h-LG}
h = 2(h^L_{m{\bar{m}}}-h^L_{ln}) = 0.
\end{equation}
\end{subequations}
No such simplification appears possible for the remaining five components, but they can be written in terms of a sixth order operator acting on $\Psi_0$ and $\Psi_4$.
\textit{Spin-$1$ solutions.---}
A set of spin-$1$ solutions satisfying $\Box \xi^\mu = 0$ and $\nabla_\mu \xi^\mu = 0$ was obtained in Refs.~\cite{Dolan:2019hcw,Wardell:2020naz} (see also Refs.~\cite{Frolov:2018ezx,Lunin:2017drx}). They take the form
\begin{align}
\xi^\mu_{(s=1)} = \nabla_\nu \left( \zeta \mathcal{H}_{(s=1)}^{\nu \mu} \right) + \text{c.c.}, \label{eq:spin1-xi}
\end{align}
where
\begin{align}
\mathcal{H}^{\mu \nu}_{(s=1)} = 2 \pounds_{T}^{-1} \left[ \phi_0 \bar{m}^{[\mu} n^{\nu]} - \phi_2 l^{[\mu} m^{\nu]} \right] .
\end{align}
Here, $\phi_0$ and $\phi_2$ are Maxwell scalars that satisfy the Teukolsky equations for $s=+1$ and $s=-1$, respectively (i.e. $\mathcal{O}\phi_0 = 0 = \mathcal{O}' \phi_2$), and which are linked by the spin-1 Teukolsky-Starobinsky identities. A traceless spin-1 Lorenz-gauge metric perturbation can be constructed from $\xi^\mu_{(s=1)}$ in the now-familiar way, $h^{(s=1)}_{\mu \nu} = -2 \xi^{(s=1)}_{(\mu ; \nu)}$.
\textit{Spin-$0$ solutions.---}
So far, we have only considered trace-free solutions, $h=0$. The trace of the metric perturbation must satisfy
\begin{equation}
\Box h = 0
\end{equation}
in the homogeneous case by virtue of the contraction of Eq.~\eqref{Lorenz-field-equation}. It is natural to ask: what (non-unique) homogeneous Lorenz-gauge metric perturbation generates a trace $h$?
A suitable metric perturbation is pure-gauge, i.e.,
\begin{equation}
h^{(s=0)}_{\alpha \beta} = -2\xi^{(s=0)}_{(\alpha;\beta)} ,
\end{equation}
and is generated by a gauge vector that satisfies
\begin{equation}
\nabla_\alpha \xi^\alpha_{(s=0)} = - \frac12 h, \quad \Box \xi^\alpha_{(s=0)} = 0 . \label{eq:xi0-properties}
\end{equation}
A vector with precisely these properties is
\begin{equation}
\xi^\alpha_{(s=0)} = \frac12 \pounds_{T}^{-1} f^{\alpha \beta} h_{;\beta} + 2 \kappa^{;\alpha}, \label{eq:xi-scalar}
\end{equation}
where
\begin{equation}
f_{\alpha \beta} = (\zeta + \bar{\zeta}) n_{[\alpha} l_{\beta]} - (\zeta - \bar{\zeta}) {\bar{m}}_{[\alpha} m_{\beta]},
\end{equation}
is the conformal Killing-Yano tensor (we follow here the definition of \cite{Aksteiner:2014zyp}, which differs from that of Ref.~\cite{Frolov:2017kze} by an overall sign), and where $\kappa$ is a scalar field satisfying
\begin{equation}
\Box \kappa = \frac{1}{2} h .
\end{equation}
It is straightforward to show that the requirements \eqref{eq:xi0-properties} are satisfied by using the properties of the conformal Killing-Yano tensor, namely
\begin{equation}
f_{\alpha(\beta;\gamma)} = g_{\beta\gamma} T_\alpha - g_{\alpha (\beta} T_{\gamma)}, \;
f_{\alpha\beta} = f_{[\alpha\beta]},
\; T^\alpha = \tfrac13 f^{\alpha \beta}{}_{;\beta}.
\end{equation}
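For the divergence condition, for instance, antisymmetry implies $f^{\alpha \beta} h_{;\beta \alpha} = 0$ and $f^{\alpha \beta}{}_{;\alpha} = - 3 T^{\beta}$, so that
\begin{equation}
\nabla_\alpha \xi^\alpha_{(s=0)} = \tfrac12 \pounds_{T}^{-1} \big( f^{\alpha \beta}{}_{;\alpha} h_{;\beta} \big) + 2 \Box \kappa = - \tfrac32 \pounds_{T}^{-1} \pounds_{T} h + h = - \tfrac12 h ,
\end{equation}
where $T^{\beta} h_{;\beta} = \pounds_{T} h$ has been used.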
In the Schwarzschild case, the two spin-$0$ degrees of freedom, $h$ and $\kappa$, map on to those identified by Berndtson \cite{Berndtson:2007gsc} (see also Khavkine \cite{Khavkine:2020ksv}).
\textit{Completion pieces on Kerr spacetime.---}
In addition to spin-$s$ contributions, the metric perturbation may also contain ``completion'' pieces \cite{Merlin:2016boc,vanDeMeent:2017oet,Aksteiner:2018vze} associated in the Kerr case with infinitesimal changes in the mass and angular momentum of the black hole. Completion pieces are constructed from varying the mass $M$ and specific angular momentum $a=J/M$ parameters, viz.,
\begin{equation}
h^{(\partial M)}_{\mu \nu} \equiv \left. \frac{\partial \mathrm{g}_{\mu \nu}}{\partial M} \right|_{a} , \quad h^{(\partial a)}_{\mu \nu} \equiv \left. \frac{\partial \mathrm{g}_{\mu \nu}}{\partial a} \right|_{M} ,
\end{equation}
where $\mathrm{g}_{\mu \nu}$ is the Kerr metric. Moreover, the conformal mode $h_{\mu \nu}^{(2g)} = 2 \, \mathrm{g}_{\mu \nu}$ automatically satisfies the linearised vacuum field equations. These three pieces are linearly related by the equation
\begin{equation}
h_{\mu \nu}^{(2g)} = M h^{(\partial M)}_{\mu \nu} + a \, h^{(\partial a)}_{\mu \nu} + 2 N_{(\mu ; \nu)} , \label{eq:completion-relation}
\end{equation}
with the gauge vector $N^\mu \partial_\mu = t \,\partial_t + r \partial_r$.
Unlike the conformal mode, the perturbations $h^{(\partial M)}_{\mu \nu}$ and $h^{(\partial a)}_{\mu \nu}$ (for $a\neq0$) are {\it not} in Lorenz gauge. To shift to Lorenz gauge, we apply a gauge transformation,
\begin{equation}
h_{\mu \nu}^{L(\partial M)} = h_{\mu \nu}^{(\partial M)} - 2 Y_{(\mu ; \nu)} .
\end{equation}
As $h^{(\partial M)}_{\mu \nu}$ is traceless, it follows that $\Box Y_\mu = \nabla^\nu h_{\mu \nu}^{(\partial M)} = 2 \delta_{\mu}^r / \Delta$. Since the right-hand side is a gradient, the gauge vector is also a gradient, $Y_{\mu} = \nabla_{\mu} y$, and using $\Box ( \nabla_\mu y ) = \nabla_\mu ( \Box y )$, the potential $y$ must satisfy
\begin{equation}
\Box y = \int \frac{2}{\Delta} dr = \left(\frac{2}{r_+ - r_-} \right) \ln \left(\frac{r - r_+}{r - r_-}\right) + \text{const}.
\end{equation}
This equation can be solved by separation of variables. The Lorenz-gauge mode $h_{\mu \nu}^{L(\partial a)}$ follows via Eq.~(\ref{eq:completion-relation}).
The mass and angular momentum content of the $h^{(\partial M)}_{\mu \nu}$ and $h^{(\partial a)}_{\mu \nu}$ modes is assessed by evaluating the conserved charges associated with the background Killing vectors (see Sec.~IIE in Ref.~\cite{Dolan:2012jg}, and Ref.~\cite{Abbott:1981ff}); we find $ Q_{(t)} = 1, Q_{(\phi)} = - a$ and $Q_{(t)} = 0, Q_{(\phi)} = - M$, respectively.
\textit{Discussion.---}
We have obtained a set of Lorenz-gauge metric perturbations which satisfy the vacuum field equations [Eq.~(\ref{Lorenz-field-equation}) with $T_{\mu \nu} = 0$]. In the frequency domain, the spin-$0$, spin-$1$ and spin-$2$ metric perturbations can be expressed in terms of separable modes, that is, radial and angular functions ${}_sR_{\ell m \omega}(r)$ and ${}_sS_{\ell m \omega }(\theta)$ satisfying the vacuum Teukolsky equations for $s=0$, $s=\pm 1$ and $s=\pm 2$. It is notable that, although the construction of the spin-2 modes starts with the radiation-gauge potentials $\psi$, the Lorenz-gauge metric components in Eq.~\eqref{eq:h-components-Weyl} can be written in terms of Weyl scalars only, without reference to $\psi$. We also note however that it is likely that the zero frequency modes of the spin-$2$ case will need to be treated separately, as has been done for the spin-$1$ case \cite{Wardell:2020naz}.
Several extensions of this work suggest themselves. First, extending the Lorenz-gauge formalism to include source terms ($T_{\mu \nu} \neq 0$). Second, constructing solutions for GSF particle-inspiral scenarios by demanding global regularity (in vacuum regions) on a metric perturbation constructed from a sum over a complete set of vacuum modes. Third, the application of these Lorenz-gauge solutions in second-order GSF applications \cite{Pound:2019lzj,Warburton:2021kwk,Wardell:2021fyy}, ultimately leading to the production of waveforms for extreme mass ratio systems with a spinning primary (larger) black hole.
\begin{acknowledgments}
\textit{Acknowledgements.---}
With thanks to Leanne Durkan, Vahid Toomani, Stephen Green, Stefan Hollands, Adam Pound, Leor Barack, Adrian Ottewill, Amos Ori, Saul Teukolsky, Bernard Whiting and Lars Andersson for discussions. Many of the calculations in this work were enabled by the \textsc{xAct} \cite{xTensor,xTensorOnline} tensor algebra package for \textsc{Mathematica}.
S.D.~acknowledges financial support from the Science and Technology Facilities Council (STFC) under Grant No.~ST/P000800/1, and from the European Union's Horizon 2020 research and innovation programme under the H2020-MSCA-RISE-2017 Grant No.~FunFiCO-777740.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Supplemental Material}
\section*{Ionic diffusion and copper spin-lattice relaxation}
Information on the diffusion of ions and pretransitional increase of copper defect numbers is mainly obtained from line shape and spin-lattice relaxation measurements on copper. The total relaxation rate is a sum of the phonon contribution and the two defect parts:
\begin{equation}
\label{T1}
1/T_{1} = \left.1/T_{1}\right|_{ph}+\left.1/T_{1}\right|_{Cu-def}+\left.1/T_{1}\right|_{Hg-def}
\end{equation}
Quadrupolar spin 3/2 copper nuclei couple to electric field gradients due to phonons and defects; the phonon part is well approximated by $\left.1/T_{1}\right|_{ph} = C T^{2}$ above the Debye temperature (with $C$ a constant) \cite{abragam}. In the simplest approximation of point charges and exponential autocorrelation, the relaxation rate due to one kind of defect is given by Eq. (1) in the paper. Hopping times and defect numbers are temperature-dependent and in principle Arrhenius-like, except for the copper defect number close to the order-disorder transition. A linear scale plot of spin-lattice relaxation vs. temperature below $T_{c}$ (Fig. \ref{T1_lin}) clearly demonstrates that two distinct defect contributions are necessary to reproduce the temperature dependence of the relaxation. We must therefore fit a superposition of the form of (\ref{T1}), with many adjustable parameters. To constrain the fit, we used several additional results. We easily determined the number of copper defects using measured integrals of the NMR line, as described in the paper. An additional field-dependent relaxation measurement was performed at 280 K to increase the reliability of the remaining fitting parameters---two activation energies for mercury (defect formation and jump) and one for copper (jump), and three multiplicative constants; field and temperature dependencies were fitted simultaneously. Attempt frequencies were constrained to within one order of magnitude using Raman spectroscopy data (where anomalous low-frequency modes are identified with the attempt frequencies \cite{raman}), and agree with typical values for similar compounds (e.g. CuI \cite{CuI}).
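For orientation, a minimal sketch of such a composite fit is given below. Eq. (1) of the paper is not reproduced in this Supplement, so the sketch assumes a BPP-type spectral density $\tau_{h}/(1+\omega_{0}^{2}\tau_{h}^{2})$, Arrhenius hopping times, and an Arrhenius-activated defect number; all function and parameter names are illustrative only.
\begin{verbatim}
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def tau_hop(temp, tau0, e_jump):
    # Arrhenius hopping (residence) time of a mobile defect
    return tau0 * np.exp(e_jump / (K_B * temp))

def defect_rate(temp, omega0, amp, tau0, e_jump, e_form=0.0):
    # BPP-type contribution of one defect species to 1/T1;
    # amp absorbs the coupling constant, and the defect number is
    # taken to be activated with formation energy e_form
    tau = tau_hop(temp, tau0, e_jump)
    n_def = np.exp(-e_form / (K_B * temp))
    return amp * n_def * tau / (1.0 + (omega0 * tau) ** 2)

def total_rate(temp, omega0, c_ph, cu_params, hg_params):
    # 1/T1 = phonon (Raman, ~T^2) + Cu-defect + Hg-defect terms
    return (c_ph * temp ** 2
            + defect_rate(temp, omega0, *cu_params)
            + defect_rate(temp, omega0, *hg_params))
\end{verbatim}
The field dependence enters only through $\omega_{0}$, so data sets taken at different fields can share the activation energies in a simultaneous fit, as was done for the actual data.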
The activation energies resulting from the fit are given in the paper. The approximation of independent defects is most probably adequate for temperatures not very close to $T_{c}$, but the reliability of our fitting model can be questioned near $T_{c}$, as we suppose that only the number of copper defects grows dramatically. However, the line shape measurements indicate that indeed copper defects are much more prevalent than mercury---the intensity of the $^{63}$Cu line drops sharply, while $^{199}$Hg shows no appreciable change (within the relatively large error margin). Above $T_{c}$ the relaxation rate is significantly higher than in the low temperature phase, due to the large number of fast mercury ions. If we assume that the relaxation model of Eq. (1) in the paper is at least approximately valid above $T_{c}$, clearly the contribution of mercury ion diffusion will dominate the relaxation rate (as we know from the absence of motional narrowing in Cu that copper motion is much slower, and $\omega_{0} \tau_{h} \gg 1$). We can thus use mercury hopping times obtained directly from NMR line shapes to make a comparison with the $^{63}$Cu relaxation. In this limit $1/^{63}T_{1}$ is roughly proportional to $n/\tau_{h} \sim \sigma_{0}$ (with $n$ the number of mobile ions, and $\sigma_{0}$ the conductivity as defined in Eq. (2) in the paper). When $1/^{63}T_{1}$, $\sigma_0$ and $\sigma_{dc}$ are plotted on top of each other, we see that the relaxation rate follows the conductivities within the error margin (Fig. \ref{T1_lin} inset), confirming our qualitative analysis.
\begin{figure}[h]
\includegraphics[width=90mm]{T1_lin.eps}
\caption{\label{T1_lin} $^{63}$Cu spin-lattice relaxation rate below $T_{c}$, on a linear scale. The Raman phonon process and the two defect diffusion contributions are plotted separately. The inset shows the relaxation rate above $T_{c}$ (circles), which is seen to roughly fall between $\sigma_{0}$ (squares) and $\sigma_{dc}$ (dotted line).}
\end{figure}
\section*{Stimulated echo measurements on mercury}
Motional narrowing enables us to make selective NMR measurements on moving mercury ions. The spin-spin relaxation time of stationary ions, $T_{2,trap}$, is of the order of $2 \pi/\Delta \omega_{0} \sim 10$ $\mu$s, while the fast ions have some ten times longer relaxation times (Fig. \ref{stimecho}a). If we use a stimulated spin echo sequence\cite{hahn} (Fig. \ref{stimecho}b) we can exploit this difference. The first two pulses excite only spins which are moving at the moment of excitation (if $T_{2,trap} \ll \delta$ and $\delta \sim T_{2,fast}$) and thus 'tag' them magnetically. After a diffusion time $\Delta$ the third pulse is applied, causing a stimulated echo.
\begin{figure}[h]
\includegraphics[width=77mm]{SE_seq.eps}
\caption{\label{stimecho} a) Spin-spin relaxation for mercury at 380 K, obtained using a conventional spin echo sequence. b) The stimulated spin echo (SSE) sequence. Radio-frequency pulses and spin echo signal are represented schematically.}
\end{figure}
However, tagged spins which have been trapped during the time $\Delta$ and are stationary at the moment of the application of the third pulse, have a shorter $T_{2} \sim T_{2,trap}$ and do not contribute to the echo signal. Thus the echo amplitude should be anomalously small for diffusion times comparable to and larger than the dynamic correlation time $\tau_{corr}$. The size of this 'dip' in the diffusion-time dependence provides an estimate of the absolute number of correlated ions. Such behavior is observed in simulation as well---even the same functional dependence (a stretched exponential) can be fitted to experimental and simulation results. It is possible to obtain a good agreement between experiment and simulation in both correlation time and fraction of correlated ions (for a vacancy fraction of 0.4 and effective copper diffusion coefficient $D^{*} = 0.0008$), but the simplifying assumption of single copper and mercury jump probabilities makes the stretching parameter of the simulation curve somewhat closer to 1. This could be quickly amended by introducing a jump probability distribution 'by hand', but was deemed physically untransparent. The agreement between simulation and experiment is already quite impressive in the simple version, and more sophisticated models are needed to account for the finer effects. In the absence of trapping, the echo decay should be a simple exponential in $\Delta$, with the decay time equal to the spin-lattice relaxation time $T_{1}$. The spin-spin relaxation is single-exponential (Fig. \ref{stimecho}a), the line shape does not change noticeably for any $\Delta$, and $^{199}$Hg is a spin 1/2 nucleus. Hence the deviations from an exponential decay in the SSE experiment can only be due to dynamical trapping.
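The precise fit function used for the echo decay is not reproduced here; as a hedged illustration, one possible parameterisation combines the $T_{1}$ decay of the stored magnetisation with a trapping term of stretched-exponential form, in which a fraction of the tagged (mobile) ions becomes trapped on the time scale $\tau_{corr}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def sse_amplitude(delta, a0, t1, f_trap, tau_corr, beta):
    # Stimulated-echo amplitude vs diffusion time `delta`:
    # T1 decay of the stored magnetisation, multiplied by a trapping
    # factor in which a fraction f_trap of the tagged ions is lost
    # on a stretched-exponential time scale tau_corr
    trapping = 1.0 - f_trap * (1.0 - np.exp(-(delta / tau_corr) ** beta))
    return a0 * np.exp(-delta / t1) * trapping

# example fit against measured (delta, amp) arrays; the initial
# guesses below are placeholders, not the values used in the paper:
# popt, pcov = curve_fit(sse_amplitude, delta, amp,
#                        p0=[1.0, 50e-3, 0.3, 5e-3, 0.7])
\end{verbatim}
In the limit $f_{trap} \to 0$ this reduces to the simple exponential decay with time constant $T_{1}$ expected in the absence of trapping.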
\section{Introduction}
Theoretical physicists' wild dream of mapping the space of quantum field theories, even when restricted to unitary, local, and Poincar\'e-invariant ones, is probably unattainable. But add in enough supersymmetry and the dream becomes much tamer, discrete structures emerge, and enumerating them seems within reach. Here we take a step in this direction in the case of ${\mathcal N}\geq3$ supersymmetric field theories in 4 dimensions. In particular, we carry out a classification of the possible moduli space geometries of such theories with rank less than or equal to 2.
\begin{table}[t!]
\footnotesize
\begin{adjustbox}{center}
$\begin{array}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{\multirow{2}{*}{\qquad {\large \text{\bf ${\mathcal N}\geq 3$ Rank-2 Orbifold geometries $\C^2/ \mu_{\tau} ({\Gamma})$}}}}\\
\multicolumn{6}{|c|}{}\\
\hline\hline
\boldsymbol{|{\Gamma}|} & \textbf{Group} & {\Delta}\ \textbf{CB} & \textbf{CFT realization} & \mathcal{N}=4 & \ \boldsymbol{4c=4a}\ \\
\hline
1 & \mathds{1} & 1, 1& \cellcolor{yellow!50} I_0 \times I_0 & \checkmark \times \checkmark & 2 \\
\cline{1-6}
\multirow{2}{*}{2} & \multirow{2}{*}{$\Z_2$}&1,2& \cellcolor{yellow!50} I_0 \times I_0^*& \checkmark \times \checkmark & 4\\
\cline{3-6}
&& \cellcolor{black!10} 2,2,2&\cellcolor{orange!50}[U(1){\times} U(1)\ {\mathcal N}{=}4]_{\Z_{2}}& \checkmark & 2\\
\cline{1-6}
3 &\Z_3&1,3&\cellcolor{yellow!50} I_0{\times} IV^*& \checkmark \times \xmark & 6\\
\cline{1-6}
\multirow{3}{*}{4}&\Z_2{\times}\Z_2&2,2& \cellcolor
inhomogeneous nebulae, as well as, if required, multiple, non-centrally located exciting stars. The time taken to run/converge simulations and renderings on a standard desktop ranges from a number of hours to days. An alternative to this is the pseudo 3D technique of {\mdseries\cloudy}{}\footnote{Cloudy is available for general use under an open source licence: \url{http://www.nublado.org}} \citep{1998CLOUDY90, 2000CLOUDY00, 2013Cloudy}. {\mdseries\cloudy}{} is a large-scale spectral synthesis code designed to simulate physical conditions within an astronomical plasma and then predict the emitted spectrum. {\mdseries\cloudy}{} 3D\footnote{Cloudy3D is available online \url{https://sites.google.com/site/cloudy3d/}} is an IDL library to compute pseudo 3D photoionisation models by interpolating radial profiles between several 1D {\mdseries\cloudy}{} models. Users can generate emission line ratio maps, position-velocity (PV) diagrams, and channel maps, once an expansion velocity field is given. It is significantly faster than full 3D photoionisation codes, allowing users to explore a wide space of parameters quickly. \cite{2013pyCloudy}
developed {\mdseries\pycloudy}{},\footnote{Python wrapper for cloudy. Available online: \url{https://sites.google.com/site/pycloudy/}} a Python library that handles input and output files of the {\mdseries\cloudy}{} photoionisation code. {\mdseries\pycloudy}{} can also generate pseudo 3D renderings from various runs of the 1D {\mdseries\cloudy}{} code.
Typically only 1D velocity information is available from the Doppler shifting of lines along the lines of sight. In special cases, if velocities are high enough and the sources are close enough, then {\it proper motions} can be observed, where, over time, the nebula features can be observed to advance on the plane of the sky. Regarding novae and nebulae, photographic images provide a 2D integration of the emission and absorption along the line-of-sight, but the depth information is flattened. Additional information may be assumed based on the symmetry and orientation of the object being observed. Depth information may also be extrapolated from velocity fields, e.g. mapping between velocity and position in radially expanding nova shells. This requires prior knowledge of the properties of the object being modelled. Alternative methods of modelling are required when theoretical or observational constraints are insufficient; one such example is {\mdseries\shape}{}\footnote{\textsc{{\mdseries\shape}{}} is available online: \url{http://www.astrosen.unam.mx/shape/index.html}} \citep{2011SHAPE}.
{\mdseries\shape}{} is a morpho-kinematic modelling and reconstruction tool for astrophysical objects. Users bring any knowledge of the structure and physical characteristics of the source (e.g. symmetries, overall appearance, brightness variations) to construct an initial model which can be visualised. The model can be compared to observational data allowing for interactive and iterative refinement of the model. Once all necessary physical information is reflected in the model, its parameters can be automatically optimised, minimising the difference between the model and the observational data. The final model can then be used to generate various types of graphical output; \autoref{fig:m29} shows a {\mdseries\shape}{} model of PN M2-9 under an image of M2-9 acquired by NASA. Recent examples where \textsc{{\mdseries\shape}{}} has been employed to model PNe and novae include: \cite{2014Clyne} created a kinematical model of PN MyCn 18, utilising expansion velocities of its nebular components by means of position velocity (PV) arrays, to ascertain the kinematical age of the nebula and its components. \cite{2016Harvey_etal} modelled the Firework nebula and discovered that the shell was cylindrical and not spherical as previously believed. The lower velocity polar structure in this model gave the best fit to the spectroscopy and imaging available. \cite{2019Sophia} presented a morpho-kinematical model of PN HB4 using new Echelle spectroscopic data and high-resolution HST images. \citeauthor{2019Sophia} concluded that HB4 had an absolute mean expansion velocity of $14\,\rm{km\,s}^{-1}$ along the line of sight and proposed that the central part of the nebula consists of a binary system that has a Wolf--Rayet (WR) type companion evolved through the common-envelope channel. Refer to the following for more examples of {\mdseries\shape}{} software used to model PNe and novae: \cite{2016Akras_etal, 2017Starfish, 2017NGC6302, 2018PNe_ExtendedStructure}.
\begin{figure}[!h]
\centering
{\includegraphics[width=3.5in]{M2-9C.jpg}} %
\caption[PNe M2-9]{PNe M2-9: Top image captured from NASA Hubble telescope and processed by Judy Schmidt\footnote{}, filters used: [S\,\textsc{ii}], [O\,\textsc{iii}] and H\,$\alpha$. Bottom image \textsc{Shape} 3D model. \textit{Top image reproduced from \url{https://www.nasa.gov/feature/goddard/hubble-sees-the-wings-of-a-butterfly-the-twin-jet-nebula} }. Refer to \cite{M2-9_Doyle_2000} for more information regarding the morphology of M2-9.}
\label{fig:m29}
\end{figure}
\footnotetext{\url{https://photographingspace.com/apod-judy-schmidt/}}
High resolution imaging may detect shapes; however, a better understanding of the three dimensional structure is achieved when this data is coupled with spatially resolved, high resolution spectra to determine the kinematics of the gas within the nebula. To this end we present a new application and pipeline that uses {\mdseries\shape}{} software to create 3D models with density, velocity and temperature properties. The output of this is a data-cube which is processed by {\mdseries\pycross}{} to generate {\mdseries\cloudy}{} photoionisation models of nebulae. This is achieved using an intuitive interactive graphical user interface (GUI) that does not require programming experience, execution scripts or the need to install additional compilers or libraries. The paper outline is as follows: Section \ref{Section2PyCross} discusses the development, installation, functionality and operational overview of {\mdseries\pycross}{}. Section \ref{Section3Novae} demonstrates {\mdseries\pycross}{}'s scientific pipeline, outlining its use on novae V5668 Sagittarii (2015) and V4362 Sagittarii (PTB 42). Section \ref{Section4PNe} again demonstrates the pipeline for PN LoTr1.
Section \ref{Section5Conslusion} consists of our discussion and conclusions.
\begin{figure*}[!t]
\centering
{\includegraphics[width=\textwidth]{PyCross_FlowChart.png}} %
\caption[PyCross Operational Flowchart GUI]{\textsc{PyCross} Operational Flowchart. Steps for generating new models, updating archived models by modifying \textsc{Cloudy} parameters and visualising current and archived models. A high-level overview of the path to generate a new model is illustrated in \autoref{fig:GUI_Overview}.}
\label{fig:PyCross_FlowChart}
\end{figure*}
\section{\textbf{\textsc{PyCross} }} \label{Section2PyCross}
The steep learning curves encountered when installing, understanding and utilising specialist software are further compounded by the fact that astronomers are often required to develop/code software for specific tasks. \cite{2015_Astro_Software_Survey} carried out a survey, between December 2014 and February 2015, that focused on the use of software and the software skills of participants in the astronomy community worldwide. Participants consisted of 1142 astronomers ranging from students to postdocs and research scientists. All survey participants responded ``yes" when asked if they used software as part of their research; 11\% of responders said they used software developed by others; 57\% used software developed by themselves and others; while 33\% said they used software they developed themselves for specific purposes, as there was no software readily available. This research also revealed the open source language, Python, to be the programming language of choice.
With advances in techniques, hardware and software it is no surprise that the number of astronomy software applications made available has increased considerably over the last decade. More recently, \cite{FITZGERALD2019} developed a standalone application (written in Python) for plotting photometric Colour Magnitude Diagrams (CMDs) using object orientated programming (OOP), a formal Software Design Lifecycle (SDLC) and Test Driven Development (TDD). This standalone application worked ``\textit{out of the box}", required no installation of any additional software to function, and emphasised the importance of quality and standards when developing software for astronomy. An SDLC defines a structured sequence of stages in software engineering to develop the intended software product; a TDD approach relies on a shorter development cycle, where requirements become specific test cases that the software must pass; and OOP allows developers to break software projects down into smaller, more manageable modular problems, one object at a time.
\subsection{Software development}
{\mdseries\pycross}{} was developed and tested on OSX using the PyCharm IDE (integrated development environment) 2018 Community Edition, initially with Python 2.7 and with later iterations using Python 3.7. The free community edition of PyCharm offers usage of both testing frameworks\footnote{PyCharm Testing Frameworks: \url{https://www.jetbrains.com/help/pycharm/testing-frameworks.html}} and code analysis tools\footnote{PyCharm Editions Comparison: \url{https://www.jetbrains.com/pycharm/features/editions_comparison_matrix.html}}. There were 24 iterations of this application before the code was ``\textit{Frozen}", for this release, allowing for the creation of a single executable file that can be distributed to users\footnote{Currently this application is only available as an executable on MAC OS and Windows.}. A frozen application contains all the code and any additional resources required to run the application, and includes the Python interpreter that it was developed on. The major advantage of distributing applications in this manner is that it will work immediately, without the need for the user to have the required version of Python (or any additional necessary libraries) installed on their system. A disadvantage of generating a single file is that it will be large (approximately 183\,MB), as all necessary libraries are incorporated; however, the increase in file size is acceptable when considering other issues, for example ease of installation, running, and portability to other platforms.
The Agile SDLC and TDD approaches employed in \cite{FITZGERALD2019} were adapted in the development of this software; refer to that work for a comprehensive review of employing specific SDLC and TDD approaches in astronomy. The remainder of this section outlines the installation, features and operational overview of \textsc{PyCross}.
\begin{figure*}[!th]
\centering
{\includegraphics[width=\textwidth]{HL_overview.png}} %
\caption[High-level operational overview of {\mdseries\pycross}{}]{A high level operational overview following the path for creating a new model outlined in \autoref{fig:PyCross_FlowChart}. A new 3D \textsc{Shape} model is created with appropriate structure, densities and velocities. A quadrant of the overall model is then sliced to produce a data cube, in the format of a text file \textbf{(\autoref{shapetextfile})}, which is inputted into {\mdseries\pycross}{}. \textsc{Cloudy} options/parameters are then specified and the model is run. {\mdseries\pycross}{}'s console will update, informing the user of the progress. Once completed, the user opens the plotting interface to plot the photoionisation and cut models (see bottom right of diagram) at different angles and inclinations. The colour map used for emission is gist\_earth\_r, one of 71 available to users. The model used here was designed as a theoretical nova with cosmic ray background and an approximate age of 8114 days, blackbody effective temperature of $50\times10^3$\,K, luminosity of $3.6\,\mathrm{L_\odot}$, equatorial expansion velocity of $500\,\rm{km\,s^{-1}}$ and position angle of 45\degree. This model is available online at \url{https://github.com/karolfitzgerald/PyCross_OSX_App}, filename: \textit{Demo1.zip}. A scientific approach for determining optimum \textsc{Cloudy} parameters is outlined in \autoref{fig:pipeline}.}
\label{fig:GUI_Overview}
\end{figure*}
\subsection{ \textsc{{\mdseries\pycross}{}} functionality}\label{pycrossfunctionality}
\textsc{PyCross} was designed to be intuitive to use while also providing automated features that allow the user to be more productive. A high-level operational overview of \textsc{PyCross} functionality is outlined in the operational flowchart in \autoref{fig:PyCross_FlowChart}. Then, \autoref{fig:GUI_Overview} follows the path for generating a \textsc{PyCross} photoionisation rendering of a 3D \textsc{Shape} model, utilising the application's two main GUIs, for creating and plotting models. Smaller windows are used to manage any \textsc{Cloudy} preferences.
\subsubsection{Installation \& directory setup} As previously discussed, this application is ``\textit{frozen}", allowing for the creation of a single executable which contains all code and any additional Python libraries required to run. The only requirement is that \textsc{Cloudy} is pre-installed on the user's system prior to running this programme.
A new directory structure is created in the root folder of the user's computer when \textsc{PyCross} is loaded for the first time. This folder is named ``\textit{PyCrossArchive}" and is the root/destination for all work generated by the application: the following folders and files are generated upon startup and modified during the execution of a model:
\begin{itemize}
\item \textit{\textbf{Model-Name-Timestamp.zip}}: \textsc{PyCross} models are automatically saved using the name assigned followed by a time-stamp of when they are created. The purpose of this is as follows: 1) To archive all models, this allows users the opportunity to review any change in parameters and resulting models generated. 2) A basic model can generate in excess of 100 files, each approximately 14.4\,MB in size. While maintained within a well structured directory this can quickly consume disk space when generating a lot of models. Automatically zipping outputs to a single file significantly reduces file sizes (to approximately 3\,MB zipped for a 14.4\,MB un-zipped file) while also making it easier to process data at a later stage. For example, when handling these archived models, \textsc{PyCross} automatically extracts the contents of the selected zipped file into a temporary folder, and when finished the contents of the temporary folder are deleted. A time-stamp incorporated into the file name allows users to distinguish between models, if the model name does not change.
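A minimal sketch of this archiving step is given after this list.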
\begin{itemize}
\item \textit{\textbf{LogFile.txt}}: A text file that records the parameters used by the user and information relating to the progress of a model as it is being run. This information is also displayed on the main GUI. This file can be used at a later stage to compare models based on their parameters but also to recreate models if needed.
\item \textit{\textbf{MakeData}}: This folder contains the output files generated by \textsc{Cloudy} based on the \textsc{Shape} model.
\item \textit{\textbf{TempData}}: This folder contains data generated by \textsc{PyCross} that allows \textsc{Cloudy} to run the selected \textsc{Shape} model. It is also used to extract archive model data when modifying or plotting archived models.
\end{itemize}
\item \textit{\textbf{Plots}}: This folder contains the following sub-folders which store the generated plots of a model based on their type. Each time plots are generated the contents of this folder are deleted and replaced with new plots.
\begin{itemize}
\item \textit{\textbf{Cuts}}: This folder contains plots of the cuts generated. Cuts will be plotted based on their element and ionisation state. An additional N$^+$/O$^+$/N/O cut, weighted by N\textsc{ii} and adapted from \textsc{PyCloudy}, is also plotted and saved here.
\item \textit{\textbf{Emissions}}: Folder containing plots of the generated emission simulations.
\item \textit{\textbf{PlotArchiveInformation.txt}}: This text file is updated automatically each time new plots are generated and contains information relating to the parameters entered by the user in the plotting options GUI (Plot GUI, see \autoref{fig:GUI_Overview}); this feature is useful for tracking differences in plots based on their parameters.
\end{itemize}
\item \textit{\textbf{PlotArchive}}: All data generated and stored in the \textit{\textbf{Plots}} folder can be exported here and automatically saved as zip files. This reduces time recreating plots at a later stage while also keeping track of the parameter changes carried out in the Plot GUI:
\begin{itemize}
\item \textit{\textbf{Auto Archive}}: If this option is selected in the Plotting Options GUI then all current \& subsequent data generated will automatically be exported to the PlotArchive folder as a zip file. If the user does not change the model name then the naming convention is updated to include the time generated; this ensures that no work is overwritten or lost.
\item \textit{\textbf{Archive Plot}}: There is an archive current data option that can be accessed by clicking this button in the Plotting Options GUI.
\end{itemize}
\end{itemize}
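As a minimal illustration of the timestamped archiving described in the \textit{Model-Name-Timestamp.zip} item above (this is not the \textsc{PyCross} source; names and paths are hypothetical), the Python standard library suffices:
\begin{verbatim}
import zipfile
from datetime import datetime
from pathlib import Path

def archive_model(model_dir, archive_root, model_name):
    # zip a finished model run into Model-Name-Timestamp.zip
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = Path(archive_root) / f"{model_name}-{stamp}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(model_dir).rglob("*"):
            if path.is_file():
                zf.write(path, path.relative_to(model_dir))
    return archive
\end{verbatim}
Re-extracting an archived model into a temporary folder for plotting can be handled analogously with \texttt{zipfile.ZipFile.extractall} and \texttt{tempfile.TemporaryDirectory}.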
\begin{figure}[!htb]
\centering
{\includegraphics[width=2.5in]{O3_Colour_Map_Example_V2.png}} %
\caption[Colour Map Options]{Three colour map examples from a choice of 71. The emission model of [O\textsc{iii}]\, 5007$\AA$ is derived from an example model (see \autoref{fig:GUI_Overview}) with a position angle of 45\degree. Colour-bars correspond to the effective temperature of the ionising source, flux units in ergs $\mathrm{s^{-1}}$. Scaled units of physical size are on \textit{x} and \textit{y} axes.}
\label{fig:colourMapOptions}
\end{figure}
\begin{figure}[!ht]
\centering
{\includegraphics[width=\columnwidth]{Zoom_4.png}} %
\caption[Zoom Functionality]{Photoionisation models of [N\,\textsc{ii}]\, 6584$\AA$ (a, b) and [O\,\textsc{iii}]\, 5007$\AA$ (c, d) derived from the theoretical model in \autoref{fig:GUI_Overview}. Each emission plot initially plotted at an inclination angle of 90\degree (a, c) then at 50\degree (b, d). Colour-bars correspond to the effective temperature of the ionising source, flux units in ergs $\mathrm{s^{-1}}$. Plots are zoomed to better visualise features like the torus across different emissions and angles. The colour map used was hot\_r. }
\label{fig:zoom}
\end{figure}
\subsection{{\mdseries\shape}{} data cube text file }\label{shapetextfile}
The {\mdseries\shape}{} model is based on observational data and can be informed by a combination of narrow-band imaging, medium-high resolution spectroscopy and polarimetry. If these data types are not all at hand, then they can be used individually with the application of some theory or in certain combinations, given data of good quality. Integral field unit spectroscopy can be a particularly powerful observational tool to aid in the derivation of nebular morpho-kinematics\footnote{To download and for a full description of the {\mdseries\shape}{} software please visit: \url{http://bufadora.astrosen.unam.mx/shape/}}. In this section we explain how to construct the output data cube from {\mdseries\shape}{} for input into {\mdseries\pycross}{}, which is then used in the \emph{MakeData} and \emph{TempData} folders.
First, users must derive the morphology of the nebular structure under study and assign a fixed or variable density structure. Next, users must create a `cut' of the nebula so it can be represented by a 2D array. To do this the user must place the nebula with its long axis pointing along the x-axis. Then a slice of a quarter of the major-axis of the axisymmetric shape is taken, see \autoref{fig:GUI_Overview} (2D Slice), \autoref{fig:pipeline}, panel 2(a) and \autoref{fig:lotr1_Shape} (c). This is done in the {\mdseries\shape}{} 3D module. Users must first go to the object primitive, then take a slice of the nebula by setting $\phi_{min} = 90^{\degree}$, $\phi_{max} = 180^{\degree}$, $\theta_{min} = -91^{\degree}$ and $\theta_{max} = -89^{\degree}$.
Once users have made an appropriate 2D slice of their nebula, they move to the {\mdseries\shape}{} render tab. In this tab the renderer must be set to be \emph{Physical}. The slit and image geometries are set to be square such that they overlap and cover the entirety of the 2D slice, again refer to \autoref{fig:GUI_Overview} (2D Slice), \autoref{fig:pipeline}, panel 2(a) and \autoref{fig:lotr1_Shape} (c) for how the structure should appear in the image render window. Finally, users set up a text file to output the position and velocities along the x, y and z axes (Px, Py, Pz, Vx, Vy, Vz) along with the corresponding density and temperature. This is done under \emph{``Render"} in the Parameters section of {\mdseries\shape}{}. When users click the Render button, a text file will be generated in a folder designated by the user. This text file is then imported directly into {\mdseries\pycross}{} via its GUI. Currently only Px, Py, Pz and density arguments are used as input to the {\mdseries\pycross}{} simulations. Temperature and velocity information will be utilised in the next iteration of \textsc{PyCross}.
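The exact column layout of the exported file depends on how the \emph{Render} output is configured; assuming whitespace-separated columns in the order (Px, Py, Pz, Vx, Vy, Vz, density, temperature), a minimal reader for the quantities currently used by {\mdseries\pycross}{} could look like:
\begin{verbatim}
import numpy as np

def read_shape_cube(filename):
    # columns assumed: Px, Py, Pz, Vx, Vy, Vz, density, temperature
    data = np.loadtxt(filename)
    positions = data[:, 0:3]   # Px, Py, Pz
    density = data[:, 6]       # only positions and density are used
                               # by the current release
    return positions, density
\end{verbatim}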
\subsubsection{User Interface \& Operational Overview} The Graphical user interface (GUI) is designed to be as intuitive as possible and consists of two main GUI windows. The first allows the user to set parameters and run models while the second controls the plotting, these interfaces pass information to each other to enhance efficiency. Other GUIs allow users to set additional \textsc{Cloudy} options, emission lines and abundances. The operational flowchart of this application is illustrated in \autoref{fig:PyCross_FlowChart} and a high level overview of the GUIs taking the path of a new model is illustrated in \autoref{fig:GUI_Overview}. A scientific approach to determine optimum \textsc{Cloudy} parameters is discussed in \autoref{Section3Novae} (see \autoref{fig:pipeline}).
Once a \textsc{Shape} model with a suitable morphology, density and velocity has been created, it is placed at an inclination of 90\degree. This 3D model must be represented two dimensionally, by taking a 2D slice of one quarter of the model. A data cube is created that describes the velocity and density at each position in the shell; this is processed by \textsc{PyCross}.
A series of 1D \textsc{Cloudy} simulations are computed along the 2D \textsc{Shape} model slice. Lastly, the 2D photoionisation map is wrapped and flipped in order to create the full pseudo 3D photoionisation model. Currently this technique is constrained to axisymmetric nebulae, but as discussed later this allows for the modelling of a large majority of PNe and novae.
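As an illustration of the final wrap-and-flip step (not the {\mdseries\pycross}{} implementation itself), a 2D emission map covering one quadrant of an axisymmetric nebula can be mirrored about both axes with \texttt{numpy}:
\begin{verbatim}
import numpy as np

def unfold_quadrant(quadrant):
    # `quadrant` covers one quarter of the axisymmetric nebula
    # (axes: cylindrical radius and height); mirror it to recover
    # the full projected map
    top = np.hstack([np.fliplr(quadrant), quadrant])   # left-right
    full = np.vstack([np.flipud(top), top])            # top-bottom
    return full
\end{verbatim}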
The main GUI allows users to set the name of the current model, blackbody temperature, total luminosity, angle range (start-finish) and the number of angles in the range. \textsc{Cloudy} preferences for the model general input, i.e. options, abundances and emissions are entered into smaller windows and when saved, will remain when the programme is started again. There is no limit to the number of parameters entered into these windows provided that they conform to valid \textsc{Cloudy} commands. This removes any learning curve, complex commands and the need to run shell scripts, thus increasing productivity. Once a model is successfully created the emissions list, set in the main GUI will be available and visible for selection in the plotting option GUI. Button functionality on the main GUI is as follows:
\begin{itemize}
\item \textit{\textbf{Select Shape File:}} Allows the user to select the .txt data cube output from {\mdseries\shape}{} (see \autoref{shapetextfile}).
\item \textit{\textbf{Cloudy Options:}} Opens up a new window allowing the user to type/paste {\mdseries\cloudy}{} options, visible at the top left of \autoref{fig:GUI_Overview}. For example when creating a model for a nova, options might include:\\ \\
\textit{``cosmic ray background\\
element limit off -8\\
age 8114 days"}\\
\item \textit{\textbf{Abundances:}} Opens a new window allowing the user to type/paste abundances to be used as input to the model. Examples of abundances can be found by clicking \textit{\textbf{Reference \(|\) Abundance Examples}} in the menu bar.
\item \textit{\textbf{Emissions:}} Opens a new window allowing the user to type/paste emissions to be used as input to the model. Emissions must be added in a particular format of no more than 10 characters. To ensure that emissions are entered correctly, {\mdseries\pycross}{} has an extensive library of emission lines that is available by clicking \textit{\textbf{Reference \(|\) Emission Line Index}} on the menu bar.
\item \textit{\textbf{Open Archive Folder:}} Opens a new Finder/Explorer window where all {\mdseries\pycross}{} data is stored.
\item \textit{\textbf{Run {\mdseries\pycross}{} - New {\mdseries\shape}{} Model:}} This button is clicked to run a new model; the user must first have selected a {\mdseries\shape}{} data cube and entered all necessary {\mdseries\cloudy}{} options.
\item \textit{\textbf{Run {\mdseries\pycross}{} - Archive {\mdseries\shape}{} Model:}} If users create a model and are not satisfied with the outcome or feel that certain parameters need to be changed then they can modify specific {\mdseries\cloudy}{} options outlined above. By clicking this button and selecting an archived model, users can run the model again with updated parameters. This feature saves a lot of time as it does not require the user to start anew.
\item \textit{\textbf{Plot Current Model:}} Open the plotting GUI to plot the current {\mdseries\pycross}{} model. Any emissions entered will automatically be loaded into the plotting options window.
\item \textit{\textbf{Plot Archive Model:}} Select a zipped archived model then open the plotting GUI and proceed to plot. This again saves time not having to create a model from the start, while also offering the user the choice of building a database of models to plot from.
\end{itemize}
An $\lq$emission' as discussed here is a simulated narrow-band image at the wavelength of a specific spectral emission line. Users can plot from one to a maximum of six emissions (plotted in flux units ergs $\mathrm{s^{-1}}$) in a single plot by highlighting the desired emissions and adding them to the ``\textit{Selected Emissions}" list. Once selected, a plotting grid can be adapted to fit the required number of emissions. The number of angles, inclination and labels for x-axis and y-axis are set prior to plotting. Corresponding emission lines will also be displayed at the top of each plot regardless of the number of plots/subplots; the effective temperature of the ionising source is located in the colour-bar to the right of each model.
Users can view a plot $\lq$cut' of N$^+$/O$^+$/N/O weighted by N\textsc{ii}, which is automatically generated. These weighted plots are adapted from the tutorial for {\mdseries\pycloudy}{} \citep{2013pyCloudy}\footnote{\url{https://sites.google.com/site/pycloudy/examples/example-3?authuser=0}}, which is one of the foundations that {\mdseries\pycross}{} was developed on, and show the ionised fraction versus the neutral fraction for the two most commonly studied astrophysical metals, i.e. O and N. Selection of the cuts is performed by first selecting an element, then its ionising state; a maximum of three cuts is allowed per plot. Users may open the plotting folder directly from this GUI and automatically archive all plots generated in that session. This is achieved by selecting the \textbf{\textit{Auto Archive}} checkbox; archive plots are saved using the name and time-stamp of creation, similar to the saving of newly generated models discussed earlier.
The latest version of {\mdseries\pycross}{} includes the functionality that gives users the ability to plot in 71 different colour maps and magnify the centre regions of nebulae, allowing users to better visualise regions and/or possible hidden features, see \autoref{fig:colourMapOptions} and \autoref{fig:zoom}. On analysis of \autoref{fig:colourMapOptions}, the outer regions of the lobes are more clearly defined in (a) than (b) or (c) where the following colour maps were applied: (a) nipy\_spectral\_r, (b) gist\_stern\_r and (c) gist\_heat\_r\footnote{The \_r suffix signifies that this particular colour map has been reversed in matplotlib \url{https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html}}. Colour maps may be misleading; for example, differences of regions within the outer lobes of (a) and (b) in \autoref{fig:colourMapOptions} are greatly exaggerated as compared to those in (c), which is more realistic. Offering this feature allows users to find a trade-off when investigating features of nebulae.
The axes on all plots are in normalised distance units (0-100); the colour-bar corresponds to the effective temperature of the ionising source, with flux units in ergs $\mathrm{s^{-1}}$. Users can add additional text to the axis labels via the Plotting Options GUI, see \autoref{fig:GUI_Overview}. Mathematical notation can also be written for the plot axis labels using a subset of TeX\footnote{\url{https://matplotlib.org/3.1.1/tutorials/text/mathtext.html}} markup by placing text inside a pair of dollar signs (\$).
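For reference, a hedged sketch of the corresponding \texttt{matplotlib} calls (a reversed colour map, TeX markup in an axis label, and a zoom onto the central region) follows; the emission array here is a placeholder, not \textsc{PyCross} output:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

emission = np.random.rand(100, 100)   # placeholder emission map

fig, ax = plt.subplots()
im = ax.imshow(emission, cmap="gist_earth_r", origin="lower")
fig.colorbar(im, ax=ax, label=r"Flux [ergs $\mathrm{s^{-1}}$]")
ax.set_xlabel(r"$x$ [normalised distance, 0--100]")
ax.set_ylabel(r"$y$ [normalised distance, 0--100]")
ax.set_xlim(25, 75)   # zoom on the central region
ax.set_ylim(25, 75)
plt.show()
\end{verbatim}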
\begin{figure}[]
\centering
{\includegraphics[width=\columnwidth]{2018_Shape_V5668Sgr.png}} %
\caption[2018 Shape model for V5668 Sgr]{{\mdseries\shape}{} model for V5668 Sgr. Figure adapted from \cite{2018Harvey_etal}.}
\label{fig:2018V5568}
\end{figure}
\begin{figure*}[t]
\centering
{\includegraphics[width=\textwidth]{2018_Paper_C.png}} %
\caption[Model for Nova V5668 Sgr]{(a) Spectral line profile simulations of [O\,\textsc{iii}] nebular and auroral lines of V5668 Sgr on day 822 post discovery. The blue-solid lines represent observed line profiles and were used in the fitting of a morpho-kinematical model with the \textsc{{\mdseries\shape}{}} software, represented by overlaid black dots. The auroral line is fitted with an equatorial disk whereas the nebular lines fit an equatorial waist and polar cones morphology with a Hubble outflow velocity law. (b) {{\mdseries\pycross}} emission models of V5668 Sgr, using nova abundances with the inclination angle set at 85\degree. Clumpiness was simulated using a Perlin noise modifier in \textsc{{\mdseries\shape}{}}. The geometry used for this model is illustrated in \autoref{fig:2018V5568}. Figure adapted from \cite{2018Harvey_etal}. O\,\textsc{ii} 4341$\AA$ was shown to demonstrate that the line emission is expected to arise from the same geometrical location and so, to deblend it from 4340$\AA$, the same line profile shape would be assumed. With regard to the two nebular [O\,\textsc{iii}] lines, they were shown in \cite{2018Harvey_etal} to demonstrate the consistency in line shape expected between the 4959$\AA$ and 5007$\AA$ [O\,\textsc{iii}] lines, and how different the auroral [O\,\textsc{iii}] line shape was. Colour-bars correspond to the effective temperature of the ionising source, flux units in ergs $\mathrm{s^{-1}}$. The {{\mdseries\pycross}} scientific pipeline is illustrated in \autoref{fig:pipeline}.
\label{fig:ShapeV5668}}
\end{figure*}
\section{\textbf{\textsc{PyCross}} pipeline applied to novae}\label{Section3Novae}
Novae are the result of an eruption on the surface of a white dwarf in a close binary system. The white dwarf's counterpart is normally a red dwarf or sub-giant, which has overfilled its Roche lobe and thus loses mass through an accretion stream onto the white dwarf surface. Pressure at the white dwarf-accreted envelope interface increases due to a buildup of hydrogen-rich material, eventually resulting in thermonuclear runaway. This subsequent eruption reaches luminosities $\gtrsim 10^4~\mathrm{{\rm L}_\odot}$, and ejects a mixture of material: a combination of that processed in the thermonuclear runaway, material dredged up from the white dwarf, and the previously accreted outer layers of its companion. The ejected material reaches velocities of order $10^3$--$10^5\,\rm{km\,s^{-1}}$. Emission line profiles indicate considerable spatial density and velocity structure. Immediately after a nova eruption the ejected material is dense, bright and optically thick. This soon fades after revealing H\,\textsc{i} and He\,\textsc{i} emission lines. Over time [O\,\textsc{iii}], [N\,\textsc{ii}] and [Ne\,\textsc{iii}] become stronger relative to the fading continuum \citep{2012AJ....144...98W}. Over a few years the ejecta will be observed as a fading, constant-velocity expanding nebulous shell surrounding the post-nova star. These eruptions are not destructive enough to change either star and generally they return to their quiescent state, on decadal timescales. Classical novae repeat the process every $\sim\,10^4\,\mbox{--}10^5\,\rm{yr}$ \citep{2006AstroBook}. However, a recurrent nova population also exists, with observed recurrence periods on human timescales. Shorter rates of recurrence are related to heightened accretion rates onto higher mass white dwarfs \citep{1995ApJ...445..789P,2005ApJ...623..398Y,2010ApJ...725..831S}.
The material ejected from the white dwarf surface generally forms an axisymmetric shell of gas and dust around the system. These 3D shell structures are difficult to untangle as viewed on the plane of the sky without additional velocity information.
While spectroscopic data can be used to yield approximate values for temperature, velocity, and density along the line of sight of the object, a photoionisation model is required to determine the chemical structure of a nebula \citep{bohigas2008photoionization}. As discussed in \cite{2018Harvey_etal}, photoionisation modelling of ejected nova shells during their nebular stage of evolution can contribute to estimates of the total mass and abundances of heavy elements ejected during nova events \citep{2011Atypical}. Examples of the ability to realise photoionisation models of novae, more specifically V5668 Sgr and V4362 Sgr, created by the {\mdseries\pycross}{} pipelines, are outlined in \autoref{V5668_Paper} and \autoref{PTB_42}.
\subsection{Nova: V5668 Sagittarii (2015)}\label{V5668_Paper}
\cite{2018Harvey_etal} investigated V5668 Sgr (2015), a slow-evolving, extremely bright nova on the surface of a CO white dwarf. The nova event produced dust \citep{2018DustV5668}, and was classified as being of the DQ Her type\footnote{Archetype for rich dust-forming slow novae, and historically significant following a major observed eruption in 1934; it was one of the first novae to be analysed with high-cadence spectroscopic observations, results from which were later used to classify nova spectra into 10/11 subclasses by \cite{1942McLaughlin}.}. The V5668 Sgr nova event holds the record for the longest sustained gamma-ray emission from such an event \citep{2018V5668}.
A goal of this research is to better understand the early evolution of classical nova shells in the context of the relationship between polarisation, photometry, and spectroscopy in the optical regime. Observations over five nights with the Dipol-2 instrument, mounted on the William Herschel Telescope (WHT) and the Royal Swedish Academy of Sciences (KVA) stellar telescope on La Palma, yielded polarimetric data directly following the nova's dust formation episode. While this nova shell is not yet resolvable with medium-sized ground-based telescopes, \cite{2018Harvey_etal} utilised \textsc{PyCross} to model and visualise the ionisation structure of V5668 Sgr. The polarimetric and spectroscopic data revealed conditions present in the expanding nova shell, allowing for the creation of a {\mdseries\shape}{} model. The proposed geometry consisted of an equatorial waist and polar cones, see \autoref{fig:2018V5568}.
Initial models consisted of broad parameter sweeps that were refined using {\mdseries\pycloudy}{} \citep{2013pyCloudy}. This allowed examination of emission line ratios for the hot--dense--thick nova shell under study. A cylindrical primitive was used to construct the equatorial waist, to which a density, thickness and Hubble velocity law were applied. The polar features were constructed using cone primitives. In this instance, the densities applied were estimates from {\mdseries\cloudy}{} simulations, and the velocity components were derived by measuring the Doppler-broadened characteristics of observed emission lines. Emission lines in fast outflows of unresolved structures are governed by their velocity field and orientation towards the observer. Analysis techniques available in {\mdseries\shape}{} allow users to disentangle projection effects. The line shapes modelled in \autoref{fig:ShapeV5668} (a) are displayed at an inclination of 85\degree, a polar velocity of $940\,\rm{km\,s^{-1}}$ and an equatorial velocity of $640\,\rm{km\,s^{-1}}$ at their maximum extensions.
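The effect of the velocity field and viewing angle on an unresolved line profile can be sketched with a short, self-contained Python example. This is not the \textsc{Shape} or {\mdseries\pycross}{} code; the sampling of the waist and cones, the cone opening angle, the shell thickness and the velocity-axis binning are illustrative assumptions, with only the inclination and maximum velocities taken from the values quoted above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
incl = np.radians(85.0)            # inclination of the symmetry (z) axis
v_eq, v_pol = 640.0, 940.0         # maximum equatorial / polar speeds (km/s)

def v_los(vx, vy, vz):
    # line-of-sight component for an axis tilted by `incl` about the x axis
    return vy * np.sin(incl) + vz * np.cos(incl)

# Equatorial waist: radial outflow in the x-y plane, Hubble law v ~ r
phi = rng.uniform(0.0, 2.0 * np.pi, 20000)
r = rng.uniform(0.7, 1.0, phi.size)
waist = v_los(v_eq * r * np.cos(phi), v_eq * r * np.sin(phi), 0.0 * r)

# Polar cones: outflow within 30 degrees of the +z / -z axis
theta = np.radians(rng.uniform(0.0, 30.0, 20000))
phi2 = rng.uniform(0.0, 2.0 * np.pi, theta.size)
r2 = rng.uniform(0.7, 1.0, theta.size)
sgn = rng.choice([-1.0, 1.0], theta.size)
cones = v_los(v_pol * r2 * np.sin(theta) * np.cos(phi2),
              v_pol * r2 * np.sin(theta) * np.sin(phi2),
              v_pol * r2 * np.cos(theta) * sgn)

# Synthetic line profile: histogram of line-of-sight velocities
profile, edges = np.histogram(np.concatenate([waist, cones]),
                              bins=100, range=(-1000.0, 1000.0))
print(profile.max(), 0.5 * (edges[1:] + edges[:-1])[profile.argmax()])
\end{verbatim}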
Emission models of V5668 Sgr were created using optimum values derived from {\mdseries\cloudy}{}/{\mdseries\pycloudy}{} parameter sweeps. The average density was found to be $\sim 1.0\times10^9\,\rm{cm^{-3}}$; the luminosity and effective temperature were set to log($L/\rm{L_\odot}$) = 4.36 and $1.8 \times {10^5}\,\rm{K}$, respectively. To recreate the nova conditions on day 141 post discovery, an inner and an outer radius were set to $3.2\times10^{14}\,\rm{cm}$ and $6.4\times10^{14}\,\rm{cm}$, respectively. \autoref{fig:ShapeV5668} (b) illustrates the {\mdseries\pycross}{}\ emission models for V5668 Sgr, where a comparison of the locality of emission through the shell of the same species is presented in each column of three panels. These models show the ionic cuts for C, N and O, respectively. An ionic cut is a slice of the ionised structure for a specific ionisation state of a species. \autoref{fig:ShapeV5668} presents the first state of ionisation for C, N and O. The colour-bar shows the ionised fraction at different geometrical locations for the appropriate ionisation state.
\cite{2018Harvey_etal} revealed variability in polarisation suggesting internal shocks in the nova outflow, supported by the presence of gamma-ray emission \citep{2018V5668}. The position angle of this nova was determined using the polarimetry observations. Spectroscopy allowed for derivation of the physical conditions, including outflow velocity and structure, nebular density, temperature and ionisation conditions. Photoionisation models generated from {\mdseries\pycross}{} gave further insight into the nova system as a whole. \cite{2018Harvey_etal} concluded that, although slow novae are regularly referred to as ``nitrogen flaring'', their findings suggest that they are in fact more likely ``oxygen flaring''.
\begin{figure*}[!t]
\centering
{\includegraphics[width=6.6in]{pipeline_v3.jpg}} %
\caption[PyCross Scientific Pipeline]{\textsc{PyCross} scientific pipeline for V4362 Sgr\,/\,PTB 42 (1994). This figure is split into three main sections (1--3). 1 represents observational data used to derive the spatial structure of an emission nebula: 1(a) is narrow-band imaging, used to inform the extent and axial ratio of the nebula; 1(b) is low-resolution spectroscopy, used to derive density, abundances and ionisation conditions; 1(c) is high-resolution spectroscopy, used to derive the velocity information for individual spectral lines. 2(a) is a suitable {\textsc{Shape}} geometry, seen in full in panels A and B, but arranged in panel D in a position to be output to a text file readable by \textsc{PyCross}, see section \ref{shapetextfile} and \cite{2020Harvey_etal}. 2(b) is a diagnostic diagram generated to interpret the emission line ratios measured from broadband low-resolution spectra. 3 is the final output \textsc{PyCross} model generated by applying the geometry and density at each geometrical point (i.e. from the \textsc{shape} output text file). Other parameters such as the stellar luminosity, stellar effective temperature, and abundances are input to {\textsc{PyCross}}'s GUI. A set of 1D Cloudy simulations is run through the 2D parameter space and then wrapped in azimuth around the complete shell, creating a pseudo 3D model for any observable emission line; here we show the [N\,\textsc{ii}] 6584$\AA$ line at a position angle of 60\degree. Colour-bars in 3 correspond to the effective temperature of the ionising source, with flux units in $\mathrm{erg\,s^{-1}}$; the colour-bar in the \textsc{PyCloudy}{} diagnostic diagram, 2(b), indicates the effective temperature of the ionising source. Figure adapted from \cite{2020Harvey_etal}.}
\label{fig:pipeline}
\end{figure*}
\subsection{Nova: V4362 Sagittarii (PTB 42)}\label{PTB_42}
More recently, \cite{2020Harvey_etal} utilised the current version of {\textsc{PyCross}} to aid in uncovering a previously undiscovered classical nova shell surrounding the nova system PTB 42, of the DQ Her type. Imaging was acquired from the Aristarchos telescope in Greece and consisted of two narrow-band filters: H\,$\alpha\,+\,[N\textsc{ii}]$ (6578$\AA/40\AA$)\footnote{The first number represents the centre wavelength and the second number represents the filter width in Angstroms.} and [O\,\textsc{iii}] (5011$\AA/30\AA$), with exposures of 30\,--\,40 minutes in each filter. High-resolution spectroscopic data were obtained using the Manchester Echelle Spectrograph (MES) instrument mounted on the 2.1\,m telescope at the San Pedro M\'artir (SPM) observatory, Mexico. The PTB 42 nova shell was detected using the low-resolution, high-throughput SPRAT spectrograph on the Liverpool Telescope during August of 2016. The {\mdseries\pycross}{} pipeline (see \autoref{fig:pipeline}) was then used to generate emission models based on the imaging and spectroscopic observations. Based on these observations, the nova shell was reproduced in {\mdseries\shape}{} and then passed to {\mdseries\pycross}{}.
The P.A. for this target was chosen based on polarimetric observations presented in \cite{evanspol}, in addition to close examination of the major and minor axes in the narrow-band imaging (top left of \autoref{fig:pipeline}). High-resolution spectroscopy from MES was used to find the radial velocity of individual components within the nova shell. Difficult to derive parameters are the system inclination, covering and filling factors of the nova shell, as well as the opening angle of the polar cones.
In this study a number of morpho-kinematic models were created to determine the best fit of image, PV array and 1D line spectrum to commonly proposed nova shell morphologies. While spatial information could not be resolved, structures within the nova could be resolved via line-of-sight velocities. The observed equatorial velocity is 350\,km\,s$^{-1}$, as measured from the MES spectra. An axial ratio of 1.4 and a polar velocity of 490\,km\,s$^{-1}$ were initially chosen based on the inclination-corrected axial ratio for the similar novae DQ Her and T Aur \citep{BodeNova}. Adjusting for inclination when fitting to the line profile gives an equatorial velocity of 390\,km\,s$^{-1}$ and a polar velocity of 550\,km\,s$^{-1}$. This allowed the remaining velocities to be set to 550$\,\times\,\rm{r/r0}$\, (km s$^{-1}$). The observed [N~\textsc{ii}] line profile, taken from the 2012 observation (seen encircled in red at the top left of \autoref{fig:pipeline}), was chosen for modelling the nova shell structure as it had the highest S/N.
The luminosity of the system was estimated based on that of the class archetype, namely DQ Her, during quiescence \cite{1984Ferland}. The inner and outer radii of the shell are estimated from the observed expansion velocity distribution, the narrow-band imaging and the shell age, although the actual shell thickness is difficult to know without resolving it spatially. The abundances used were those of the archetypical slow nova, DQ Her \citep{1984Ferland}.
\begin{figure}[!ht]
\centering
\includegraphics[width=2.5in]{LoTr_1_Updated.png}
\caption[LoTr 1 \textsc{SHAPE} Model Mesh (3D Module)]{(a) Deep, narrowband images of PN LoTr\,1, in [O\,\textsc{iii}] 5007$\AA$. North is to the top of the image, East is to the left. Image acquired using the European Southern Observatory (ESO) Multi-Mode Instrument (EMMI), mounted on the 3.6\,m New Technology Telescope (NTT) of La Silla Observatory. The outer shell appears brighter in both the northeast and southeast directions of the image, which could suggest the possibility of an inclined, elongated structure due to the projection effect. \textit{ Image credit and \copyright \,} \cite{tyndall_lotr_2013}. (b) 3D {\mdseries\shape}{} model of LoTr\,1 from different views. The view in the bottom right quadrant of (b) represents the nebula as in (a). (c) {\mdseries\shape}{} quadrant slit of LoTr 1 where the shells have been aligned. The resulting data cube is then passed to {\mdseries\pycross}{}.}
\label{fig:lotr1_Shape}
\end{figure}
In this work {\mdseries\pycross}{} was employed to better understand the observed ionised nebula structure by using broadband spectra from which line ratios were measured. A grid of models was generated to determine the best-fitting model parameters, which were then applied through the derived geometry. This geometry is determined by matching line profiles in high-resolution spectra and narrow-band imaging in {\mdseries\shape}{}. Polarimetry and imaging were used to determine the position angle of the shell. Using abundances adapted from the DQ Her nova shell model of \cite{1984Ferland}, {\mdseries\pycross}{} generated a pseudo 3D simulation of the ionisation structure of PTB 42, as seen in the top centre of \autoref{fig:pipeline}. Results show the difference in emission regions for the strongest nebular lines, i.e. [N\,\textsc{ii}] and [O\,\textsc{iii}]. The {\mdseries\pycross}{} model presented in \cite{2020Harvey_etal} detailed the observed components, i.e. the equatorial ring, higher-latitude rings and polar features of the nova, allowing their individual behaviour to be examined.
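The azimuthal wrapping step at the heart of this pseudo-3D construction can be illustrated with a minimal, self-contained sketch. This is not the {\mdseries\pycross}{} implementation; the toy emissivity map, grid size and edge-on viewing angle are illustrative assumptions, and in the real pipeline the meridional-plane values would instead be interpolated from the grid of 1D Cloudy runs.
\begin{verbatim}
import numpy as np

n = 64                                   # voxels per axis of the pseudo-3D cube
ax = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
R_cyl = np.hypot(X, Y)                   # cylindrical radius of each voxel

def emissivity(R, z):
    """Toy meridional-plane emissivity: an equatorial waist plus polar cones.
    In the real pipeline this would interpolate a grid of 1D Cloudy results."""
    waist = np.exp(-((R - 0.6)**2 + z**2) / 0.01)
    cones = np.exp(-((R - 0.4 * np.abs(z))**2) / 0.01) * (np.abs(z) > 0.3)
    return waist + cones

# Axisymmetry: the cube value depends only on (R_cyl, z), which is exactly
# the "wrap in azimuth around the complete shell" operation described above.
cube = emissivity(R_cyl, Z)

# Project along the y axis to obtain a plane-of-sky image (edge-on view);
# an arbitrary inclination would first rotate the cube about the x axis.
image = cube.sum(axis=1)
print(cube.shape, image.shape)
\end{verbatim}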
\section{\textbf{\textsc{PyCross}} pipeline applied to PNe}\label{Section4PNe}
Post-main sequence low- to intermediate-mass stars ($\sim0.8\,$M$_\odot-8\,$M$_\odot$) will over time evolve to become planetary nebulae as the outer layers of the star are ejected through thermal pulses after the AGB phase. This exposes hotter layers that can ionise the previously ejected material. A detailed description of the evolutionary track and mechanics a star undergoes at each phase on its journey to become a PN is presented in \cite{2009Prialnik}; see also \cite{Herwig2005}.
There are complications to this simple picture, particularly in the role of binarity and the possibility of `mimics' such as symbiotic systems \citep{boffin2019importance}. As a result, there is no universal observational definition for PNe, but their existence is usually dependent on two components: a circumstellar shell of sufficient mass and density to be detectable, and a hot central star to ionise it. PNe candidates are usually discovered by objective-prism surveys or by direct imaging in a narrow spectral region around a strong emission line or lines such as [O\,\textsc{iii}] 4959$\AA$, 5007$\AA$ or H\,$\alpha$ and [N\,\textsc{ii}] 6548$\AA$, 6583$\AA$ \citep{2006AstroBook}. Their expanding shell sizes ($\sim$\,0.2\,pc) and expansion velocities ($\sim$\,25\,km\,s$^{-1}$) imply a dynamical lifetime of $\sim$\,10$^4$\,yr \citep{kwok_2000}.
As intermediate mass stars make up a large portion of the stellar mass in our galaxy, studying how their nuclear-processed interiors are ejected into intricate nebulae and eventually into the interstellar medium can lead to a deeper understanding of the galaxy's chemical evolution. PNe spectra are rich in emission lines, including the interesting \textit{forbidden lines}, and serve as a laboratory for the physics and chemistry of photoionisation. Modelling the 3D spatiokinematic structure, along with the 3D photoionisation of PNe and their mimics, can contribute to understanding how these spectacular nebulae are formed.
\subsection{PN LoTr 1}
The previous section described {\mdseries\pycross}{} and its use in modelling and investigating novae emissions. In this subsection, and for the first time, {\mdseries\pycross}{} will be used in modelling emission from LoTr\,1, a PN believed to contain a binary central star system consisting of a K-type giant and a white dwarf, first discovered by \cite{longmore1980}.
A morpho-kinematic {\mdseries\shape}{} model (\autoref{fig:lotr1_Shape} (b)) was created based on long-slit Echelle spectroscopy acquired using the Anglo-Australian Telescope/University College London Echelle Spectrograph (AAT/UCLES) and the New Technology Telescope-ESO Multi-Mode Instrument (NTT-EMMI) focused on the [O\,\textsc{iii}] 5007$\AA$ emission over eight different slit positions, published in \cite{tyndall_lotr_2013}. Careful measuring of axes and relative sizes yielded a model constructed of two elongated shells with inner and outer shell radii of 5$''$ and 12$''$ respectively. The two shells have different orientations, differing by 50$\degree$ in position angle and by 57$\degree$ in inclination (see \autoref{fig:lotr1_Shape} (b)). Modifiers were applied to elongate the inner and outer shells along the z-axis by factors of 1.3 and 1.5 respectively.
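The geometry described above can be sketched in a few lines of Python (an illustrative stand-in for the {\mdseries\shape}{} primitives and modifiers; the point sampling and the rotation conventions are assumptions, while the radii, elongation factors and orientation offsets are the values quoted above).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def rot_x(a):   # rotation about the x axis (used here for inclination)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):   # rotation about the z axis (used here for position angle)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def shell(radius, elongation, pa_deg, incl_deg, n=5000):
    """Points on a sphere of given radius, elongated along z, then rotated."""
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = radius * u * np.array([1.0, 1.0, elongation])
    return pts @ rot_x(np.radians(incl_deg)).T @ rot_z(np.radians(pa_deg)).T

# Inner and outer shells of LoTr 1: radii 5" and 12", elongations 1.3 and 1.5,
# differing by 50 deg in position angle and 57 deg in inclination.
inner = shell(5.0, 1.3, pa_deg=0.0, incl_deg=0.0)
outer = shell(12.0, 1.5, pa_deg=50.0, incl_deg=57.0)
print(inner.shape, outer.shape)
\end{verbatim}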
\begin{figure}[!ht]
\centering
\includegraphics[width=2.5in]{slit1shape.png}
\caption[LoTr 1 Slit 1 comparison]{Left: Recreation of slit position 1 acquired by \cite{tyndall_lotr_2013}. Right: Theoretical PV diagram created in {\mdseries\shape}{}. West is up.}
\label{fig:slit1shape}
\end{figure}
\begin{table*}[!htbp]
\centering
\begin{tabular}{ccccc}
\hline
& \textbf{Radius (cm)}& \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Expansion Velocity \\ (km/s)\end{tabular}}} & \textbf{Morphology} & \textbf{\begin{tabular}[c]{@{}c@{}}Kinematical Age \\ (yrs)\end{tabular}}\\ \hline
\multicolumn{1}{c}{\textbf{Outer Shell}} & $2.84\times10^{16}$ & 25 $\pm$ 4 & Elliptical & 35,000 $\pm$ 7,000 \\
\multicolumn{1}{c}{\textbf{Inner Shell}} & $9.14\times10^{15}$ & 17 $\pm$ 4 & Elliptical & \multicolumn{1}{c}{17,000 $\pm$ 5,500} \\ \hline
\end{tabular}
\caption{Table of parameters for both inner and outer shells of LoTr 1 from \cite{tyndall_lotr_2013}.}
\label{tab:lotr1}
\end{table*}
A comparison of the first slit acquired by \cite{tyndall_lotr_2013} with that recreated in {\mdseries\shape}{} can be seen in \autoref{fig:slit1shape}. The observed slit position is in the left-hand panel and the right panel shows the theoretical PV array. The systemic velocity of the central shell, V$_{\mathrm{sys}}$, was determined to be 14 $\pm$ 4 km/s \citep{tyndall_lotr_2013}, and the velocity axis of these data is plotted relative to V$_{\mathrm{sys}}$. An important similarity is that the major axis has the same length as the diameter of the inner shell. This contributes to the assumption that it is a closed, isolated structure rather than a bipolar nebula; a bipolar nebular structure viewed from above would not give this regular shape. The ellipses seen along the velocity axis appear symmetric in both panels, in agreement with a spherical shell or elongated ovoids.
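The velocity ellipse expected from such a closed, homologously expanding shell can be reproduced with a short synthetic PV-diagram sketch. This is an illustration rather than the {\mdseries\shape}{} model itself; the slit width and binning are assumptions, and the inner-shell expansion velocity and elongation factor quoted above are used purely for orientation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200000
v_exp, elong = 17.0, 1.3      # inner-shell expansion speed (km/s), z-elongation

# Points on an ellipsoidal shell expanding homologously (v proportional to r)
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pos = u * np.array([1.0, 1.0, elong])     # shell surface, elongated along z
vel = v_exp * pos                         # Hubble-like (homologous) velocity field

# Observer along +y: plane of sky is (x, z), line-of-sight velocity is v_y
x_sky, z_sky, v_los = pos[:, 0], pos[:, 2], vel[:, 1]

# A narrow slit running north-south (along z), centred on the nebula
in_slit = np.abs(x_sky) < 0.05
pv, z_edges, v_edges = np.histogram2d(z_sky[in_slit], v_los[in_slit], bins=60)
print(pv.shape, pv.max())     # the populated bins trace out a velocity ellipse
\end{verbatim}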
\begin{figure}[!h]
\centering
\includegraphics[width=2.2in]{pycross_cuts2.png}
\caption[LoTr1 {\mdseries\pycross}{} photoionisation Cuts]{LoTr 1 {\mdseries\pycross}{} photoionisation cuts of He$^{+}$ and O$^{++}$, using PN abundances with the inclination angle set at 40$\degree$. Colour-bars correspond to the effective temperature of the ionising source, flux units in ergs $\mathrm{s^{-1}}$.}
\label{fig:pycross_cuts2}
\end{figure}
Minor modifications were made to align the shells of LoTr\,1 prior to creating a data cube slice in order to create a photoionisation model using {\mdseries\pycross}{} (\autoref{fig:lotr1_Shape} (c)). Default PN abundances were used from \cite{aller1983chemical} and \cite{khromov1989planetary}, with high depletions for elements not listed. Emission models generated from {\mdseries\pycross}{} for LoTr\,1 can be seen in \autoref{fig:LoTr1_pycross_40}. These models are set at an inclination angle of 40$\degree$, with a blackbody effective temperature of $\sim$100,000 K \citep{gruendl2001variable} and a luminosity of 100 L$_{\odot}$ \citep{henry1999morphology}. These parameters were chosen as they were ascertained from the Helix nebula (NGC 7293), one of the closest and brightest PNe \citep{hora2006infrared}. While a hot black body spectrum is used for the modelling here, it is possible to use more complex central star spectra such as, for example, WR-type spectra, as applicable to HB4 \citep{2019Sophia} and generally found in around 10\% of PNe \citep{2011DePew}. \autoref{fig:pycross_cuts2} shows ionic cuts of He$^{+}$ and O$^{++}$, top and bottom respectively, at an inclination angle of 90$\degree$ for the left column, and 40$\degree$ for the right column. It is worth noting that the object appears entirely spherical at 90$\degree$ because this angle shows a cut through the object head on. Similar observations were made by \cite{tyndall_lotr_2013}, who additionally reported clear asymmetry in the nebular structure, although the view is close to head-on.
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{LoTr1_PhotoModel.png}
\caption[LoTr1 {\mdseries\pycross}{} Photoionisation Model - Emissions 40$\degree$]{LoTr 1 {\mdseries\pycross}{} photoionisation model using PN abundances with the inclination angle set at 40$\degree$. The input {\mdseries\shape}{} model is seen in \autoref{fig:lotr1_Shape} with changes made to the inclination angles of the two shells to make the entire object symmetrical. Colour-bars correspond to the effective temperature of the ionising source, flux units in ergs $\mathrm{s^{-1}}$.}
\label{fig:LoTr1_pycross_40}
\end{figure*}
In this scenario, and for simplicity, attention is limited to two elements: oxygen and helium. While diffuse nebulae contain vast quantities of material, they have relatively low densities (approximately $10^{3}$ particles $\mathrm{cm^{-3}}$). They are rich in emission lines such as the Balmer lines of hydrogen, as well as lines which arise from transitions between energy levels in ions such as O$^{+}$, O$^{++}$, N$^{+}$, etc. As the lifetimes of excited levels of hydrogen are very short (fractions of a microsecond, due to the fast nature of electric dipole transitions), the population of excited hydrogen is negligible, so any hydrogen can be assumed to be in its ground state for the purpose of ionisation rate calculations.
\autoref{fig:pycross_cuts2} compares O$^{++}$ and He$^{+}$ in our model for LoTr 1 showing predicted emissions maps of this PN in spectral lines other than those observed, to be compared with future observations. The stellar spectrum and nebular analysis could be developed further from this starting point based on more information regarding the central star. This would require a more extensive data-set for further investigation into the morphology of LoTr 1 in order to confirm its true structure and inclination. A data-set with a higher signal-to-noise ratio would be necessary to resolve the very faint outer shell.
Oxygen is a particularly useful element for analysing the physical structure of a nebula. The well-known [O\,\textsc{iii}] 5007$\AA$ optical line strongly contributes to nebular cooling, while more highly ionised O\,\textsc{vi} lines can radiate at higher energies in hotter parts of the nebula. Where ionised hydrogen is found, we expect to find ionised oxygen, as the first ionisation potential of oxygen is almost identical to that of hydrogen. The difference between the two, however, is that oxygen can be ionised more than once \citep{dyson1997physics}.
\section{Conclusion}\label{Section5Conslusion}
A new application has been presented for the generation of pseudo 3D photoionisation models of thin shell nebulae modelled in {\mdseries\shape}{}. Functionality, an operational overview and a scientific pipeline have been described, together with scenarios where {\mdseries\pycross}{} has been adopted for novae and PNe. The software was developed using a formal software development lifecycle, is written in Python and will work without the need to install any development environments or additional Python packages. This application, {\mdseries\shape}{} models and {\mdseries\pycross}{} archive examples are freely available to students, academics and the research community on GitHub (\url{https://github.com/karolfitzgerald/PyCross\_OSX\_App}) for download. The authors cordially request that this paper be referenced when using this tool.
\section*{Acknowledgements}
This work was supported by Athlone Institute of Technology and National University of Ireland-Galway and funded in part by the Irish Research Council's postgraduate funding scheme.
The development of this work was helped by invaluable discussions with Dr Christophe Morisset of UNAM. We also wish to acknowledge the databases that made the calculations in this work possible: recombination coefficients were taken from \url{http://amdpp.phys.strath.ac.uk/tamoc/RR/} and \url{http://amdpp.phys.strath.ac.uk/tamoc/DR/}, and the ionic emission data are from version 7.0 of CHIANTI. CHIANTI is a collaborative project involving the NRL (USA), the Universities of Florence (Italy) and Cambridge (UK), and George Mason University (USA).
\section*{References}
|
\section{Introduction}
Currently, and for the next 10--15 years, gravitational waves and
neutron stars are at the focus of the scientific community. As it
seems, the Large Hadron Collider at CERN indicates that new
physics may lie well above 15 TeV center-of-mass energy, and thus
contemporary science is focused on neutron star (NS) physics (for
an important stream of reviews and textbooks see for example
\cite{Haensel:2007yy,Friedman:2013xza,Baym:2017whm,Lattimer:2004pg,Olmo:2019flu})
and astrophysical objects merging, which may provide insights to
fundamental physics problems. Indeed, neutron stars (NSs) have
multiple correlations with various physics research areas, like
nuclear physics
\cite{Lattimer:2012nd,Steiner:2011ft,Horowitz:2005zb,Watanabe:2000rj,Shen:1998gq,Xu:2009vi,Hebeler:2013nza,Mendoza-Temis:2014mja,Ho:2014pta,Kanakis-Pegios:2020kzp},
high energy physics
\cite{Buschmann:2019pfp,Safdi:2018oeu,Hook:2018iia,Edwards:2020afl,Nurmi:2021xds},
modified gravity
\cite{Astashenok:2020qds,Astashenok:2021xpm,Astashenok:2021peo,Capozziello:2015yza,Astashenok:2014nua,Astashenok:2014pua,Astashenok:2013vza,Arapoglu:2010rz,Panotopoulos:2021sbf,Lobato:2020fxt,Oikonomou:2021iid,Odintsov:2021nqa,Odintsov:2021qbq}
and relativistic astrophysics
\cite{Bauswein:2020kor,Vretinaris:2019spn,Bauswein:2020aag,Bauswein:2017vtn,Most:2018hfd,Rezzolla:2017aly,Nathanail:2021tay,Koppel:2019pys}.
Apart from the physical implications of isolated neutron stars,
surprises for fundamental physics may arise from the merging of
astrophysical objects, the mysteries of which are analyzed by the
LIGO-Virgo collaboration. Already the GW170817 event
\cite{TheLIGOScientific:2017qsa} has changed the way of thinking
in theoretical cosmology by indicating that gravitational waves
propagate with a speed equal to that of light. The physics of
astrophysical gravitational waves, and more importantly that of
primordial gravitational waves, is expected to reshape thinking in
theoretical cosmology, or to verify many of its theoretical
proposals. Future experiments like the Einstein Telescope (Hz--kHz
frequencies) \cite{Hild:2010id}, the space-borne Laser
Interferometer Space Antenna (LISA) \cite{Baker:2019nia,Smith:2019wny},
the BBO \cite{Crowder:2005nr,Smith:2016jqs}, DECIGO
\cite{Seto:2001qf,Kawamura:2020pcg} and, finally, the SKA (Square
Kilometre Array) pulsar timing arrays at frequencies $\sim 10^{-8}$ Hz
\cite{Bull:2018lat} are expected to shed new light on fundamental
high-energy physics problems, with most of these instruments
probing the physics of the radiation domination era. As we
already stated, in the next 10--15 years particle physics,
theoretical cosmology and theoretical astrophysics will rely
heavily on gravitational-wave and NS observations. Although things
seem more or less settled in theoretical astrophysics, a recent
observation \cite{Abbott:2020khf} has cast doubt on the maximum
mass issue of NSs and, in parallel, indicated that alternative
astrophysical objects, like strange stars, may come into play in
the near future. Although it is still early, phenomenologically
speaking, for exotic stars to be discovered, this is a realistic
possibility. Even if exotic objects are not yet fully supported
phenomenologically, the observation \cite{Abbott:2020khf} implies
that it is probable to find NSs with masses in the mass-gap region
$M\sim 2.5-5 M_{\odot}$. This possibility is remarkable, and it
raises the fundamental question, inherent to the maximum mass
problem of NSs, of what the lowest mass of astrophysical black
holes is. In the context of General Relativity (GR), non-rotating
neutron stars with masses in the mass-gap region can only be
described by ultra-stiff equations of state (EoS), and it is thus
rather hard to describe them without being in conflict with the
GW170817 results.
Modified and extended gravity in its various forms
\cite{reviews1,reviews2,reviews3,reviews4,book,reviews5,reviews6,dimo}
can provide a clear-cut description of large-mass NSs
\cite{Pani:2014jra,Doneva:2013qva,Horbatsch:2015bua,Silva:2014fca,Chew:2019lsa,Blazquez-Salcedo:2020ibb,Motahar:2017blm,Oikonomou:2021iid,Odintsov:2021nqa,Odintsov:2021qbq},
see also Refs.
\cite{Astashenok:2020qds,Astashenok:2021peo,Astashenok:2021xpm}
for recent descriptions of the GW190814 event, and thus serves as
a cutting-edge candidate description of nature in regimes where GR
needs to be supplemented by an Occam's-razor-compatible theory.
Motivated by the fundamental and inherently related questions,
which is the maximum mass of NSs and what is the lowest mass of
astrophysical black holes, in this work we shall approach these
issues by calculating the maximum baryonic mass of NSs in the
context of $f(R)$ gravity. We shall focus our analysis on one of
the most popular to date models of $f(R)$ gravity, the $R^2$
model, and we shall use a large variety of EoSs for completeness.
Our approach will yield an indirect way to answer the question
what is the lowest limit of astrophysical black holes, since it is
known that NSs with baryonic masses larger than the maximum static
limit will eventually collapse into black holes. Thus by knowing
the maximum baryonic mass, one may easily calculate the universal
gravitational mass-baryonic mass relation (see Ref.
\cite{Gao:2019vby} for the GR case) for the corresponding $R^2$
gravity theory, and can eventually find the gravitational mass of
the NSs corresponding to the maximum baryon mass. Having the
maximum gravitational mass available, one answers both questions
discussed in this paragraph, since NSs with baryonic masses larger
than the maximum allowed, will collapse into black holes. Thus our
work paves the way towards answering the two fundamental questions
related to the mysterious mass-gap region. The focus in this
work is to find the maximum baryon masses of $R^2$ gravity NSs
for various EoSs, and this is a first step toward revealing the
mass-gap region.
\section{Maximum Baryon Masses in $f(R)$ Gravity}
Let us calculate numerically the maximum
baryonic mass of static NSs in the context of $f(R)$ gravity and
specifically for the $R^2$ model in the Jordan frame. We shall use
several different phenomenological EoSs and our main aim is to
pave the way towards answering the question what is the maximum
gravitational mass that neutron stars can have. At the same time,
if this question is answered, one may also have a hint on the
question which is the lowest mass that astrophysical black holes
can have. Before getting to the details of our analysis, we
provide here an overview of the treatment of spherically symmetric
compact objects in the context of Jordan frame $f(R)$ gravity, and
the Tolman-Oppenheimer-Volkoff (TOV) equations.
The $f(R)$ gravity action in the Jordan frame is the following,
\begin{equation}\label{action}
{\cal A}=\frac{c^4}{16\pi G}\int d^4x \sqrt{-g}\left[f(R) + {\cal
L}_{{\rm matter}}\right]\,,
\end{equation}
with $g$ denoting the metric tensor determinant and ${\cal L}_{\rm
matter}$ denotes the Lagrangian of the perfect matter fluids that
are present. Upon variation of the action (\ref{action}) with
respect to the metric tensor $g_{\mu\nu}$, the field equations are
obtained \cite{reviews4},
\begin{equation}
\frac{df(R)}{d R}R_{\mu\nu}-\frac{1}{2}f(R)
g_{\mu\nu}-\left[\nabla_{\mu} \nabla_{\nu} - g_{\mu\nu}
\Box\right]\frac{df(R)}{dR}=\frac{8\pi G}{c^{4}} T_{\mu \nu },
\label{field_eq}
\end{equation}
for a general metric $g_{\mu \nu}$, where
$\displaystyle{T_{\mu\nu}=
\frac{-2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}{\cal
L}_m\right)}{\delta g^{\mu\nu}}}$ stands for the energy-momentum
tensor of the perfect matter fluids present. We shall consider
static NSs, which are described by the following
spherically symmetric metric,
\begin{equation}
ds^2= e^{2\psi}c^2 dt^2 -e^{2\lambda}dr^2 -r^2 (d\theta^2 +\sin^2\theta d\phi^2),
\label{metric}
\end{equation}
where $\psi$ and $\lambda$ are arbitrary functions with radial
dependence only. The energy momentum tensor for the perfect matter
fluid describing the NS is,
$T_{\mu\nu}=\mbox{diag}(e^{2\psi}\rho c^{2}, e^{2\lambda}p, r^2 p,
r^{2}p\sin^{2}\theta)$ where $\rho$ denotes the energy-matter density and
$p$ stands for the pressure \cite{weinberg}. By using the
contracted Bianchi identities, one can obtain the equations for
the stellar object, by also implementing the hydrostatic
equilibrium condition,
\begin{equation} \nabla^{\mu}T_{\mu\nu}=0\,,
\label{bianchi}
\end{equation}
which, in turn, yields the Euler conservation equation,
\begin{equation}\label{hydro}
\frac{dp}{dr}=-(\rho
+p)\frac{d\psi}{dr}\,.
\end{equation}
Upon combining the metric \eqref{metric} and the field Eqs.
(\ref{field_eq}), we obtain the equations governing the behavior
of the functions $\lambda$ and $\psi$ inside and outside the
compact object, which are \cite{capquark},
\begin{eqnarray}
\label{dlambda_dr} \frac{d\lambda}{dr}&=&\frac{ e^{2 \lambda
}[r^2(16 \pi \rho + f(R))-f'(R)(r^2 R+2)]+2R_{r}^2 f'''(R)r^2+2r
f''(R)[r R_{r,r} +2R_{r}]+2 f'(R)}{2r \left[2 f'(R)+r R_{r}
f''(R)\right]},
\end{eqnarray}
\begin{eqnarray}\label{psi1}
\frac{d\psi}{dr}&=&\frac{ e^{2 \lambda }[r^2(16 \pi p -f(R))+
f'(R)(r^2 R+2)]-2(2rf''(R)R_{r}+ f'(R))}{2r \left[2 f'(R)+r R_{r}
f''(R)\right]}\, ,
\end{eqnarray}
with the prime in Eqs. (\ref{dlambda_dr}) and (\ref{psi1})
denoting differentiation with respect to the function $R(r)$, that
is $f'(R)=\frac{d f}{d R}$. The above set of differential
equations constitute the $f(R)$ gravity TOV equations, and by
using $f(R)=R$ one obtains the standard TOV equations of GR
\cite{rezzollazan,landaufluid}. In addition to the above TOV
equations, for $f(R)$ gravity, the TOV equations are also
supplemented by the following differential equation,
\begin{equation}\label{TOVR}
\frac{d^2R}{dr^2}=R_{r}\left(\lambda_{r}+\frac{1}{r}\right)+\frac{f'(R)}{f''(R)}\left[\frac{1}{r}\left(3\psi_{r}-\lambda_{r}+\frac{2}{r}\right)-
e^{2 \lambda }\left(\frac{R}{2} + \frac{2}{r^2}\right)\right]-
\frac{R_{r}^2f'''(R)}{f''(R)},
\end{equation}
which is obtained from the trace of Eqs.\eqref{field_eq} by
replacing the metric \eqref{metric}. The differential Eq.
(\ref{TOVR}) basically expresses the fact that the Ricci scalar
dynamically evolves in the context of $f(R)$ gravity, as the
radial coordinate $r$ changes.
Having presented the TOV equations, the focus is now on
solving them numerically, namely Eqs. (\ref{hydro}),
(\ref{dlambda_dr}) and (\ref{psi1}) together with (\ref{TOVR}),
for the $R^2$ model,
\begin{equation}\label{frdef}
f(R)=R+\alpha R^2\, ,
\end{equation}
where the parameter $\alpha$ is expressed in units of
$r_g^2=4G^2M^2_\odot/c^4$, where $r_g$ is the Sun's gravitational radius.
With regard to the EoS, we shall consider five phenomenological
and one ideal limiting case EoSs, specifically: a) the APR4 which
is a $\beta$-equilibrium EoS proposed by Akmal, Pandharipande and
Ravenhall (\cite{APR4}). b) The BHF which is a microscopic EoS of
dense $\beta$-stable nuclear matter obtained using realistic
two-body and three-body nuclear interactions denoted as
$N3LO\Delta$ + $N2LO\Delta_1$ \cite{BHF} derived in the framework
of chiral perturbation theory and including the $\Delta$(1232)
isobar intermediate state. This EoS has been derived using the
Brueckner-Bethe-Goldstone quantum many-body theory in the
Brueckner-Hartree-Fock approximation. c) The GM1 EoS, which is the
classical relativistic mean field parametrization GM1 \cite{GM}
for cold neutron star matter in $\beta$-equilibrium containing
nucleons and electrons. d) The QHC18, which is a phenomenological
unified EoS proposed in \cite{QHC18} and describes the crust,
nuclear liquid, hadron-quark crossover, and quark matter domains.
e) The SLy \cite{Douchin:2001sv}, which is a well known and
phenomenologically successful EoS. f) Finally, the limiting case
ideal EoS, called the causal limit EoS, in which case,
$$
P(\rho) = P_{u}(\rho_u) + (\rho-\rho_u)v_s^{2}.
$$
where $P_{u}$ and $\rho_u$ correspond to the pressure and density
of the well-known low-density segment of the EoS at $\rho_u\approx
\rho_0$, with $\rho_0$ denoting the saturation density. We shall
assume that the low-density EoS is the SLy EoS and consider the case
when $v_s^{2}=c^2/3$. It is conceivable that the causal EoS is an
ideal limit, thus the resulting baryonic mass for static NSs
obtained with this EoS will serve as a true upper bound
for the baryonic masses of NSs in $R^2$ gravity.
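As a minimal illustration of how this limiting EoS may be coded (a sketch only, not the EoS tables used in our numerical analysis; units with $c=1$ are assumed, and the low-density branch is left as a hypothetical placeholder callable, e.g.\ an interpolant of the SLy table):
\begin{verbatim}
import numpy as np

def causal_limit_eos(rho, rho_u, P_u, low_density_P, vs2=1.0 / 3.0):
    """Piecewise EoS: a low-density EoS below rho_u (placeholder callable
    standing in for SLy, must accept array input) and the causal-limit form
    P = P_u + (rho - rho_u) * vs2 above it.  Units with c = 1, so vs2 is the
    squared sound speed in units of c^2."""
    rho = np.asarray(rho, dtype=float)
    return np.where(rho < rho_u,
                    low_density_P(rho),
                    P_u + (rho - rho_u) * vs2)
\end{verbatim}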
Now let us get into the core of our analysis. The key point is
that NSs with baryon masses larger than $M_B^{max}$, that is,
\begin{equation}\label{mbax}
M_B>M_B^{max}\, ,
\end{equation}
will inevitably collapse to a black hole. Thus, in principle,
knowing the maximum baryonic mass for a specific EoS and a
specific theory can yield a first, albeit indirect, hint on where
to find black holes and on what the upper limit of NS masses
certainly is. Let us explain these two arguments in some
detail, considering firstly the black hole argument, although the
two are inherently related. If one knows the maximum baryonic mass
for a specific EoS and theory, it can provide a hint on the
lowest mass of astrophysical black holes. Basically, we can
indirectly know where to start seeking for the lower mass limit of
astrophysical black holes, since $M_B^{max}$ is an ideal upper
limit that the gravitational masses of NSs cannot reach for sure.
Thus the lower limit of astrophysical black holes could be
$M_B^{max}$ because it is not possible to find NSs with such large
gravitational masses. On the other hand, and in the same line of
research, the gravitational masses of NSs can never be as large as
the maximum baryonic masses, thus it is expected to find them to
quite lower limiting values. Thus, in the NSs case, we know where
not to find NSs and seek them in quite lower values. In both
cases, the analysis would be perfectly supplemented by knowing for
a large number of EoSs and a large number of theories, the
theoretical universal relation between the baryonic and
gravitational NSs masses, as in \cite{Gao:2019vby}. However this
task
is quite complicated and it will be addressed in more detail
in a future focused work. In this work, we aim to find hints on
where to start finding the lowest limit of astrophysical black
holes, and also to discover where not to find NSs, thus
providing another theoretical upper bound on static NS masses.
This work could be considered as a theoretical complement of our
work on causal EoSs developed in Ref. \cite{Astashenok:2021peo}.
Remarkably, the two analyses seem to point to a quite interesting
result and may lead to an interesting conjecture.
Let us proceed by briefly recalling how to calculate the baryonic
mass for a static NS. The central values of the pressure and of
the mass of the NS are,
\begin{equation}\label{pressuremassatthecenter}
P(0) = P_c,\,\,\,m(0) = 0\, ,
\end{equation}
and near the center, the pressure and the mass of the NS behave
as,
\begin{equation}\label{nearcenterpressure}
P(r) \simeq P_c -(2\pi)(\epsilon_c+P_c) \left(
P_c+\frac{1}{3}\epsilon_c \right) r^2 + O(r^4)\, ,
\end{equation}
\begin{equation}\label{nearthecentermass}
m(r) \simeq \frac{4}{3}\pi\epsilon_cr^3 + O(r^4)\, .
\end{equation}
Considering the spherically symmetric spacetime (\ref{metric}),
the gravitational mass of the NS is,
\begin{equation}\label{gravitationalmass}
M= \int_0^R 4\pi r^2\epsilon dr\, ,
\end{equation}
or equivalently,
\begin{equation}\label{gravimassalternative}
M= \int_0^R 4\pi r^2 e^{(\psi+\lambda)/2}(\epsilon +3P)dr\, ,
\end{equation}
while the baryon mass of the static NS is,
\begin{equation}\label{baryonmass}
M_B = \int_0^R 4\pi r^2 e^{\lambda/2}\rho dr\, .
\end{equation}
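For orientation, the following Python sketch integrates the GR limit of the above system ($f(R)=R$, i.e.\ $\alpha=0$) for a simple $\Gamma=2$ polytrope and extracts the maximum gravitational and baryonic masses over a scan of central densities. It is not the $R^2$ solver used to produce Table~\ref{table1}; the EoS, the units ($G=c=M_\odot=1$) and the tolerances are illustrative assumptions. The near-centre expansions (\ref{nearcenterpressure}) and (\ref{nearthecentermass}) provide the initial conditions.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Geometric units G = c = M_sun = 1; toy polytropic EoS P = K rho^Gamma,
# with rho the rest-mass density and eps the total energy density.
K, Gamma = 100.0, 2.0

def tov_rhs(r, y):
    P, m, mb = y
    if P <= 0.0:
        return [0.0, 0.0, 0.0]
    rho = (P / K) ** (1.0 / Gamma)
    eps = rho + P / (Gamma - 1.0)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    # proper-volume factor 1/sqrt(1 - 2m/r) in the GR limit
    dmbdr = 4.0 * np.pi * r**2 * rho / np.sqrt(1.0 - 2.0 * m / r)
    return [dPdr, dmdr, dmbdr]

def surface(r, y):            # stop when the pressure drops to (almost) zero
    return y[0] - 1e-14
surface.terminal = True

def star(rho_c):
    P_c = K * rho_c**Gamma
    eps_c = rho_c + P_c / (Gamma - 1.0)
    r0 = 1e-5
    y0 = [P_c - 2.0 * np.pi * (eps_c + P_c) * (P_c + eps_c / 3.0) * r0**2,
          4.0 / 3.0 * np.pi * eps_c * r0**3,
          4.0 / 3.0 * np.pi * rho_c * r0**3]
    sol = solve_ivp(tov_rhs, (r0, 100.0), y0, events=surface,
                    rtol=1e-8, atol=1e-12)
    return sol.y[1, -1], sol.y[2, -1]   # gravitational and baryonic mass

masses = np.array([star(rc) for rc in np.logspace(-3.5, -2.0, 40)])
i = masses[:, 0].argmax()
print(f"M_max = {masses[i, 0]:.3f} M_sun, M_B_max = {masses[i, 1]:.3f} M_sun")
\end{verbatim}
Replacing the polytrope by the tabulated EoSs and adding the $R^2$ corrections of Eqs. (\ref{dlambda_dr})--(\ref{TOVR}) leads to the kind of computation summarised in Table~\ref{table1}.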
For the numerical calculation, we use a length scale of $M_\odot
=1$, and the results of our numerical analysis are presented in
Table \ref{table1}. Specifically, in Table \ref{table1}, we
present the maximum baryonic mass, and the maximum gravitational
mass for all the EoS we mentioned earlier, for various values of
the parameter $\alpha$ which is the coupling of the $R^2$ term in
the gravitational action. With regard to the values of the free
parameter $\alpha$, it is expressed in units
$r_g^2=4G^2M^2_\odot/c^4$, see below Eq. (\ref{frdef}). Making the
correspondence with the standard cosmological $R^2$ model, the
small values of $\alpha$ are more compatible with the cosmological
scenarios. However, at this point one must be cautious since the
cosmological $R^2$ model constraints are usually imposed on the
Einstein frame theory. Specifically the constraints on the
parameter $\alpha$ come from the amplitude of the scalar curvature
primordial perturbations, thus yielding a small $\alpha$. If
however the theory is considered in the Jordan frame directly, the
expression of the amplitude of the scalar perturbations is
different compared to the Einstein frame expression, resulting in
different constraints on the parameter $\alpha$. This is why we
chose the parameter $\alpha$ to vary in the range $0<\alpha<10$,
in order to investigate the physics of NSs for a wide range of
values in order to cover the constraints from both frames.
\begin{table}[h!]
\begin{center}
\caption{Maximum baryonic mass $M_B^{max}$ and maximum gravitational mass $M^{max}$ (in units of $M_{\odot}$) for static NSs in $R^2$ gravity, for the EoSs considered and various values of the parameter $\alpha$.}
\label{table1}
\begin{tabular}{|r|r|r|r||r|r|r|r|}
\hline
\textbf{EoS} & $\alpha$ & $M_B^{max}$ & $M^{max}$ & \textbf{EoS} & $\alpha$ & $M_B^{max}$ & $M^{max}$
\\ \hline
& 0 & 2.65 & 2.17 & & 0 & 2.44 & 2.04\\
APR & 0.25 & 2.67 & 2.18 & QHC18 & 0.25 & 2.48 & 2.07 \\
& 2.5 & 2.76 & 2.24 & & 2.5 & 2.61 & 2.15\\
& 10 & 2.85 & 2.30& & 10 & 2.70 & 2.22\\
\hline
& 0 & 2.47 & 2.08 & &0 & 2.44 & 2.05 \\
BHF & 0.25 & 2.49 & 2.09 & SLy & 0.25 & 2.46 & 2.06 \\
& 2.5 & 2.58 & 2.15 & & 2.5 & 2.54 & 2.11 \\
& 10 & 2.65 & 2.21 & &10 & 2.63 & 2.17 \\
\hline
& 0 & 2.84 & 2.38 & & 0 & 2.98 & 2.52 \\
GM1 & 0.25 & 2.87 & 2.40 & SLy + & 0.25 & 3.01 & 2.54 \\
& 2.5 & 3.01 & 2.49 & causal & 2.5 & 3.13 & 2.63 \\
& 10 & 3.11 & 2.56 & $v_s^2=c^2/3$ & 10 & 3.24 & 2.74 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
Let us discuss the results presented in Table \ref{table1} in some
detail. As a general comment for all the EoSs, the baryonic mass
is significantly larger than the maximum gravitational mass, for
all the values of the parameter $\alpha$, and this is a general
expected result. With regard to the APR EoS, the baryonic mass takes
values in the range $2.65$$M_{\odot}$-$2.85$$M_{\odot}$, and the
maximum gravitational mass in the range
$2.17$$M_{\odot}$-$2.30$$M_{\odot}$. Thus, for the APR EoS, it is
apparent that NSs with baryonic masses larger than
$2.85$$M_{\odot}$ at most, will collapse into black holes. The value
$2.85$$M_{\odot}$ is indicative of the maximum limit of the baryon
mass in the context of $R^2$ gravity and the corresponding
gravitational mass is $2.30$$M_{\odot}$, which means that
astrophysical black holes will be larger than $2.30$$M_{\odot}$ in
the case of $R^2$ gravity and for the APR EoS. With regard to the
BHF EoS, the baryonic mass takes values in the range
$2.47$$M_{\odot}$-$2.65$$M_{\odot}$, and the maximum gravitational
mass in the range $2.08$$M_{\odot}$-$2.21$$M_{\odot}$. Thus, for
the BHF EoS, it is apparent that NSs with baryonic masses larger
than $2.65$$M_{\odot}$ at most, will collapse to black holes. The
corresponding gravitational mass is $2.21$$M_{\odot}$, which means
that astrophysical black holes will be larger than
$2.21$$M_{\odot}$ in the case of $R^2$ gravity and for the BHF
EoS. With regard to the GM1 EoS, the baryonic mass takes values in
the range $2.84$$M_{\odot}$-$3.11$$M_{\odot}$, and the maximum
gravitational mass in the range
$2.38$$M_{\odot}$-$2.56$$M_{\odot}$. Thus, for the GM1 EoS, it is
apparent that NSs with baryonic masses larger than
$3.11$$M_{\odot}$ at most, will collapse to black holes. The
corresponding gravitational mass is $2.56$$M_{\odot}$, which means
that astrophysical black holes will be larger than
$2.56$$M_{\odot}$ in the case of $R^2$ gravity and for the GM1
EoS. With regard to the QHC18 EoS, the baryonic mass takes values
in the range $2.44$$M_{\odot}$-$2.70$$M_{\odot}$, and the maximum
gravitational mass in the range
$2.04$$M_{\odot}$-$2.22$$M_{\odot}$. Thus, for the QHC18 EoS, it is
apparent that NSs with baryonic masses larger than
$2.7$$M_{\odot}$ at most, will collapse to black holes. The
corresponding gravitational mass is $2.22$$M_{\odot}$, which means
that astrophysical black holes will be larger than
$2.22$$M_{\odot}$ in the case of $R^2$ gravity and for the QHC18
EoS. With regard to the SLy EoS, the baryonic mass takes values in
the range $2.44$$M_{\odot}$-$2.63$$M_{\odot}$, and the maximum
gravitational mass in the range
$2.05$$M_{\odot}$-$2.17$$M_{\odot}$. Thus, for the SLy EoS, it is
apparent that NSs with baryonic masses larger than
$2.63$$M_{\odot}$ at most, will collapse to black holes. The
corresponding gravitational mass is $2.17$$M_{\odot}$, which means
that astrophysical black holes will be larger than
$2.17$$M_{\odot}$ in the case of $R^2$ gravity and for the SLy
EoS. Finally, for the theoretical ideal EoS, namely the causal
EoS, the baryonic mass takes values in the range
$2.98$$M_{\odot}$-$3.24$$M_{\odot}$, and the maximum gravitational
mass in the range $2.52$$M_{\odot}$-$2.74$$M_{\odot}$. Thus, for
the causal EoS, it is apparent that NSs with baryonic masses
larger than $3.24$$M_{\odot}$ at most, will collapse to black
holes. The corresponding gravitational mass is $2.74$$M_{\odot}$,
which means that astrophysical black holes will be larger than
$2.74$$M_{\odot}$ in the case of $R^2$ gravity and for the causal
EoS.
Our results hold true for $R^2$ gravity and for the specific
EoSs which we studied, and are thus admittedly model
dependent. However, these data indicate a tendency for NS
masses not to exceed 3 solar masses, which, combined with the
result of Ref. \cite{Astashenok:2021peo}, further supports the GR
claim that NSs with masses larger than 3 solar masses should not
be found. Also, astrophysical black holes can be found
or created when NSs with baryon masses larger than the
corresponding maximum baryon masses collapse to black holes. In
general, astrophysical black holes can be found even in the lower
limit of the mass gap region, specifically at
$2.5$$M_{\odot}$-$3$$M_{\odot}$. Our claims are, however, somewhat
model dependent, and it is
vital to find a universal relation between the maximum baryon
masses and the corresponding maximum gravitational masses in order
to be more accurate. The universal baryon-gravitational masses
relation for $R^2$ gravity can be obtained using the techniques in
Ref. \cite{Gao:2019vby}. Work along this research line is in
progress. This future study will yield a more robust
result and may further indicate, in a refined way, where to find the
maximum masses of NSs and where the corresponding lower masses of
astrophysical black holes.
\section{Concluding Remarks}
In this work we focused on the calculation of the maximum baryonic
mass for static NSs in the context of extended gravity.
We specified our analysis for one of the most important extended
gravity candidate theory, namely $f(R)$ gravity, and we chose one
of the most important models of $f(R)$ gravity, namely the $R^2$
model. We derived the TOV equations for $f(R)$ gravity and
numerically integrated these for the following EoSs, the APR4, the
BHF, the GM1, the QHC18, the SLy, and finally, the limiting case
ideal EoS, called the causal limit EoS. The calculations of the
baryonic mass yielded quite interesting results, with a general
characteristic being that the maximum baryonic mass was higher
than the maximum gravitational mass for the same set of the model's
parameters and the same EoS. The latter characteristic was
expected. However the results tend to indicate some interesting
features for static neutron stars in the context of $R^2$ gravity.
Specifically, the upper limit of all maximum baryon masses for all
the EoSs we studied, and for all the values of the model free
parameters, seems to be of the order of $\sim 3$$M_{\odot}$. This
feature clearly shows that the static NSs maximum
gravitational mass is certainly significantly lower than this
limit, thus the maximum gravitational mass of static NSs
is expected to be found somewhere in the lower limits of the mass
range $2.5$$M_{\odot}$-3$M_{\odot}$. At the same time, one may have
hints on where to find the lower mass limit of astrophysical black
holes, since NSs with baryonic masses larger than the
maximum baryonic mass, for the same range of values of the model's
parameters and for the same EoS, will collapse into black holes. Thus one may say that the lower
masses of astrophysical black holes may be found in the same range
$2.5$$M_{\odot}$-3$M_{\odot}$, which is basically the range where
to find the maximum gravitational masses of static NSs.
However, our analysis is model dependent and also strongly depends
on the underlying EoS. Thus, what is needed is to find a universal
and EoS-independent relation for the baryonic and gravitational
masses of static NSs, at least for the $R^2$ model at hand. The
motivation for this is strong, since we may then reach more robust
answers to the questions of what the maximum NS masses are and
what the minimum astrophysical black hole masses are. This issue
will be thoroughly addressed in the near future. As a final
comment, let us note that it is remarkable that the magic number
of 3 solar masses seems to appear in the context of maximum
baryonic masses. Basically for the baryonic masses, the
$3$$M_{\odot}$ limit is a mass limit that NSs will never reach for
sure. Hence this is an ideal number, not a true limit. On the same
line of research, the causal EoS maximum gravitational mass for
NSs studied in \cite{Astashenok:2021peo}, also involves the
``magic'' number of $3$$M_{\odot}$. Thus the GR claim that neutron
stars cannot have gravitational masses larger than $3$$M_{\odot}$
is also verified from the perspective of baryonic masses
calculations. This conclusion is expected to hold true even if the
universal and EoS-independent relation between the baryonic and
gravitational masses for static NSs is found. However, in order to
be accurate, one must perform similar calculations for spherically
symmetric spacetimes in other modified gravities, like $f(T)$
gravity \cite{Ruggiero:2016iaq,Ren:2021uqb} or even
Einstein-Gauss-Bonnet gravity \cite{Charmousis:2021npl}.
Finally, let us discuss an interesting question, specifically,
whether the analysis we performed would help to break the
degeneracies between the NS EoS and modified gravity forms. Indeed
this would be the ideal scenario and it is generally not easy to
answer. One general answer can be obtained if the following
occurs: a future observation yields a large mass for a static or
nearly static NS which cannot be explained by a stiff EoS, since
the latter is constrained by the GW170817 event. This seems to
be the case in the GW190814 event, but one has to be certain about
the NS observation. Thus a future kilonova event, with the
characteristics we described, may indeed verify whether modified
gravity controls eventually large mass NSs and their upper limits.
Our analysis however showed that it is highly unlikely to find NSs
beyond approximately 3 solar masses, and this is an upper bound on
NS masses. However, we obtained this result in a model- and EoS-dependent
way, so we need to extend our analysis in a more
universal way. Work is in progress toward this research line.
\section*{Acknowledgments}
This work was supported by MINECO (Spain), project
PID2019-104397GB-I00. This work was supported by Ministry of
Education and Science (Russia), project 075-02-2021-1748 (AVA).
SC acknowledges the support of INFN, {\it Sezione di Napoli, iniziative specifiche} MOONLIGHT2 and QGSKY.
|
\section{INTRODUCTION}
There has been a surge of interest in the cosmic microwave
background radiation (CMB)
since the first
anisotropies of assumed cosmological origin were
detected by the COBE DMR experiment (Smoot {{\frenchspacing\it et al.}} 1992).
On the experimental front, many new experiments have been
carried out, and more are planned or proposed for
the near future (see White {{\frenchspacing\it et al.}} 1994, for a review).
On the theoretical front, considerable progress
has been made in understanding
how the CMB power spectrum $C_\ell$ depends on various
cosmological model parameters (see
Bond {{\frenchspacing\it et al.}} 1994, Hu \& Sugiyama 1995,
and references therein for recent reviews of
analytical and quantitative aspects of this problem).
It is now fairly clear that an accurate measurement
of the angular power spectrum $C_\ell$ to multipoles
$\ell \sim 10^3$ could provide accurate constraints on many of
the standard cosmological parameters ($\Omega$, $\Omega_b$,
$\Lambda$, spectral index $n$, {{\frenchspacing\it etc.}}),
thus becoming the definitive arbiter between various
flavours of the cold dark matter (CDM) cosmogony and
other theories of the origin of structure
in the Universe.
To accurately measure the power spectrum and reach this goal,
a number of hurdles must be overcome:
\begin{itemize}
\item Technical problems
\item Incomplete sky-coverage
\item Foregrounds
\end{itemize}
There is of course a wide variety of technical challenges that must be
tackled to attain high resolution, low noise, well-calibrated
temperature data over a wide range of frequencies and over most of the
sky. However, thanks to the rapid advance in detector technology over
the last two decades and the possibilities of ground based
interferometers, long-duration balloon flights and space-borne
experiments, there is a real prospect that high sensitivity maps of
the CMB will be obtained within the next decade.
The second hurdle refers to the fact that we are unlikely to
measure the primordial CMB sky accurately behind the Galactic plane.
As is well known, the resulting incomplete sky coverage makes it
impossible to exactly recover the spherical harmonic coefficients
that would give direct estimates of the angular power spectrum
$C_\ell$.
However, a number of methods for efficiently constraining models, using
only partial sky coverage, have now been developed
(G\'orski 1994; Bond 1995; Bunn \& Sugiyama 1995; Tegmark \& Bunn 1995),
and it has recently been shown (Tegmark 1995, hereafter T96)
that even the individual $C_\ell$-coefficients
can be accurately estimated for all but the
very lowest multipoles such as $\ell=2$, by
expanding the data in
appropriately chosen basis functions.
Rather, the most difficult of the above-mentioned hurdles
appears to be the third,
which is the topic of the present paper.
The frequency-dependence of the various
foregrounds has been extensively studied, in both
``clean" and ``dirty" regions
of the sky (see {{\frenchspacing\it e.g.}} Brandt {{\frenchspacing\it et al.}} 1993,
Toffolatti {{\frenchspacing\it et al.}} 1994, for recent reviews).
However, these properties alone
provide a description of the foregrounds
that is somewhat too crude to assess the extent
to which they can be separated from the underlying CMB signal,
since the foreground fluctuations depend on the multipole
moment $\ell$ as well (see
Bouchet {{\frenchspacing\it et al.}}, 1995, for simulations).
Most published plots comparing different CMB
experiments tend to show $\ell$ on the
horizontal axis and an amplitude ($C_\ell$ or an {{\frenchspacing r.m.s.}}
$\Delta T/T$) on the vertical axis, whereas most plots comparing
different foregrounds show amplitude plotted against
frequency $\nu$.
Since the fluctuations in the latter tend to depend
strongly on both $\ell$ and $\nu$, {{\frenchspacing\it i.e.}}, on both spatial
and temporal frequency,
one obtains a more accurate picture by combining both of these
pieces of information and working in a
two-dimensional plane as in Figures 1-6. We will indeed
find that the $\ell$--$\nu$
plane arises naturally in the optimized subtraction
scheme that we present.
\fig{ExperimentsFig} shows roughly the regions in this plane
probed by various CMB experiments.
Each rectangle corresponds to one experiment. Its
extent in the $\ell$-direction shows the customary
$1\sigma$ width of the experimental window function
(see {\it e.g.} White \& Srednicki 1995),
whereas the vertical extent shows the frequency range
that is covered. For single-channel experiments, we
plot the quoted bandwidth, whereas for multi-channel
experiments, the box has simply been plotted in
the range between the lowest and highest frequency channel.
For a more detailed description of these experiments, see
Scott, Silk \& White (1995) and references therein.
Figures 2 through 5, which will be described in
Section~\ref{ForegroundsSec},
show the estimated fluctuations of the CMB
and various foregrounds in the same plane, and comparing these
figures with \fig{ExperimentsFig} as in \fig{EverythingFig},
it is easy to understand which experiments are the most
affected by the various foregrounds.
Moreover, as we will see, familiarity with the geography
of this plane provides an intuitive understanding of
the advantages and shortcomings of
different methods of foreground subtraction.
A number of CMB satellite missions are currently under consideration
by various funding agencies, which would offer the excellent
sensitivity, resolution and frequency coverage that is needed for
accurate foreground subtraction. Throughout this paper, we will use
the proposed European Space Agency COBRAS/SAMBA mission
(Mandolesi {{\frenchspacing\it et al.}} 1995) as an illustration of what the next
generation of space-borne CMB experiments may be able to
achieve. The specifications of
competing satellite proposals are similar, though with
a more restricted frequency range.
The rest of this paper is organized as follows. After establishing
our basic notation in Section~\ref{NotationSec}, we derive the optimized
multi-frequency subtraction method in Section~\ref{WienerSec}.
In Section~\ref{ForegroundsSec}, we estimate the angular power spectra
of the various foreground contaminants. In Section~\ref{ResultsSec},
we use these estimates to assess the effectiveness of the subtraction
technique and to show how accurately the CMB fluctuations could be
recovered from high quality data, such as might be obtained from the
proposed COBRAS/SAMBA satellite.
\section{NOTATION}
\label{NotationSec}
Let $B(\widehat{\bf r},\nu)$ denote the total sky brightness at frequency
$\nu$ in the direction of the unit vector $\widehat{\bf r}$.
Since we know that $B$ is a sum of contributions of physically
distinct origins, we will write it as
\beq{BsumEq}
B(\widehat{\bf r},\nu) = \sum_i B_i(\widehat{\bf r},\nu).
\end{equation}
In the microwave part of the spectrum, the most important
non-CMB components
are synchrotron radiation $B_{synch}$, free-free emission
$B_{ff}$, dust emission $B_{dust}$, and radiation from point sources
$B_{ps}$ (both radio sources and infrared emission from
galaxies). We will separate the CMB-contribution into two
terms: an isotropic blackbody $B_0$ and the fluctuations
$B_{cmb}$ around it. The former, which is of course independent of
$\widehat{\bf r}$, is given by
\beq{B0eq}
B_0(\nu) = {2h\over c^2}{\nu^3\over e^x-1} =
2{(kT_0)^3\over (hc)^2}\left({x^3\over e^x-1}\right)
\approx 270.2\,\MJy/\sr \left({x^3\over e^x-1}\right),
\end{equation}
where $x\equiv h\nu/kT_0\approx \nu/56.8{\rm GHz}$ and $T_0\approx 2.726{\rm K}$
(Mather {{\frenchspacing\it et al.}} 1994).
If the actual CMB temperature across the sky is
$T(\widehat{\bf r}) = T_0 +\delta T(\widehat{\bf r})$, then
to an excellent approximation,
$B_{cmb}(\widehat{\bf r},\nu) = (\partial B_0(\nu)/\partial T_0)\delta T(\widehat{\bf r})$.
(The quadratic correction will be down by a factor of
$\delta T/T\approx 10^{-5}$.)
This conversion factor from brightness to temperature is
\beq{dBdTeq}
{\partial B_0\over\partial T_0} =
{k\over 2} \left({k T_0\over hc}\right)^2
\left({x^2\over\sinh x/2}\right)^2
\approx
\left({24.8{\rm MJy}/{\rm sr}\over{\rm K}}\right)
\left({x^2\over\sinh x/2}\right)^2.
\end{equation}
Since any other contaminants that we fail to take into account
will be mistaken for CMB fluctuations, it is convenient to
convert all brightness fluctuations into temperature fluctuations
in the analogous way, so we define
\beq{ConversionEq1}
\delta T_i(\widehat{\bf r},\nu) \equiv
{B_i(\widehat{\bf r},\nu)\over
\partial B_0/\partial T_0}.
\end{equation}
Note that with this definition, $\delta T_{cmb}(\widehat{\bf r},\nu)$ is independent
of $\nu$ (compare \fig{CMB_B_Fig} with \fig{CMB_T_Fig}),
whereas most of the foregrounds will exhibit a
strong frequency dependence.
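As a purely illustrative aid (not part of the original analysis), the following
short Python sketch evaluates \eq{B0eq} and \eq{dBdTeq} numerically; the
prefactors $270.2\,\MJy/\sr$ and $24.8\,\MJy/\sr$ per K quoted above can be
checked directly.
\begin{verbatim}
import numpy as np

# Physical constants (SI) and the CMB temperature
h, k, c, T0 = 6.626e-34, 1.381e-23, 2.998e8, 2.726
MJY_PER_SR = 1e-20   # 1 MJy/sr in W m^-2 Hz^-1 sr^-1

def planck_brightness(nu):
    # Isotropic CMB brightness B_0(nu) in MJy/sr, eq. (B0eq)
    x = h * nu / (k * T0)
    return 2 * (k * T0)**3 / (h * c)**2 * x**3 / np.expm1(x) / MJY_PER_SR

def dB_dT(nu):
    # Conversion factor dB_0/dT_0 in MJy/sr per K, eq. (dBdTeq)
    x = h * nu / (k * T0)
    return 0.5 * k * (k * T0 / (h * c))**2 * (x**2 / np.sinh(x / 2))**2 / MJY_PER_SR

nu = 100e9                     # 100 GHz
print(planck_brightness(nu))   # brightness in MJy/sr
print(dB_dT(nu))               # conversion factor in MJy/sr per K
\end{verbatim}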
We expand the temperature fluctuations in spherical harmonics as
usual;
\beq{MultipoleExpansionEq}
\delta T_i(\widehat{\bf r},\nu) =
\sum_{\ell=0}^{\infty}
\sum_{m=-\ell}^{\ell}
Y_{\ell m}(\widehat{\bf r}) \,a_{\l m}^{(i)}(\nu).
\end{equation}
For isotropic CMB fluctuations, we have
\beqa{CMBexpecEq1}
\expec{a_{\l m}^{cmb}(\nu)}&=&0,\\
\label{CMBexpecEq2}
\expec{a_{\l m}^{cmb}(\nu)^* a_{\l' m'}^{cmb}(\nu)}&=&
\delta_{\ell\l'}\delta_{m m'} C_\ell,
\end{eqnarray}
where $C_\ell$ is the frequency independent
{\it angular power spectrum}.
For other components, the means
$\expec{a_{\l m}^{(i)}(\nu)}$ are not necessarily equal to
zero. For instance, most of the foregrounds are
by nature non-negative ($B_i \ge 0$), so we expect the
monopole to be positive.
Also, if there are deviations from isotropy (a cosecant behavior
with Galactic latitude, for instance), we will not have
$\expec{a_{\l m}^{(i)}(\nu)^* a_{\l' m'}^{(i)}(\nu)} \propto
\delta_{\ell\l'}\delta_{m m'}$.
For such cases, we simply define
\beq{UglyCldefEq}
C_\ell^{(i)}(\nu) \equiv
{1\over 2\ell+1}\sum_{m=-\ell}^\ell \bigexpec{\left|a_{\l m}^{(i)}(\nu)\right|^2},
\end{equation}
since this is the quantity that on average will be added to the estimate of the
CMB power spectrum if the contaminant is not removed.
As we shall see further on, all contaminants that we
have investigated in this paper do become
fairly isotropic when we mask out all but the cleanest parts of the
sky; the different Fourier components do indeed decouple and
so the only important difference from CMB behavior is an additional
constant term in the monopole (which is, of course,
unmeasurable anyway).
We conclude this Section with a few comments on how to
read Figures 2 through 6.
If a random field satisfies equations\eqnum{CMBexpecEq1}
and\eqnum{CMBexpecEq2}, the addition theorem for
spherical harmonics gives the well-known result
\beq{rmsEq}
\expec{\delta T_i(\nu)^2} = \sum_{\ell=0}^{\infty}
\left({2\ell + 1\over 4\pi}\right) C_\ell^{(i)}(\nu)
\approx \int \left[{\ell(2\ell + 1)\over 4\pi}\right] C_\ell^{(i)} d(\ln\ell).
\end{equation}
This is the reason that we plot the quantity
$[\ell(2\ell + 1)C_\ell^{(i)}(\nu)/4\pi]^{1/2}$ in the
figures of the next section: the resulting {{\frenchspacing r.m.s.}} temperature
fluctuation $\delta T_i(\nu)$ is basically just the {{\frenchspacing r.m.s.}}
height of the curve times a small constant.
If we compute the {{\frenchspacing r.m.s.}} average in a multipole range
$\ell_0\leq\ell\leq\ell_1$, then
this constant is simply $[\ln(\ell_1/\ell_0)]^{1/2}$.
For the CMB temperature
fluctuations of standard CDM, for instance,
which are shown in \fig{CMB_T_Fig} normalized to COBE
(Bunn {{\frenchspacing\it et al.}} 1995),
$\expec{\delta T^2}^{1/2} \approx 108{\rm \mu K}$.
Convolving the fluctuations with the COBE beam with ${\rm FWHM}=7.08^{\circ}$
suppresses $C_\ell$ for $\ell\gg 10$, giving
$\expec{\delta T^2}^{1/2} \approx 37{\rm \mu K}$. The order of magnitude of
both of these numbers can be roughly read off by eye with the above
prescription.
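To make this reading prescription concrete, here is a minimal Python sketch
(ours, with a placeholder power spectrum) that evaluates \eq{rmsEq} for a given
$C_\ell$ array, optionally including a Gaussian beam suppression
$e^{-\theta_b^2\ell(\ell+1)}$ with $\theta_b={\rm FWHM}/\sqrt{8\ln 2}$, as
discussed in the beam smoothing subsection below.
\begin{verbatim}
import numpy as np

def rms_from_cl(cl, fwhm_deg=None):
    # r.m.s. temperature fluctuation from C_l, eq. (rmsEq);
    # the monopole and dipole are excluded from the sum.
    ell = np.arange(len(cl))
    if fwhm_deg is not None:
        theta_b = np.radians(fwhm_deg) / np.sqrt(8 * np.log(2))
        cl = cl * np.exp(-theta_b**2 * ell * (ell + 1))
    return np.sqrt(np.sum(((2 * ell + 1) * cl / (4 * np.pi))[2:]))

# Placeholder spectrum with l(l+1)C_l = const (arbitrary normalization)
ell = np.arange(1, 1000)
cl = np.concatenate(([0.0], 1.0 / (ell * (ell + 1))))
print(rms_from_cl(cl), rms_from_cl(cl, fwhm_deg=7.08))
\end{verbatim}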
For the CMB brightness fluctuations
shown in \fig{CMB_B_Fig}, everything is completely analogous.
For instance, the total fluctuations are
$\expec{B_{cmb}(\nu)^2}^{1/2} \approx 0.05\MJy/\sr$ at the
maximum sensitivity frequency $\nu\approx 218\>{\rm GHz}$,
and $0.02\MJy/\sr$ as seen with the COBE beam.
\section{MULTI-FREQUENCY FOREGROUND SUBTRACTION}
\label{WienerSec}
In this Section, we
derive the multi-frequency subtraction scheme mentioned in
the introduction. Given data in several frequency channels, the goal
is to produce the best maps corresponding to
the different physical components, where ``best" is
taken to mean having
the smallest {{\frenchspacing r.m.s.}} errors.
We first derive the method for an idealized case, and then
add the various real-world complications one by one.
Before beginning, it is instructive to compare this
approach with that of likelihood analysis.
Most published analyses of the COBE DMR sky maps
(see {\it e.g.} Tegmark \& Bunn, 1995 for a recent review) have used
likelihood techniques to constrain models with power spectra
described by one or two
free parameters, typically a spectral index and an
overall normalization.
As long as the number of model-parameters is rather small,
a useful way to deal with foreground contamination is that
described by
Dodelson \& Stebbins (1994). The basic idea is
to include in the likelihood analysis a number of
``nuisance parameters" describing the foregrounds,
and then marginalize over these parameters to obtain the
Bayesian probability distribution for the parameters of interest.
Despite its elegance, this method is of course only feasible when the number
of parameters, $n$, to be estimated is small, since the number
of grid points in the $n$-dimensional parameter space
(and hence the amount of computer time required for the analysis)
grows exponentially with $n$.
The problems addressed in this paper are how to
estimate the entire power spectrum $C_\ell$
(about $n=10^3$ parameters) and how to
reconstruct a high-resolution all-sky map (with perhaps as
many as $n\sim 10^7$ parameters (pixels)),
which is why a more direct approach than likelihood analysis
is required.
The method we present below is a type of linear filtering closely
related to Wiener filtering, but with a crucial difference.
Linear filtering techniques have recently been applied to a
range of cosmological problems. Rybicki \& Press (1992) give a detailed
discussion of the one-dimensional problem.
Lahav {{\frenchspacing\it et al.}} (1994), Fisher {{\frenchspacing\it et al.}} (1995) and Zaroubi {{\frenchspacing\it et al.}} (1995)
apply Wiener filtering to galaxy surveys.
In particular, Bunn {{\frenchspacing\it et al.}} (1994) apply Wiener filtering to the
COBE DMR maps. We will generalize this treatment to the case of multiple
frequencies and more than one physical component.
Although it is tempting to
refer to this simply as multi-frequency Wiener filtering,
we will avoid this term, as it can cause confusion for the following
reason.
As is discussed in any signal-processing text, regular
Wiener filtering modifies the power spectrum of the signal.
Specifically, the power spectrum of the filtered signal is suppressed relative
to the true one,
\beq{StandardWienerEq}
P'_s(k) = \left[{P_s(k)\over P_s(k)+P_n(k)}\right]P_s(k),
\end{equation}
where $P_s$ and $P_n$ are the power spectra of the true signal and the
contaminant, respectively.
This of course makes it useless for power-spectrum estimation.
As we shall see in
Section \ref{RenormSec}, our approach {\it does not} alter the
power spectrum of the signal (the CMB, say), and
moreover has the attractive property of
being independent of any assumptions about
the true CMB power spectrum, requiring only assumptions about
the power spectra of the foregrounds.
This is possible because more than one frequency channel is available,
which allows foreground/background separation even if the
two have identical power spectra.
The availability of multiple frequencies is
absolutely essential
to our method:
in the special
case where $m$, the number of channels, equals one,
it degenerates not to standard Wiener filtering,
but to the trivial case of doing no subtraction at all.
The method that we advocate turns out to be related to
Wiener filtering by a simple $\ell$-dependent rescaling of
the weights. It is therefore trivial to switch
between this power-conserving subtraction scheme
and an error-minimizing multi-frequency
generalization of Wiener filtering.
For pedagogical purposes, we first present the latter,
in sections~\ref{FlatSec}-\ref{NonGaussSec},
then show how it should
be rescaled in Section~\ref{RenormSec}.
\subsection{The model}
\label{ModelSec}
Let us make the approximation that we can write
\beq{ComponentSumEq}
{\delta T}({\bf r},\nu) =
\sum_{i=1}^n f_i(\nu) x_i({\bf r}),
\end{equation}
where each term corresponds to a distinct physical component
(such as CMB, dust, synchrotron radiation, free-free emission,
radio point sources, {{\frenchspacing\it etc.}}).
Thus we are simply assuming that the contribution
from each component is separable into a function
that depends only on frequency times a function
that depends only on position.
For definiteness, let us normalize all
the functions $f_i$ so that $f_i(100\,{\rm GHz})=1$, thus absorbing
the physical units into the fields $x_i$.
Suppose that we observe the sky in $m$ different frequency
channels, so that at each point ${\bf r}$ on the sky, we measure
$m$ different numbers $y_i({\bf r})$, $i=1,...,m$.
(These need of course not be different channels observed by
the same experiment --- we may for instance want to
use the IRAS 100 micron survey as an additional ``channel".)
Assuming that we know the spectra $f_i(\nu)$ of
all the components,
we can write
\beq{ModelEq}
y({\bf r}) = F x({\bf r}) + \bfvarepsilon({\bf r}).
\end{equation}
Here the vector $\bfvarepsilon({\bf r})$ corresponds to the instrumental noise
in the various channels,
and $F$ is a fixed $m\times n$ matrix, the {\it frequency response matrix},
given by
\beq{FdefEq}
F_{ij} \equiv \int_0^{\infty} w_i(\nu) f_j(\nu) d\nu,
\end{equation}
$w_i(\nu)$ being the frequency response of the $i^{th}$ channel.
\subsection{The idealized flat case}
\label{FlatSec}
We now turn to the highly idealized case
where the sky is flat rather than spherical,
there is no pixelization, no Galactic zone of avoidance, {{\frenchspacing\it etc.}}
This simple case is directly applicable only to a
small patch of sky sampled at high resolution.
In the subsequent sections, we will show how to tackle
the numerous real-world complications.
It will be seen that none of these complications
change the basic matrix prescription of the simple case
described here.
The second term in \eq{ModelEq},
the vector $\bfvarepsilon$, contains the instrumental
noise in the different frequency channels. We model this by
\beqa{NoiseDefEq1}
\expec{n_i({\bf r})}&=& 0,\\
\label{NoiseDefEq2}
\expec{\widehat{\noise}_i({\bf k})^*\widehat{\noise}_j({\bf k}')} &=&
(2\pi)^2 \delta_{ij} \delta({\bf k}'-{\bf k}) P^{(n)}_i(k),
\end{eqnarray}
thereby allowing for the possibility that the noise within
each channel may exhibit some
correlation (hats denote Fourier transforms).
Uncorrelated noise simply corresponds to the case
where the noise power spectra are given by $P^{(n)}_i({\bf k}) = \sigma_i^2$,
where the $\sigma_i$ are constants.
Analogously, we assume that the physical components satisfy
\beqa{SignalDefEq1}
\expec{x_i({\bf r})}&=& 0,\\
\label{SignalDefEq2}
\expec{\widehat{x}_i({\bf k})^*\widehat{x}_j({\bf k}')} &=&
(2\pi)^2\delta_{ij} \delta({\bf k}'-{\bf k}) P^{(s)}_i(k).
\end{eqnarray}
Note that we are not assuming the random fields to be Gaussian.
\Eq{SignalDefEq2} follows directly from \eq{SignalDefEq1} and
homogeneity (translational invariance), together with the obvious assumption
that the different components are independent.
Our goal is to make a reconstruction, denoted ${\bf x}'$,
of the physical fields ${\bf x}$ from the
observed data ${\bf y}$. We will set
out to find the best {\it linear} reconstruction.
Because of the translational invariance,
the most general linear estimate ${\bf x}'$ of ${\bf x}$ can clearly
be written as
\beq{WdefEq}
{\bf x}'({\bf r}) = \int W({\bf r}-{\bf r}'){\bf y}({\bf r}')d^2 r',
\end{equation}
for some matrix-valued function $W$ that we will refer to as the
{\it reconstruction matrix}.
We now proceed to derive the best choice of the reconstruction matrix.
Defining the {\it reconstruction errors} as
$\Delta_i({\bf r})\equiv x'_i({\bf r})-x_i({\bf r})$, a straightforward calculation
gives
\beqa{ErrorEq1}
\expec{\Delta_i({\bf r})}&=& 0,\\
\label{ErrorEq2}
\expec{\Delta_i({\bf r})^2} &=&
\int\Bigg[\sum_{j=1}^n\left|\left(\widehat{\W}({\bf k})F-I\right)_{ij}\right|^2P^{(s)}_j({\bf k})
\nonumber \\
& & + \sum_{j=1}^m\left|\widehat{\W}_{ij}({\bf k})\right|^2P^{(n)}_j({\bf k})
\Bigg]\;d^2k, \end{eqnarray}
independent of ${\bf r}$.
\Eq{ErrorEq1} merely tells us that our estimators of the fields
are unbiased. Pursuing the analogy of ordinary Wiener filtering, we
select the reconstruction matrix
that minimizes the {{\frenchspacing r.m.s.}} errors, {{\frenchspacing\it i.e.}}, minimizes
$\expec{\Delta_i({\bf r})^2}$.
We thus require $\delta \expec{\Delta_i({\bf r})^2} = 0$, where the variation is
carried out with respect to $\widehat{\W}_{ij}$, and obtain
\beq{MessySolutionEq}
\sum_{k=1}^n\left(\widehat{\W}({\bf k})F-I\right)_{ik}F_{jk}P^{(s)}_k({\bf k})
+
\sum_{k=1}^m \widehat{\W}_{ik}({\bf k})P^{(n)}_k({\bf k})\delta_{kj} = 0.
\end{equation}
Defining the noise and signal matrices $N$ and $S$ by
\beqa{GdefEq}
N_{ij}({\bf k}) &\equiv&\delta_{ij}P^{(n)}_i({\bf k}) = {\rm diag}\{P^{(n)}_i({\bf k})\},\\
S_{ij}({\bf k}) &\equiv&\delta_{ij}P^{(s)}_i({\bf k}) = {\rm diag}\{P^{(s)}_i({\bf k})\},
\end{eqnarray}
\eq{MessySolutionEq} reduces to simply $\widehat{\W}[FSF^t+N]-SF^t=0$,
which has the solution
\beq{WienerResultEq}
\widehat{\W}({\bf k}) = S({\bf k}) F^t [FS({\bf k})F^t + N({\bf k})]^{-1}.
\end{equation}
These are the appropriate formulae to use when restricting attention to a
rectangular patch of sky whose sides are small enough
($\ll$ one radian $\approx 60^\circ$) that its curvature can be neglected.
With all-sky coverage, we need to solve the corresponding subtraction
problem on a sphere instead, which is the topic of the next subsection.
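To make the matrix prescription concrete, the following Python sketch (ours;
the two-channel, two-component numbers are merely illustrative) evaluates
\eq{WienerResultEq} for a single Fourier mode.
\begin{verbatim}
import numpy as np

def reconstruction_matrix(F, Ps, Pn):
    # W(k) = S F^t [F S F^t + N]^{-1}, eq. (WienerResultEq), for one mode.
    # F  : (m, n) frequency response matrix
    # Ps : length-n signal power spectra P^(s)_i(k)
    # Pn : length-m noise power spectra P^(n)_i(k)
    S, N = np.diag(Ps), np.diag(Pn)
    return S @ F.T @ np.linalg.inv(F @ S @ F.T + N)

# Illustrative toy numbers: m = 2 channels, n = 2 components
F = np.array([[1.0, 1.0],
              [1.0, 3.0]])
W = reconstruction_matrix(F, Ps=[1.0, 0.5], Pn=[0.1, 0.1])
x_estimate = W @ np.array([1.2, 0.8])   # estimated components for this mode
\end{verbatim}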
\subsection{The idealized spherical case}
Above we saw that the optimized subtraction becomes much simpler in Fourier space,
where it is diagonal. In other words, although the linear combination
mixed the $m$ different frequencies, it never mixed
Fourier coefficients corresponding to different wave vectors ${\bf k}$.
The generalization of the derivation above to the case of fields on the
celestial sphere is trivial, and not surprisingly, the corresponding natural
basis functions in which the subtraction becomes diagonal are the spherical
harmonics. Expanding all fields in spherical harmonics as in
Section~\ref{NotationSec}, and combining the
observed $a_{\ell m}$-coefficients for the various frequency channels
in the vector
${\bf a}_{\ell m}$, we can thus write our estimate of the true
coefficients for the various components, denoted ${\bf a}'_{\ell m}$, as
\beq{SphWdefEq}
{\bf a}'_{\ell m} = W^{(\ell)} {\bf a}_{\ell m}.
\end{equation}
The analogues of equations\eqnum{GdefEq}-(\ref{WienerResultEq}),
giving the reconstruction matrix $W^{(\ell)}$, become
\beqa{SphWienerResultEq}
W^{(\ell)}&=&S^{(\ell)} F^t [FS^{(\ell)}F^t + N^{(\ell)}]^{-1},\\
N^{(\ell)}_{ij}&\equiv&\delta_{ij}C_\ell^{noise,i},\\
S^{(\ell)}_{ij}&\equiv&\delta_{ij}C_\ell^{(i)}.
\end{eqnarray}
The corresponding reconstruction errors
$\Delta {a'}^{(i)}_{\ell m}$
have
their mean square value $\Delta C_\ell^{(i)}\equiv
\expec{|\Delta {a'}^{(i)}_{\ell m}|^2}$ given by
\beq{SphErrorEq}
\Delta C_\ell^{(i)} =
\left[
\sum_{j=1}^n\left|\left(W^{(\ell)}F-I\right)_{ij}\right|^2 C_\ell^{(j)}
+
\sum_{j=1}^m\left|W^{(\ell)}_{ij}\right|^2 C_\ell^{noise,j}
\right].
\end{equation}
As we will see in subsection~\ref{PixelNoiseSubsec}, the
relevant power spectrum of the noise in channel $i$ is simply
$C_\ell^{noise,i} = 4\pi\sigma_i^2/N_i$, where
$\sigma_i$ is the {{\frenchspacing r.m.s.}} pixel noise and $N_i$ is the number of pixels.
$C_\ell^{(i)}$ denotes the power spectrum of the $i^{th}$ component at
100 GHz. In summary, the subtraction procedure is as follows:
first the maps from all frequency channels are expanded in
spherical harmonics, then the $a_{\ell m}$-coefficients of the various physical
components are estimated as above, and finally the filtered maps are obtained
by summing over these estimated coefficients, as in
\eq{MultipoleExpansionEq} with $\nu=$100 GHz.
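For illustration, a minimal Python sketch (ours) of this per-multipole
procedure: given $F$ and the power spectra at one $\ell$, it returns
$W^{(\ell)}$ of \eq{SphWienerResultEq} and the error power
$\Delta C_\ell^{(i)}$ of \eq{SphErrorEq}.
\begin{verbatim}
import numpy as np

def multipole_filter(F, cl_sig, cl_noise):
    # cl_sig[i]   = C_l^(i)        (n components, at 100 GHz)
    # cl_noise[j] = C_l^{noise,j}  (m channels)
    S, N = np.diag(cl_sig), np.diag(cl_noise)
    W = S @ F.T @ np.linalg.inv(F @ S @ F.T + N)     # eq. (SphWienerResultEq)
    V = W @ F
    dC = (np.abs(V - np.eye(len(cl_sig)))**2 @ np.asarray(cl_sig)
          + np.abs(W)**2 @ np.asarray(cl_noise))     # eq. (SphErrorEq)
    return W, dC
\end{verbatim}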
\subsection{Pixelization and incomplete sky coverage}
All real-world CMB maps are pixelized, {{\frenchspacing\it i.e.}}, smoothed
by some experimental beam and sampled only at a finite number of points.
In addition, the presence of ``dirty" regions such as the Galactic plane,
the Large Magellanic Cloud,
bright point sources, {\it etc.}, means that we may want to throw away
some of the pixels, leaving us with a map with a topology
reminiscent of a Swiss cheese.
In Section 5, we will see that the subtraction technique is quite efficient
in removing the various foregrounds from a CMB map.
The reason that it works so well is that it takes advantage of
the fact that the foregrounds have quite different power spectra, as summarized
in \fig{EverythingFig}, by
doing the subtraction multipole by multipole.
With incomplete sky coverage, one cannot do quite this well, since
it is impossible to compute the exact coefficients $a_{\ell m}$
using merely part of the sky.
Instead of the spherical harmonics, we must
expand our maps in some other set of basis functions, functions that vanish in
all the ``holes" in our maps. In contrast to the spherical harmonics,
each of these
functions will inevitably probe a range of $\ell$-values, rather than just a
single multipole, specified by a {\it window function}
as described in T96.
To exploit the fact that the various foregrounds have different
power spectra $C_\ell$, we clearly want these
window functions to be as narrow as
possible.
A prescription for how to calculate such basis functions,
taking incomplete sky coverage, pixelization, and position-dependent
noise into account, is
given in T96, and it is found that given a patch of sky whose smallest
angular dimension is $\Delta\theta$,
each basis function will probe an $\ell$-band
of width $\Delta\ell\approx 60^\circ/\Delta\theta$.
For instance, if we restrict our analysis to a
$10^\circ\times 10^\circ$ square, then
$\Delta\ell\approx 6$. This is very good news.
It means that the only
performance degradation in the subtraction technique will stem
from
the fact that it is unable to take advantage of sharp
features in the power spectra of width
$\Delta\ell\approx 6$ or smaller.
This is essentially no loss at all, since
as discussed in Section~\ref{ForegroundsSec}, we expect all the
foregrounds to have fairly smooth power spectra, without any sharp
spikes or discontinuities.
\ignore{
(Besides, who would trust an analysis that
was based on the assumption that we could accurately model such
sharp features in the foreground power spectra?)
}
Whatever set of orthonormal basis functions is chosen for the analysis,
our multi-frequency subtraction prescription
is to expand all maps in these functions and do the estimation
separately for each of the expansion coefficients.
For any one basis function
(corresponding to a set of weights $w_k$, one for each pixel $k$),
there will be such a coefficient $a_i$ for each of the $m$
frequency channels, and we combine these into the
$m$-dimensional vector ${\bf a}$.
The subtraction now decomposes into the following steps:
\begin{enumerate}
\item
Compute $\sigma^2_i$, the variance in $a_i$ that is due to pixel noise
(this variance is simply a weighted sum of
the noise variance in each pixel, the weights being $w_k^2$).
\item
Compute $\Delta_i^2$, the 100 GHz variance in $a_i$ that is due to
the $i^{th}$ physical component, as in T96 (this variance
depends only on the power spectra and the weights $w_k$).
\item
Compute the estimated coefficients for the $n$ different components,
denoted ${\bf a}'$.
\end{enumerate}
The last step is of course analogous to the cases we discussed above, {{\frenchspacing\it i.e.}},
${\bf a}' = W{\bf a}$, where
\beqa{GenWienerResultEq}
W&=&S F^t [FSF^t + N]^{-1},\\
N_{ij}&\equiv&\delta_{ij}\sigma_i^2,\\
S_{ij}&\equiv&\delta_{ij}\Delta_i^2.
\label{GenWienerResultEq4}
\end{eqnarray}
Let us illustrate this with the simple example of a small square region,
sampled in a square grid of $N\times N$ points with say $N=512$.
A convenient set of basis functions is then the discrete Fourier basis,
for which
our subtraction would reduce to the following steps:
\begin{enumerate}
\item
Fast Fourier transform (FFT) the data.
\item
Filter as above, separately for each of the $N^2$ Fourier coefficients.
\item
Perform an inverse FFT to recover the filtered maps.
\end{enumerate}
To do still better, we can use the optimal basis functions of T96.
For this simple case, they turn out to be simply the Fourier basis
functions, but
weighted by a two-dimensional cosine ``bell" $\cos x\cos y$ so that
they go smoothly to zero at the boundary of the square.
Thus the prescription becomes
\begin{enumerate}
\item Multiply by cosine bell
\item FFT
\item Filter
\item Inverse FFT
\item Divide by cosine bell
\end{enumerate}
The resulting map will be very accurate in the central parts of the square,
but the noise levels will explode towards the edges where the
cosine bell goes to zero.
Thus the way to make efficient use of this technique is to tile the
sky into a mosaic of squares with considerable overlap, so that one
can produce a low noise composite map using only the central regions of each
square.
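A minimal Python sketch (ours) of this five-step prescription for a single
square patch follows; the cosine bell and the per-mode filter are as described
above, and the power spectra are supplied by the caller. As noted above, the
output is only reliable away from the patch edges, where the bell approaches
zero.
\begin{verbatim}
import numpy as np

def filter_square_patch(maps, F, Ps_of_k, Pn_of_k):
    # maps    : (m, N, N) array, one map per frequency channel
    # F       : (m, n) frequency response matrix
    # Ps_of_k : function k -> length-n component power spectra
    # Pn_of_k : function k -> length-m noise power spectra
    m, N, _ = maps.shape
    x = np.linspace(-np.pi / 2, np.pi / 2, N)
    bell = np.cos(x)[:, None] * np.cos(x)[None, :]        # 1. cosine bell
    fmaps = np.fft.fft2(maps * bell)                      # 2. FFT each channel
    kx = np.fft.fftfreq(N)[:, None]
    ky = np.fft.fftfreq(N)[None, :]
    kk = np.sqrt(kx**2 + ky**2)
    out = np.zeros((F.shape[1], N, N), dtype=complex)
    for i in range(N):                                    # 3. filter mode by mode
        for j in range(N):
            S = np.diag(Ps_of_k(kk[i, j]))
            Nmat = np.diag(Pn_of_k(kk[i, j]))
            W = S @ F.T @ np.linalg.inv(F @ S @ F.T + Nmat)
            out[:, i, j] = W @ fmaps[:, i, j]
    out = np.fft.ifft2(out).real                          # 4. inverse FFT
    return out / bell                                     # 5. divide by the bell
\end{verbatim}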
\subsection{Non-Gaussianity and lack of translational invariance}
\label{NonGaussSec}
In the above treatment, we assumed that the statistical properties of
all random fields were translationally invariant, so that our only a
priori knowledge about them was their power spectra. In reality, this
is of course not the case. A flagrant counterexample is the Galactic
plane, where we expect much larger fluctuations in the dust,
synchrotron and free-free components than at high Galactic latitude.
In addition, most of the foregrounds exhibit non-Gaussian behavior.
We wish to emphasize that for the purposes of estimating the
underlying CMB-fluctuations, all of these features work to our
advantage. If we know the power spectrum of a contaminant, then
translational invariance and Gaussianity mean that we have no
additional knowledge whatsoever about the contaminant, since
the power spectrum defines the random field completely.
Clearly, the more we know about our enemy, the greater our ability
will be to tackle it and distinguish it from CMB fluctuations.
The type of non-Gaussianity that we encounter in both diffuse
components and point sources manifests itself in the contamination being
more spatially localized than it would be for a Gaussian random field
with the same power spectrum. A bright point source affects only a
very small region of the celestial sphere, of the order of the beam
width, which can simply be removed (perhaps by using higher resolution
observations at lower sensitivities, see {\it e.g.} O'Sullivan {{\frenchspacing\it et al.}}
1995). Dust emission, free-free emission and synchrotron radiation
tend to be localized to ``dirty regions", with the fluctuation levels
in ``clean regions" in some cases being orders of magnitude lower.
Again, we can take advantage of this non-Gaussianity (and lack of
translational invariance when we know the cause of the emission, such
as the Galactic plane), to simply remove these regions from the data
set. After this initial step, the analysis proceeds as described in
the previous subsection, using the power spectra that are appropriate
for the clean regions. Thus we sift out the CMB fluctuations from the
foregrounds in a two-step process, by exploiting the fact that their
statistical properties are different both in real space and in Fourier
space:
\begin{enumerate}
\item
We place most of the weight on the clean regions in real space.
\item
We place most of the weight on the clean regions in Fourier space,
as illustrated in \fig{EverythingFig}.
\end{enumerate}
\subsection{How to avoid distorting the CMB power spectrum}
\label{RenormSec}
Although the community has displayed considerable interest in
map-making, there are of course many cases where one is merely
interested in measuring the CMB power spectrum, for instance to
constrain the parameters of theories of the formation of structure
({\it e.g.} the amplitude and spectral index of the initial
irregularities). Rather than first generate a map with the method
presented above and then use it to estimate the power spectrum, the
latter can of course be obtained directly by aborting the reconstruction
``half way through". Thus we can estimate $C_{\ell^*}$ by first
estimating the $(2\ell^*+1)$ coefficients $a_{\ell m}$ that have
$\ell=\ell^*$, as described above, and then taking some appropriate
weighted average of the estimated $|a_{\ell m}|^2$, to reduce cosmic
variance. For technical details on the choice of basis functions, the
best weights to use for the averaging, {{\frenchspacing\it etc.}}, see T96.
When using our subtraction technique to estimate power spectra,
the normalization must be modified as described below.
The reason for this is the above-mentioned fact that
Wiener filtering tends to ``suck power" out of the data,
so that the power spectrum of the
filtered map is smaller than the true power spectrum.
Moreover, this power deficit normally depends on scale,
as indicated by \eq{StandardWienerEq}.
Let us use the notation of subsection 3.2 and investigate how the
quantity $|a'_{\ell m}|^2$ is related to the power spectrum $C_\ell$.
To avoid unnecessary profusion of indices, let us focus on one single
multipole, say $\ell=17$, $m=5$, and suppress the indices $\ell$ and $m$
throughout. Thus the vector ${\bf a}$ contains the multipole
coefficients from the $m$ different frequency channels, and
${\bf a}'$ the coefficients for the $n$ different physical components, as before.
A straightforward calculation shows that (no summation implied)
\beq{PowerEstEq1}
\expec{|a'_i|^2} = |V_{ii}|^2 C^{(i)} + b_i,
\end{equation}
where $V\equiv WF$ and $b_i$, the additive {\it bias}, is given by
\beq{PowerEstEq2}
b_i \equiv
\sum_{j\neq i} |V_{ij}|^2 C^{(j)} +
\sum_{j=1}^m |W_{ij}|^2 C^{noise,j}.
\end{equation}
The power estimator
\beq{PowerEstEq3}
C^{(i)'}\equiv |a'_i|^2 - b_i
\end{equation}
will thus be an unbiased estimator of the true power $C^{(i)}$, {{\frenchspacing\it i.e.}},
$\expec{C^{(i)'}} = C^{(i)}$, if we impose the normalization constraint
$V_{ii}=1$ for all $i=1,...,n$.
As is seen in \eq{PowerEstEq2}, $b_i$ incorporates the power leakage
from the other physical components $(j\neq i)$ and from the
pixel noise. Note that when $V_{ii}=1$, $b_i$ equals $\Delta C^{(i)}$,
the reconstruction errors of \eq{SphErrorEq}.
Let us minimize $b_i$ subject to the constraint that $V_{ii}=1$.
Introducing the Lagrange multipliers $\lambda_i$, we thus differentiate
$L_i \equiv b_i - \lambda_i V_{ii}$ (no summation) with respect to
the components of the matrix $W$ and require the result to vanish.
After a straightforward calculation, we obtain the solution
\beq{PowerEstEq4}
W = \Lambda F^t[FSF^t+N]^{-1},
\end{equation}
where the matrix $\Lambda\equiv{\rm diag}\{\lambda_1,...,\lambda_n\}$.
Imposing the normalization constraints $V_{ii}=1$
now gives $\lambda_i = 1/(F^t[FSF^t+N]^{-1} F)_{ii}$.
Comparing this to \eq{SphWienerResultEq}, we draw the
following conclusion:
\begin{itemize}
\item
{\it
Our optimized power spectrum estimate uses the same reconstruction
matrix $W$
that we derived previously, except that the row vectors should
be rescaled so that $(WF)_{ii} = 1$.
}
\end{itemize}
(The extra matrix $S$ in \eq{SphWienerResultEq} is of course irrelevant
here, as it is diagonal and can be absorbed into $\Lambda$.)
This is the normalization that has been used in
\fig{MethodsFig}.
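For concreteness, here is a Python sketch (ours) of this renormalized filter:
it evaluates \eq{PowerEstEq4} with the Lagrange multipliers given above, so
that $(WF)_{ii}=1$ by construction, and also returns the additive bias $b_i$
of \eq{PowerEstEq2}.
\begin{verbatim}
import numpy as np

def power_conserving_filter(F, cl_sig, cl_noise):
    # cl_sig[i]   : assumed power C^(i) of component i at this multipole
    # cl_noise[j] : noise power C^{noise,j} of channel j
    S, N = np.diag(cl_sig), np.diag(cl_noise)
    A = F.T @ np.linalg.inv(F @ S @ F.T + N)       # F^t [F S F^t + N]^{-1}
    lam = 1.0 / np.diag(A @ F)                     # Lagrange multipliers
    W = np.diag(lam) @ A                           # eq. (PowerEstEq4)
    V = W @ F                                      # V_ii = 1 by construction
    b = ((np.abs(V)**2 - np.eye(len(cl_sig))) @ np.asarray(cl_sig)
         + np.abs(W)**2 @ np.asarray(cl_noise))    # eq. (PowerEstEq2)
    return W, b
\end{verbatim}
A power spectrum estimate then follows from \eq{PowerEstEq3},
$C^{(i)'} = |a'_i|^2 - b_i$, applied multipole by multipole.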
Since one of the main purposes of CMB sky maps is to serve as an
easy-to-visualize complement to the
power spectrum, we strongly advocate using the above normalization
convention $(WF)_{ii} = 1$ when generating sky maps as well.
As we saw above, this will ensure that CMB fluctuations in the map
will retain their true power spectrum, rather than suffer the $\ell$-dependent
suppression characteristic of Wiener filtering.
Let us compare this with the situation in standard Wiener filtering,
which corresponds to $m=n=1$. In this simple case, $F$ and $W$
are merely scalars,
so the normalization condition gives $V=WF=1$. Thus $W$ equals a constant
$1/F$ which is independent of $\ell$, corresponding
to no subtraction at
all. In other words, if there is only one frequency channel,
our subtraction method is of no use for power spectrum estimation.
In the general case, there are $m\times n$ components in
$W$ and $n$ constraints $V_{ii}=1$, so the
subtraction method will help whenever
$m$, the number of channels, exceeds one.
An attractive feature of the reconstruction method given by
\eq{PowerEstEq4} is that it gives a reconstructed CMB map that
is completely independent
of our assumptions about the CMB power spectrum.
More formally, $W_{ij}$ is independent of $S_{ii}=C^{(i)}$.
This might seem surprising, since $S$ enters in the right-hand side of
\eq{PowerEstEq4}. The easiest way to prove this result is to
note that since the optimization problem is independent
of the assumed CMB power spectrum (both the target function
$b_i$ and the constraint equation $(WF)_{ii}=1$
are independent of $C^{(i)}$), its solution
(the $i^{th}$ row of $W$) must be as well.
Above we chose our filter to minimize $b_i$, the total contribution
from the other physical components and pixel noise.
This of course produces a robust power spectrum estimator, since
if our estimate of the power spectrum of some contaminant (or our estimate of
the noise level of some channel) is off by some number of percent,
the resulting error will scale as the corresponding term in $b_i$,
{{\frenchspacing\it e.g.}},
as $|V_{ij}|^2 C^{(j)}$ (or as $|W_{ij}|^2 C^{noise,j}$).
If one is confident that there are no such systematic errors,
one may instead opt to minimize the
variance of our estimator $C'_i$, which
is equivalent to minimizing the variance of $b_i$.
This would lead to a system of cubic (rather than linear) equations
for the components of $W$, to be solved numerically.
\newpage
\section{POWER SPECTRA OF THE FOREGROUNDS}
\label{ForegroundsSec}
In this Section, we make estimates of the angular power spectra $C_\ell(\nu)$
for the various foregrounds. The results are plotted in
Figures 3 through 6, and summarized in \fig{EverythingFig}.
The former are truncated at $[\ell(2\ell+1)C_\ell/4\pi]^{1/2}
= \sqrt{2}\times 20{\rm \mu K} \approx 28\>{\rm \mu K}$, which approximately
corresponds to COBE-normalized scale-invariant temperature fluctuations.
Thus the
shaded regions in \fig{EverythingFig}
are simply the top contours of Figures 3 through 6.
It should be emphasized that these estimates are {\it not}
intended to be very accurate, especially when it comes to
normalization. Rather, the emphasis is on their
{\it qualitative} features, especially those that differentiate them
from one another. Despite the fact that we
currently lack accurate high-resolution data in many important
frequency bands, we will see that quite robust qualitative conclusions
can be drawn about which regions of the $\ell-\nu$-plane will be
most suitable for estimating various parts of the CMB power spectrum.
\subsection{Point sources}
In this subsection, we make estimates of the angular power spectrum
$C_\ell(\nu)$ for point sources. Here the $\ell$-dependence is well known,
but the $\nu$-dependence is quite uncertain.
However, despite these uncertainties, we will see that
radio point sources will contribute mainly to
the lower right corner of \fig{EverythingFig}, whereas
infrared point sources will contribute mainly to the upper right.
If at some frequency there are $N$ point sources Poisson distributed
over the whole sky, all with the same flux $\phi$, it is easy to show that
\beq{Radio_almEq1}
\expec{a_{\ell m}}=\cases{
\sqrt{4\pi}{\bar{n}}\phi&if $\ell =0$,\cr
0 &if $\ell\neq 0$,
}
\end{equation}
where ${\bar{n}}\equiv N/4\pi$ is the average number density per steradian,
and
\beq{RadioClEq1}
C_\ell \equiv \expec{|a_{\ell m}|^2} - |\expec{a_{\ell m}}|^2
= {\bar{n}}\phi^2.
\end{equation}
In other words, this would produce a simple white-noise power spectrum,
with the same power in all multipoles,
together with a non-zero monopole caused by the fact that no fluxes
are negative.
If there are two independent Poisson populations, with densities
${\bar{n}}_1$ and ${\bar{n}}_2$ and fluxes $\phi_1$ and $\phi_2$, both the means and
the variances will of course add, giving a monopole
$\sqrt{4\pi}({\bar{n}}_1\phi_1 + {\bar{n}}_2\phi_2)$ and a power spectrum
$C_\ell = {\bar{n}}_1\phi_1^2 + {\bar{n}}_2\phi_2^2$.
Taking the limit of infinitely many populations,
we thus obtain
\beqa{RadioMonopoleEq}
\expec{a_{00}}&=&
\sqrt{4\pi}\int_0^{\phi_c}{\partial{\bar{n}}\over\partial\phi}\phi\, d\phi,\\
\label{RadioClEq2}
C_\ell&=&\int_0^{\phi_c}{\partial{\bar{n}}\over\partial\phi}\phi^2 d\phi,
\end{eqnarray}
where
$\partial{\bar{n}}\over\partial\phi$ is the
{\it differential source count}. In other
words,
we have defined ${\bar{n}}(\phi)$ as the number density per steradian
of sources with flux less than $\phi$.
In real life, we are of course far from powerless against these point
sources, and can either attempt to subtract them by using spectral information
from point source catalogues, or simply choose to throw away all pixels
containing a bright point source. In either case, the end result would be that
we eliminate all sources with a flux exceeding some
flux cut $\phi_c$, which then becomes the upper limit of integration
in equations\eqnum{RadioMonopoleEq} and\eqnum{RadioClEq2}.
We have estimated the source counts at $1.5\>{\rm GHz}$ from a
preliminary point source catalog from the VLA FIRST
all sky survey
(Becker {{\frenchspacing\it et al.}} 1995).
This catalog contains 16272 radio sources in a
narrow strip $110^{\circ} < {\rm ra} < 195^{\circ}$,
$28.5^{\circ} < {\rm dec} < 31.0^{\circ}$, complete down to a flux limit of
$0.75\>{\rm mJy}$. A flux histogram is plotted in
\fig{FluxFig}, together with a simple double power law fit
\beq{LumFuncFitEq}
{\partial{\bar{n}}\over\partial\phi} \approx
{524000\over{\rm mJy}\>{\rm sr}} \left(\phi\over 0.75{\rm mJy}\right)^{-1.65}
\left(1 + {\phi\over 100{\rm mJy}}\right)^{-1}
\end{equation}
that will be quite adequate for our purposes.
We can obviously never eliminate {\it all} radio sources, as there
is for all practical purposes an infinite number of them,
the integral of the differential source count diverging
at the faint end. There is also a rather obvious lower
limit to $\phi_c$ in practice.
Since the highest resolution COBRAS/SAMBA channels have a FWHM
of $4.5$ arcminutes (see Table 1, Section 4.3.3 below),
there are only about $10^7$ independent pixels in
the sky. Assuming that the above-mentioned FIRST data is representative of
the entire sky, there are about 6 million sources brighter than
$0.75\>{\rm mJy}$, so if we choose $\phi_c$ much lower than this and reject all
data that is contaminated at this level, we would have to throw away almost
all our pixels. The subtraction strategy also has
its limits, quite apart from the large amount of work that would
be involved:
if we try to model and subtract the sources, it
appears unlikely that we will ever be able to do this to better than
$1\%$, and even $10\%$ could prove difficult given complications such as
source variability.
Since the choice of flux cut will depend on the level of ambition of
future observing projects,
we simply compute how the resulting power spectrum
depends on the flux cut $\phi_c$. To give a rough idea of what flux cuts
may be practically feasible in the near future, the number of radio
sources in the entire sky is about $4\tento{6}$ above $1\,{\rm mJy}$,
$8\tento{5}$ above $10\,{\rm mJy}$, $7\tento{4}$ above $100\,{\rm mJy}$ and
$800$ above $1\,{\rm Jy}$, all at $1.5\>{\rm GHz}$.
The result, computed using equations\eqnum{dBdTeq},
\eqnum{RadioMonopoleEq},
\eqnum{RadioClEq2},
and\eqnum{LumFuncFitEq},
is shown in \fig{RadioClFig}.
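As a cross-check, the following Python sketch (ours) carries out this
computation for one flux cut and one frequency: it integrates \eq{RadioClEq2}
with the fit \eq{LumFuncFitEq}, extrapolates the brightness as $\nu^{-\alpha}$,
and converts to temperature with \eq{dBdTeq}. For $\phi_c=100\>{\rm mJy}$,
$\alpha=0$ and $\ell=100$ at 100 GHz it reproduces the
${\sim}1.5\,{\rm \mu K}$ value quoted below.
\begin{verbatim}
import numpy as np

h, k, c, T0 = 6.626e-34, 1.381e-23, 2.998e8, 2.726

def dn_dphi(phi_mjy):
    # Differential source count at 1.5 GHz, eq. (LumFuncFitEq), per mJy per sr
    return 524000.0 * (phi_mjy / 0.75)**-1.65 / (1.0 + phi_mjy / 100.0)

def cl_point_sources(phi_cut_mjy, nu, alpha=0.0):
    # Poisson (white-noise) C_l from sources below the flux cut, in K^2
    phi = np.linspace(1e-3, phi_cut_mjy, 200000)
    cl_b = np.sum(dn_dphi(phi) * phi**2) * (phi[1] - phi[0])  # mJy^2/sr at 1.5 GHz
    cl_b *= (nu / 1.5e9)**(-2.0 * alpha)     # extrapolate brightness as nu^-alpha
    cl_b *= (1e-29)**2                       # 1 mJy = 1e-29 W m^-2 Hz^-1
    x = h * nu / (k * T0)
    dBdT = 0.5 * k * (k * T0 / (h * c))**2 * (x**2 / np.sinh(x / 2))**2
    return cl_b / dBdT**2                    # divide by (dB/dT)^2, eq. (dBdTeq)

ell, nu = 100, 100e9
cl = cl_point_sources(100.0, nu)             # 100 mJy flux cut
print(np.sqrt(ell * (2 * ell + 1) * cl / (4 * np.pi)) * 1e6, "muK")
\end{verbatim}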
Notice that the fluctuations have quite a different magnitude and
$\phi_c$-scaling than the monopole, since the two are dominated
by quite different parts of the differential source count
function. Since the slope is close to $-2$, the monopole, the total
brightness, gets similar
contributions from several different decades of flux, whereas the
fluctuations are strongly dominated by the brightest sources.
Thus we need not worry about not knowing the
exact differential source count at the faint end, as all
that really matters is its behavior immediately below our
flux cut.
Some authors ({{\frenchspacing\it e.g.}}, Franceschini {{\frenchspacing\it et al.}} 1989) have raised the
possibility that point source clustering could create more large-scale
power than the Poisson assumption would indicate. We have tested this
by computing the power spectrum of the FIRST data, and find no
evidence for any departure from Poisson noise (see also Benn \& Wall
1995). This conclusion, which of course simplifies the issue
considerably, is not surprising, because most of the sources are
located at very large distances. Correlations in the
two-dimensional galaxy distribution that we observe (and which is the
relevant quantity when it comes to CMB contamination) are therefore
diluted by projection to negligible levels.
Although there is good reason to believe that the power spectrum
will remain Poissonian at the higher frequencies that are relevant to the CMB,
the issue of its normalization is of course quite complex, given the
uncertainties about the spectra and the evolution of the various
galaxy and AGN populations (see Franceschini {{\frenchspacing\it et al.}} 1991).
In \fig{pointsourcesFig}, we have simply made a $100\>{\rm mJy}$ flux cut
at $1.5\>{\rm GHz}$ (for an all-sky survey, this corresponds
to removing about 70000 sources)
and extrapolated to higher frequencies
with a power law $B(\nu)\propto\nu^{-\alpha}$,
thus obtaining
\beq{radioApproxEq}
\left[{2\ell+1\over 4\pi}\ell\,C^{ps}_\ell(\nu)\right]^{1/2} \approx
0.30{\rm K} \> \left({\sinh^2(x/2)\over
[\nu/1.5\,{\rm GHz}]^{4+\alpha}}\right)\ell,
\end{equation}
where $x = h\nu/kT_0$ as before.
For $\ell=100$, this corresponds to $1.5\,{\rm \mu K}$ at 100 GHz
if $\alpha=0$.
Lowering the flux cut to $10\>{\rm mJy}$ (removing about 900000 sources)
reduces this by about a factor 4, and
a $1\>{\rm mJy}$ cut (removing about 5 million sources)
gains us another factor of four.
Obviously, ambitious flux cuts become
feasible if only a small fraction of the sky is surveyed.
Flat-spectrum sources with spectral index
$\alpha\approx 0.3$ are likely to dominate at higher frequencies
(Franceschini {{\frenchspacing\it et al.}} 1989), but this is of course
only to be used as a crude first approximation,
as the emission at higher frequencies is likely to be dominated by sources
whose spectra rise and peak near those frequencies,
and very little
is known about the abundances of such objects.
We make the rather cautious assumption of an effective spectral index
$\alpha=0.0$ for the population as a whole.
This approximation
is of course quite
unsatisfactory at the high-frequency end, where infrared
emission from high redshift galaxies could play an important role.
For instance, if this emission is dominated by dust in these galaxies with
emissivity $\beta=2$ (see the following Section),
we would expect $\alpha=-4$ to be a better description
at the higher microwave frequencies.
Unfortunately, the differential source counts of such infrared
point sources around 100 GHz are still completely unknown.
For a recent review of these issues, see Franceschini {{\frenchspacing\it et al.}} (1991).
\subsection{Diffuse Galactic sources}
In this subsection, we discuss the qualitative features we expect for the
angular power spectra of the diffuse Galactic contaminants, namely
dust, free-free emission and synchrotron radiation.
\subsubsection{Power spectrum}
We have estimated the power spectrum of Galactic dust from
a large number of $12.5^{\circ}\times 12.5^{\circ}$ fields of the
100 micron IRAS all-sky survey (Neugebauer {{\frenchspacing\it et al.}} 1984),
which have an angular resolution
of two arcminutes (about twice as good as the best COBRAS/SAMBA
channels). Although the amplitude varies greatly with Galactic latitude,
the overall shape is strikingly independent of latitude,
and typically declines as $C_\ell \propto 1/\ell^3$ for
$\ell$ between 100 and a few thousand, steepening
slightly on the smallest scales. This agrees well with previous
findings (Low \& Cutri 1994; Guarini {{\frenchspacing\it et al.}} 1995).
To estimate the power spectrum of synchrotron radiation,
we used the Haslam 408 MHz map (Haslam {{\frenchspacing\it et al.}} 1982). Although the angular resolution
of this map is only of order $0.85^{\circ}$, {\frenchspacing\it i.e.}, far too low to provide
information for $\ell\gg 100$, the logarithmic
slope was found to be consistent with that
for dust in the overlapping multipole range, around $-3$.
These results are hardly surprising, since even without analyzing
observational data, one may
be able to guess the qualitative features of the
power spectra of the three diffuse components.
Since they are all caused by emission from diffuse blobs, one might
expect their power spectra to exhibit the following
characteristic features:
\begin{itemize}
\item $C_\ell$ independent of $\ell$ for small $\ell$, corresponding
to scales much greater than the coherence length of the blobs
(this is the standard Poisson behavior, and follows if one assumes
that well separated blobs are uncorrelated).
\item $C_\ell$ falls off at least as fast as $1/\ell^4$ for very
large $\ell$, corresponding to scales much smaller than typical
blob sizes (this follows from the simple assumption that the
brightness is a {\it continuous} function of position).
\item If $\ell^2 C_\ell$ thus decreases both as $\ell$ gets small and as
$\ell$ gets large, it must peak at some scale, a scale which we refer to as
the coherence scale.
\end{itemize}
The behavior of the contaminant power spectrum for very small $\ell$
(whether there is indeed a coherence scale, {\it etc}), is of
course quite a subtle one, as the presence of the
Galactic plane means that the answer will be strongly dependent on
which patches of sky we choose to mask out during the analysis. We will return
to this issue in the subsection about non-Gaussianity below.
In the figures, we have simply assumed that all three components have
a coherence scale of about $10^{\circ}$, corresponding to
$\ell\approx 10$, and used power spectra of the simple
form $C_\ell\propto (5+\ell)^{-3}$.
\subsubsection{Frequency dependence}
The frequency dependence of the three components has been
extensively discussed in the literature
(see {{\frenchspacing\it e.g.}} Reach {{\frenchspacing\it et al.}} 1995 and references therein).
For synchrotron radiation and free-free emission, we use simple power laws
$B(\nu)\propto \nu^{-\beta}$.
For synchrotron emission, $\beta\approx 0.75$ below
10 GHz (de Bernardis {{\frenchspacing\it et al.}} 1991), steepening to $\beta\sim 1$
above 10 GHz (Banday \& Wolfendale 1991), so we simply assume
$\beta=1$ here.
For free-free emission, we make the standard assumption $\beta=0.1$.
For dust, we assume a spectrum of the standard form
\beq{DustSpecEq}
C^{dust}_\ell\propto {\nu^{3+\beta}\over e^{h\nu/kT}-1}.
\end{equation}
Although an emissivity index $\beta=2$ is found to be a good
fit in the Galactic plane (Wright {{\frenchspacing\it et al.}} 1991), we use instead
the more conservative
parameters $T = 20.7{\rm K}$, $\beta=1.36$, which are found to
better describe the data at high Galactic latitudes (Reach {{\frenchspacing\it et al.}} 1995),
since it is of course the cleanest
regions of the sky that are the most
relevant ones for measurement of CMB fluctuations.
\subsubsection{Non-Gaussianity and inhomogeneity}
The spatial distributions of synchrotron radiation, free-free and dust
emission of course exhibit strong non-Gaussian features, and also a strong
departure from translational invariance because of the
Galactic plane.
As discussed in Section 3, this is good news regarding our ability
to estimate CMB fluctuations. However, it forces us to be careful when
presenting plots of estimated power spectra.
Thus plots showing foreground contributions to
the CMB, such as those presented in this paper,
should be read with the following two caveats in mind.
First, one of the manifestations of the type of non-Gaussianity that these components
display is the presence of ``clean regions" and
``dirty regions". For instance, a raw histogram of the brightness per
pixel in the DIRBE 240 micron map shows that although
$3\%$ of the pixels have a brightness exceeding 100 MJy/sr,
the mean is only about 6 MJy/sr. The extremely bright pixels are
of course mainly located in the Galactic plane, but the level of
``cleanness" also exhibits strong variations with Galactic longitude, caused
both by known objects such as the Large Magellanic Cloud and the
North Galactic Spur, and by the non-Gaussian clumpiness
of the dust component itself. Similar
conclusions follow from an analysis of the IRAS
100 micron maps.
The result of this is that although the power spectrum may have a similar shape
in clean and dirty regions, the normalization will vary considerably, much
more than it would due to sample variance in a Gaussian field.
It is thus important that plots of $C_\ell$-estimates are supplemented with
a description of what type of region they refer to. In the figures in this
paper, all such power spectra refer to averages for
{\it the cleanest 20\% of the sky}.
Secondly, when estimating the lowest multipoles, it is important to
use as much of the celestial sphere as possible, to keep the window
functions in $\ell$-space narrow (T96). Of course, the contribution of
Galactic foregrounds increases as one includes more sky, but this is
unlikely to be a serious problem. The cleanest two thirds of the DIRBE
240 micron pixels have an average brightness about three times that of
the cleanest $20\%$, and this is a large enough area to recover all
multipoles fairly accurately except the quadrupole and octupole (T96).
Thus if we increase the sky coverage to estimate the lowest multipoles
more accurately, the Galactic foregrounds are likely to be within a factor
of a few of the contributions from the cleanest regions of the sky,
and perhaps less if the contaminant power $\ell^2 C_\ell$ falls off on
scales larger than some coherence scale.
\subsection{The effects of discreteness, pixel noise and beam smoothing}
Although we usually think of pixel noise as a problem of a different
nature than the other contaminants, it can be described by an angular
power spectrum $C^{noise}_\ell(\nu)$ and so be treated on an equal
footing. One may ask what is the point of doing this, since the
statistical impact of the noise on the subtraction described in this
paper is straightforward to calculate anyway. The answer is that it
provides better physical intuition. Real world brightness data is of
course discretely sampled as ``pixels" rather than smooth functions
known at every point $\widehat{\bf r}$, but as long as the sampling is
sufficiently dense (the typical pixel separation being a few times
smaller than the beamwidth), this discreteness is merely a rather
irrelevant technical detail. It enters when we do the analysis in
practice, but our results are virtually the same as if we had
continuous sampling.
\subsubsection{Pixel noise}
\label{PixelNoiseSubsec}
If we estimate the angular power spectrum from a sky map containing only
isotropic pixel noise, we find that all the $C_\ell$-coefficients are
equal (the white noise power spectrum),
at least down to the angular scale corresponding to the
inter-pixel separation. This well-known result simply reflects the
fact that the noise in the different pixels is uncorrelated.
We will now elaborate on this in slightly greater detail.
For a CMB sky map at frequency $\nu$, pixelized into $N$ pixels
pointing in the directions $\widehat{\bf r}_i$ and
with noise $n_i$,
$i=1,2,...,N$,
the $n_i$ are typically, to a good approximation, Gaussian
random variables satisfying $\expec{n_i} = 0$ and
\beq{PixelNoiseEq}
\expec{n_in_j} = \delta_{ij}\sigma_i^2,
\end{equation}
for some known numbers $\sigma_i$.
We want to eliminate this discreteness from the problem, and describe
the noise as a continuous field
$B_{noise}(\widehat{\bf r},\nu)$ instead, a random field that gets added to the
actual brightness $B(\widehat{\bf r},\nu)$.
More specifically, for any weight function $\psi(\widehat{\bf r})$ that
we use in our analysis, we want the result to be
the same whether we sum over the pixels or integrate over the
field, so we require
\beq{SumIntegralEq}
{1\over N} \sum_{i=1}^N \psi(\widehat{\bf r}_i) n_i \approx
{1\over 4\pi} \int\psi(\widehat{\bf r})B_{noise}(\widehat{\bf r},\nu) d\Omega.
\end{equation}
Fortunately, this is easy to arrange: when the pixels are placed according
to an equal-area method (as in the COBE pixelization system),
a simple choice that works is to choose $B_{noise}(\widehat{\bf r})$
to be a white noise field satisfying
\beq{WhiteNoiseEq}
\expec{B_{noise}(\widehat{\bf r})B_{noise}(\widehat{\bf r}')} = \delta(\widehat{\bf r},\widehat{\bf r}'){4\pi\over N}\sigma(\widehat{\bf r})^2,
\end{equation}
where $\delta$ is the angular Dirac delta function, and
$\sigma(\widehat{\bf r})$ denotes the
$\sigma_i$ corresponding to the pixel position closest to $\widehat{\bf r}$.
Since white noise by definition has no correlations on any scale, it is easy
to see why this reproduces the basic feature of the pixel noise, {{\frenchspacing\it i.e.}}, no
correlation between neighbouring pixels. The fact that white noise fluctuates
wildly on sub-pixel scales does not invalidate
\eq{SumIntegralEq}, since any weighting function $\psi$ that we
use in practice cannot vary on sub-pixel scales
(since we will after all apply it to pixels),
and thus smoothes out this substructure.
For the purposes of analyzing future experiments, let us assume that
the pixel noise is independent of position, so that $\sigma_i$
simply equals some constant, $\sigma$.
This means that the white noise is isotropic, and has a well-defined angular power
spectrum $C^{noise}_\ell$ that we will now compute.
For white noise, the power spectrum is
independent of $\ell$, and
thus all we need to do is find the overall normalization, expressed in terms of
the pixel noise and number of pixels.
We do this by
examining the simplest multipole, $\ell=0$.
Since $Y_{00} = 1/\sqrt{4\pi}$, \eq{MultipoleExpansionEq}
gives
\beq{NoiseEq1}
a^{noise}_{00} = \sqrt{4\pi}\left({1\over 4\pi}\int \delta T_{noise}(\widehat{\bf r}) d\Omega\right),
\end{equation}
where we have factored out $\sqrt{4\pi}$ so that the expression in parenthesis
is the sky-averaged temperature.
Replacing this by the
pixel-averaged temperature (simply using \eq{SumIntegralEq} with $\psi=1$), we obtain
\beq{NoiseEq2}
a^{noise}_{00} = \sqrt{4\pi}\left({1\over N}\sum_{i=1}^N \delta T_{noise}(\widehat{\bf r}_i)\right).
\end{equation}
Using
\eq{PixelNoiseEq},
this leaves us with our desired result,
\beq{NoiseEq4}
C^{noise}_\ell = C^{noise}_0 = \bigexpec{\left|a^{noise}_{00}\right|^2} =
{4\pi\over N}\sigma^2.
\end{equation}
It is straightforward to verify that this normalization
agrees with that in \eq{WhiteNoiseEq}.
\subsubsection{Beam smoothing}
In this subsection, we point out that the effects of beam smoothing
can be completely absorbed into the description of the noise.
In a single-beam experiment, the
brightness field $B^{obs}$ is the true field $B$ convolved with a
beam function $w$,
\beq{BeamConvEq}
B^{obs}(\widehat{\bf r}) = \int w(\theta) B(\widehat{\bf r}') d\Omega',
\end{equation}
where $\theta\equiv\cos^{-1}(\widehat{\bf r}\cdot\widehat{\bf r}')$ is the angle between the
two vectors.
As long as the beam width is much less than a radian,
expanding this in spherical harmonics gives the familiar result
\beq{FourierBeamEq}
C_{\ell}^{obs} \approx \left|\widehat{w}(\ell)\right|^2C_\ell,
\end{equation}
where $\widehat{w}$ is the Fourier transform of $w$.
In the common approximation that the beam profile is a Gaussian,
\beq{GaussianBeamConvEq}
w(\theta) = {1\over 2\pi\theta_b^2}
e^{-{1\over 2}{\theta^2\over\theta_b^2}},
\end{equation}
\eq{FourierBeamEq} reduces to
\beq{FourierBeamEq2}
C_{\ell}^{obs} \approx e^{-\theta_b^2\ell(\ell+1)} C_\ell.
\end{equation}
The standard way to quote the beam width is to give
the full-width-half-max (FWHM) width of $w$,
so the correspondence is
$\theta_b = {\rm FWHM}/\sqrt{8\ln 2} \approx 0.425\>{\rm FWHM}.$
\noindent
This suppression of high multipoles by beam smoothing
of course affects the fields $B_i$ for all the different components
($B_{cmb}$, $B_{dust}$, {{\frenchspacing\it etc.}}) except one. The exception
is $B_{noise}$, since the pixel noise gets added after the
beam smoothing, and thus has nothing to do with the beam width.
Although it is easy to convert between actual and observed
fields (using \eq{FourierBeamEq} for the power spectrum or
a simple deconvolution for the fields $B_i(\widehat{\bf r})$), it is quite
a nuisance to always have to distinguish between the two.
Since $B_{noise}$ is the only field that is simpler to describe
``as observed'', we adopt the following convention:
\begin{itemize}
\item
We let all fields $B_i(\widehat{\bf r},\nu)$ refer to the {\it actual} fields,
as they would appear if there were no beam smoothing.
\end{itemize}
We thus define the ``unsmoothed" noise field as
\beq{UnsmoothedNoiseEq}
C^{noise}_{\ell}(\nu) \equiv
{4\pi\sigma(\nu)^2/N \over \left|\widehat{\beamf}\left(\ell\right)\right|^2}.
\end{equation}
This is what is plotted in \fig{NoiseFig}.
In other words, the advantages of this convention are that
\begin{itemize}
\item
this figure can be directly compared with those for the
various foregrounds, and
\item
the latter figures are independent of any assumptions about
the beam width.
\end{itemize}
\subsubsection{COBRAS/SAMBA specifics}
The proposed COBRAS/SAMBA satellite mission (Mandolesi {{\frenchspacing\it et al.}} 1995) is
currently being evaluated by the European Space Agency,
and if approved, is scheduled for launch in 2003.
As currently proposed, it would
have nine frequency
channels, as summarized in Table 1.
\begin{table}
\begin{tabular}{|l|ccccccccc|}
\hline
Channel &1&2&3&4&5&6&7&8&9\\
\hline
Center frequency $\nu$ [GHz] &31.5&53&90&125&143&217&353&545&857\\
Bandwidth $\Delta\nu/\nu$ &0.15&0.15&0.15&0.15&0.35&0.35&0.35&0.35&0.35\\
FWHM beam size [arcmin]&30&18&12&10&10.5&7.5&4.5&4.5&4.5\\
Pixel noise $\sigma(\nu)$, 2 years [$\mu$K]
&17.7&18.3&23.7&69.5&5.7&6.0&36.0&271&62700\\
\hline
\end{tabular}
\caption{COBRAS/SAMBA channel specification}
\label{ytable1}
\end{table}
Although we will use the exact
numbers from this table in the analysis in
Section~\ref{ResultsSec}, we have made a few
simplifying approximations in generating
\fig{NoiseFig}, since we want this plot to be approximately applicable
to any satellite mission using similar technology.
\vskip 0.2 truein
\noindent
{\bf Sensitivity:}
The reason that $\sigma(\nu)$ explodes for large $\nu$ is of course
the exponential fall-off of the Planck-spectrum.
Converting $\sigma$ to brightness fluctuations using
\eq{dBdTeq}, one finds that to a crude approximation,
$\sigma(\nu)\propto\nu$.
The main deviation from this power law
occurs around 130 GHz, where the technology transition
from HEMT to bolometer detectors between channels 4 and 5 can be seen to
cause the noise to drop by an order of magnitude in Table 1.
In generating figures~\ref{NoiseFig} and~\ref{EverythingFig},
we have simply interpolated the tabulated values with a cubic spline.
\vskip 0.2 truein
\def\eta{\eta}
\noindent
{\bf Number of pixels:}
Typically, the number of pixels is chosen so that neighbouring pixels overlap
slightly. Since the combined area of all pixels
is proportional to ${\rm FWHM}^2\>N$,
it is therefore convenient to define the dimensionless quantity
$\eta \equiv {\rm FWHM}^2 N/4\pi$.
In terms of $\eta$, which is thus a measure of the rate of
oversampling, we can write the numerator
of \eq{UnsmoothedNoiseEq} as
\beq{OversampEq}
{4\pi\sigma^2\over N} = {{\rm FWHM}^2\sigma^2\over\eta},
\end{equation}
{{\frenchspacing\it i.e.}}, in terms of the quantities that are usually quoted
in experimental specifications.
The COBE DMR experiment had ${\rm FWHM} = 7.08^{\circ}$
and N=6144 pixels, which gives an oversampling factor
$\eta\approx 7.47$.
In this paper, we will assume that the COBRAS/SAMBA experiment
will use this same degree of
oversampling, in all channels.
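Combining \eq{UnsmoothedNoiseEq} and \eq{OversampEq}, the unsmoothed noise spectrum of a single channel can be sketched as below; the 143 GHz values from Table 1 (FWHM $=10.5'$, $\sigma=5.7\,\mu$K) and the COBE-like oversampling $\eta\approx 7.47$ are used purely as an example.
\begin{verbatim}
import numpy as np

def noise_power(ell, fwhm_arcmin, sigma_uK, eta=7.47):
    # C_ell^noise = (FWHM^2 sigma^2 / eta) / |w_hat(ell)|^2,
    # cf. Eqs. (UnsmoothedNoiseEq) and (OversampEq); FWHM in radians.
    fwhm = np.radians(fwhm_arcmin / 60.0)
    theta_b = fwhm / np.sqrt(8.0 * np.log(2.0))
    window = np.exp(-theta_b**2 * ell * (ell + 1.0))  # |w_hat(ell)|^2
    return fwhm**2 * sigma_uK**2 / (eta * window)

ell = np.arange(2, 2001)
C_noise_143 = noise_power(ell, 10.5, 5.7)  # channel 5 of Table 1
\end{verbatim}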
\vskip 0.2 truein
\noindent
{\bf Beam width:} From Table 1, we see that the resolution is
diffraction limited (FWHM $\propto 1/\nu$) at low frequencies,
corresponding to a mirror size of order 1 meter,
whereas it is constant at the highest frequencies.
In generating figures~\ref{NoiseFig} and~\ref{EverythingFig},
we have likewise interpolated the tabulated beam widths with a cubic spline.
\ignore{
Here the simple fit
\beq{ResFitEq}
{\theta_b\over 0.6} \approx 4'\left[1 + \left({\nu\over
140\>{\rm GHz}}\right)^{-1.3}\right]
\end{equation}
for the frequency dependence is adequate for the purposes of
\fig{NoiseFig}.
Combining all these estimates, we thus obtain
\beqa{NoiseApproxEq}
\nonumber
\left[{2\ell+1\over 4\pi}C^{noise}_\ell(\nu)\right]^{1/2}&=&
\sqrt{2\ell+1\over N}
\exp\left[{1\over 2}\theta_b(\nu)^2\left(\ell+{1\over 2}\right)^2\right] \sigma(\nu)
{\partial B_0\over\partial T_0}\\
&\approx&
0.010 {\rm \mu K} \> \sqrt{2\ell+1} e^{\theta_b(\nu)^2\ell^2/2}{(\sinh x/2)^2\over x^{2.7}},
\end{eqnarray}
where $\theta_b(\nu)$ is given by
\eq{ResFitEq} and $x = h\nu/kT_0$ as before.
This function is plotted in \fig{NoiseFig}.
(These fits are used for illustration only --- in the
calculations presented in the following Section,
the exact figures from Table 1 have been used.)
}
\clearpage
\section{RESULTS}
\label{ResultsSec}
We have computed the signal-to-noise ratio obtainable with the
technique presented in Section~\ref{WienerSec} assuming that the foregrounds
behave as conjectured in Section~\ref{ForegroundsSec}.
We have taken $m=9$ and $n=5$, corresponding to the
nine COBRAS/SAMBA channels specified in Table 1 and the five
components CMB, dust, radio point sources, synchrotron radiation
and free-free emission.
\subsection{Multi-frequency subtraction versus no subtraction}
The resulting reconstruction errors $\Delta C_l$ computed from
\eq{SphErrorEq} are shown in \fig{MethodsFig} (bottom heavy line,
delimiting the double-hatched region), and are seen to lie about three
orders of magnitude beneath a CDM power spectrum, corresponding to a
signal-to-noise ratio of about $10^3$ for CMB map reconstruction
(referring to all foregrounds collectively as ``noise" in this
context). For estimating the power spectrum $C_\ell$, a quadratic
quantity, the corresponding signal-to-noise ratio would be an
outrageous $10^6$ for $\ell \lesssim 1000$. Clearly, this sort of accuracy
will not be attainable in practice, because we do not know the
frequency dependence of the various contaminants accurately enough to
trust the extrapolations involved in the subtraction. We will return
to this issue below. Thus interpreting this as an upper limit to how
well we can hope to do, we also wish to place a lower limit on how
poorly we can do. The most simplistic approach possible is of course
making no attempts whatsoever at foreground subtraction, and using
merely a single channel. The resulting errors for the 143 GHz channel
are given by the uppermost curve in the figure. A slight improvement
would be the solid curve below it, which corresponds to choosing the
single best COBRAS/SAMBA channel separately for each multipole. The
frequency where the combined foreground contribution is minimized is
plotted as a function of $\ell$ in \fig{EverythingFig} (heavy dashed
line). The reason for its positive slope is of course that when $\ell$
increases, the power spectrum of dust decreases whereas the power
spectrum of radio point sources, attacking from below, increases.
This same effect is illustrated in \fig{MultipolesFig}, which shows
the total contribution from all foregrounds to the observed $C_l$ as a
function of frequency. It is seen that although the most promising
frequency for estimating low multipoles is $\nu\sim 50$ GHz, the
central COBE channel, the optimal frequency for searching for CDM
Doppler peaks ($\ell \gtrsim 200$) is around 100 GHz. The situation at
three of the COBRAS/SAMBA frequencies is summarized in
Figures~\ref{Channel31Fig} through~\ref{Channel217Fig}.
\subsection{Why pixel-by-pixel subtraction performs so much worse}
An alternative subtraction method would
be to combine the smaller pixels of the high-resolution
channels into 30 arcminute pixels, and then simply apply our subtraction
scheme on a pixel-by-pixel basis, using
equations~\eqnum{GenWienerResultEq}--(\ref{GenWienerResultEq4})
as follows:
\begin{enumerate}
\item
Compute the pixel variance $\Delta_i^2$ contributed by each component,
as $\Delta_i^2 = \sum_\ell (2\ell+1)C_l^{(i)}|\widehat{\beamf}(\ell+1/2)|^2/4\pi$.
\item
Use the pixel variance of Table 1 for $\sigma_i^2$ in the matrix $N$.
\item
Use the resulting matrix $W$ to reconstruct the CMB temperature for each pixel.
\item
Estimate $C_\ell$ by expanding the resulting CMB map in spherical harmonics.
\end{enumerate}
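A compact sketch of this recipe is given below. Since equations~\eqnum{GenWienerResultEq}--(\ref{GenWienerResultEq4}) are not repeated here, the sketch assumes the Wiener-type form $W = SF^t(FSF^t+N)^{-1}$ followed by the power-preserving rescaling $(WF)_{ii}=1$ discussed later; this is our reading of the scheme, consistent with the noise-free limit quoted in the next subsection, and all matrix entries are placeholders.
\begin{verbatim}
import numpy as np

m, n = 9, 5                               # channels, components (placeholders)
rng = np.random.default_rng(1)

F = rng.normal(size=(m, n))               # frequency dependencies f_i(nu_j)
S = np.diag(rng.uniform(0.5, 2.0, n))     # component pixel variances Delta_i^2
N = np.diag(rng.uniform(0.1, 1.0, m))     # pixel noise variances sigma_i^2

# Assumed Wiener-type weighting (reduces to F^{-1} when N -> 0 and m = n).
W = S @ F.T @ np.linalg.inv(F @ S @ F.T + N)
W = W / np.diag(W @ F)[:, None]           # rescale so that (W F)_{ii} = 1

y = rng.normal(size=m)                    # measured channel vector for one pixel
x_hat = W @ y                             # reconstructed components (CMB is one row)
\end{verbatim}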
Loosely speaking, this method corresponds to subtracting before Fourier
transforming, rather than vice versa as in the optimized method.
We found that this method produced a signal-to-noise ratio
of about 15 for each pixel. The
resulting reconstruction errors are shown by the heavy dashed line in
\fig{MethodsFig}, and are seen to be more than an order of magnitude worse
than the optimized method for the small $\ell$-values where the low
($30^\prime$) resolution is not a problem.
This degradation in performance may seem surprising, since the
same (overly optimistic) assumption that we know the frequency
dependence of all components is made with this method.
However, the cause is exactly the same as that of the
discrepancy between the two
uppermost curves in the figure: the optimal channel weighting is different
for high and low multipoles. Thus the pixel-by-pixel approach selects one
single channel weighting that does not perform unduly badly for any multipole,
at the price of not
doing especially well at any multipole either.
The optimized method, on the other hand, tailors the channel weighting separately
for each multipole.
\subsection{Why ``direct subtraction" does so poorly}
Another natural method would be to simply
select as many channels as there are components, and
reconstruct the true fields by merely inverting the matrix $F$,
{{\frenchspacing\it i.e.}}, by choosing ${\bf x}' = F^{-1}{\bf y}$.
Although this gives exactly the right answer in the absence of pixel
noise, this method performs very poorly when the pixel
noise is non-negligible. The noise in the resulting
reconstruction will typically be dominated by the most noisy of all
the channels, and when the matrix $F$ is poorly conditioned (such as when two
components like synchrotron radiation and free-free emission
have similar spectral indices), there
will be additional noise amplification.
If the number of frequencies $m$ equals the number of components $n$,
then in the limit of no noise ($N=0$),
our general subtraction scheme reduces to
\beq{NoNoiseEq}
W = SF^t(FSF^t)^{-1} = F^{-1},
\end{equation}
{{\frenchspacing\it i.e.}}, to simple
``direct subtraction'', independently of our assumptions about
the various power levels (which are contained in $S$).
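To make this reduction explicit: since $F$ and $S$ are square and invertible in this special case, $(FSF^t)^{-1}=(F^t)^{-1}S^{-1}F^{-1}$, so that $W = SF^t(F^t)^{-1}S^{-1}F^{-1} = F^{-1}$.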
Since for this special case, $WF=F^{-1}F=I$, we see that
the power-preserving subtraction we advocate also reduces to
direct subtraction when the noise is ignored, since the
normalization condition $(WF)_{ii}=1$ is satisfied.
Since the optimized method is computationally trivial anyway, involving
merely the inversion of a small matrix,
there appears to be no advantage in neglecting the noise
and using ``direct subtraction''.
It should also be emphasized that whereas ``direct subtraction''
and least-squares generalizations thereof
require at least as many channels as components, our more general
method has no such restriction, allowing
$m<n$, $m=n$ and $m>n$.
\subsection{Modeling spectral uncertainties}
The main caveat to bear in mind when using any of the above-mentioned
subtraction schemes is that, in reality, we do not know the matrix $F$
with perfect accuracy. For instance, if the spectral index of
free-free emission is $\beta=0.2$ rather than $\beta=0.1$, a
subtraction attempt based on an extrapolation from 31 GHz to 217 GHz
will be off by about $20\%$. We obviously expect the spectral indices
to vary slightly from one sky region to another. For example,
unsubtracted radio sources will not all have the same spectra and so their
average spectrum will vary with position.
For us to be able to trust the error bars produced by a foreground
removal method, we clearly need a way of quantifying the impact of
such uncertainties in spectral indices. A simple way to do this is of
course to assume some probability distributions for the spectral
indices, make Monte-Carlo realizations, and compute the average of the
actual reconstruction errors obtained when (erroneously) assuming that
the spectral indices are exactly known. However, this is merely a
method of diagnostics, and would not in itself provide a more robust
subtraction scheme. Fortunately, there is quite a simple way of
incorporating such uncertainties into the foreground model
itself, which we will now describe.
\ignore{
The lowermost curve in \fig{MethodsFig} show the reconstruction errors
that would result if there were no synchrotron radiation or free-free
emission. The curve above this one shows the result if merely
free-free emission is absent. Notice that for very low $\ell$, where we
have assumed that the two are of comparable importance at 31 GHz,
neglecting one of the components helps almost as much as neglecting
both of them. This is because here the algorithm fails to break the
degeneracy between the two ($\beta=0.1$ versus $\beta=1.0$), faced
with noise and confusion from the true CMB fluctuations. We can take
advantage of this very effect to incorporate our uncertainties about
spectral indices into our model.} Suppose we have reason to believe
that about half of the synchrotron emission is characterized by
$\beta\approx 1.1$ and half by $\beta\approx 0.7$. We can simply
incorporate this into our model as two separate components with the
same power spectrum $C_\ell$ but with different spectral dependencies
$f_i(\nu)$. More realistically, we may wish to include an allowed
range of spectral indices, reflecting either our lack of knowledge of
their precise values or the presence of several physically distinct
sub-components. In either case, the way to incorporate these ranges
into the analysis is the same. If we wish to put into the model that
$\beta = 0.9\pm 0.2$, for instance, we can simply insert two
synchrotron components, one with $\beta=0.7$ and one with $\beta=1.1$
(or a range of components, if one prefers) and the
multi-frequency subtraction formalism will automatically reflect
our uncertainty in $\beta$ by associating appropriately large
reconstruction errors with synchrotron subtraction.
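A minimal illustration of this bookkeeping is given below, assuming for the sketch that the frequency dependencies are simple power laws $f(\nu)\propto\nu^{-\beta}$ in the units used; only the synchrotron columns of $F$ are shown.
\begin{verbatim}
import numpy as np

nu = np.array([31.5, 53, 90, 125, 143, 217, 353, 545, 857])  # GHz, Table 1

def power_law(beta, nu0=100.0):
    # Assumed power-law frequency dependence, normalized at an arbitrary
    # reference frequency nu0 (our choice for the sketch).
    return (nu / nu0) ** (-beta)

# Replace a single synchrotron column (beta = 0.9) by two columns with
# beta = 0.7 and beta = 1.1, both assigned the same power spectrum C_ell;
# the subtraction formalism then propagates the spread in beta into the
# reconstruction errors automatically.
F_synch = np.column_stack([power_law(0.7), power_law(1.1)])
\end{verbatim}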
\subsection{Satellite specifics}
We conclude this Section with a few more technical results relating to
the impact of COBRAS/SAMBA specifics on the reconstruction errors, the
lowermost heavy curve in \fig{MethodsFig}.
Removing channel 9 makes almost no difference. Removing the three
highest frequency channels produces the thin solid curve in the same figure,
{{\frenchspacing\it i.e.}}, a worsening by a factor of a few over the whole range of
multipoles. Removing all the HEMT channels (channels 1--4) yields
the thin dashed line, and most of this loss of accuracy
is incurred even
if only channel 1 is removed --- basically because this greatly
reduces the ability to subtract out synchrotron radiation from the
higher frequency channels. When all channels are included, the multi-frequency
subtraction scheme is found to place most of the weight on channels 4, 5
and 6. Although the other channels receive considerably smaller
weights, they still help considerably, as these weights are tuned so
as to subtract out the residual foregrounds.
It should be emphasized that although removing a few channels as described
above made a relatively minor difference
when the spectral indices were assumed to be perfectly known,
broad spectral coverage is of course of paramount importance
to ensure that the removal process is robust and can handle
uncertainties in spectral slopes as described in the previous subsection.
\section{DISCUSSION}
We have presented a new method for subtracting foreground
contamination from multi-frequency microwave sky maps, which can be
used to produce accurate CMB maps and estimates of the CMB power
spectrum $C_\ell$.
\subsection{Relation to other subtraction methods}
The method incorporates the noise levels and
subtracts the
foregrounds in the Fourier, or multipole, domain rather than in real
space, thereby exploiting the fact that contaminants such as dust,
point sources, {{\frenchspacing\it etc.}}, tend to have power spectra that differ
substantially from that of the CMB.
Compared to a standard subtraction procedure, it thus
improves the situation in two ways:
\begin{enumerate}
\item The noise levels of the input maps are taken into account.
\begin{itemize}
\item
In a simple subtraction scheme, the
cleaned map is just a particular linear combination of the
input maps at various frequencies, where the weights
take no account of the noise levels. For instance,
if both the IRAS 100 micron map and the DIRBE 140 micron map
were used as dust templates, the relative weight assigned to
the two would be arbitrary in a simple subtraction
scheme but optimized in the approach presented here.
\item
Simple subtraction is merely a special case of our
method applied on a pixel-by-pixel basis, obtained in the
limit of zero noise when $m=n$.
\item
Since any spatial template whatsoever can be included
as a ``channel'', the special case of simple subtraction thus
has no advantages whatsoever over the more
general method presented here.
\end{itemize}
\item The subtraction is performed in Fourier space.
\begin{itemize}
\item Since no phase information is lost by
passing to the Fourier (multipole) domain and back,
this preserves all the information about the various spatial
templates, for instance the location of the Galactic plane.
\item Thus this approach is superior to working in real space
whenever the contaminants have power spectra that differ
substantially from that of the CMB.
\item This will be the case {\it regardless} of what the
true CMB power spectrum turns out to be, since certain contaminants are
known to have power spectra that differ from one another
(for instance, the dust power scales roughly as $\ell^{-3}$,
whereas the point source power scales as $\ell^0$).
\item Although normal Wiener-filtering modifies the
power spectrum of the measured signal, our
prescription in \eq{PowerEstEq4}
(where the row vectors of $W$ are rescaled
so that $(WF)_{ii} = 1$)
does {\it not} have
this undesirable property, which means that our subtraction
scheme is suitable for power spectrum estimation as
well as map-making (we showed that this is only possible when
more than one frequency is available).
\item
With this prescription, the optimal weighting
coefficients in the $W$-matrix are independent of any assumptions
about the CMB power spectrum (in stark contrast
to conventional one-channel Wiener filtering).
The weights only depend on the assumptions
about the foreground power spectra.
\item
It is of course advantageous
to perform the subtraction in Fourier space even
if one does not wish to make any assumptions about
the foreground levels. This model-independent special case of our
method corresponds to simply setting
the noise levels to zero.
\end{itemize}
\end{enumerate}
Either of these two improvements can of course be implemented without the
other.
Traditional subtraction, but mode by mode in Fourier space, will perform better
than if done pixel-by-pixel in real space.
Likewise, if the overall foreground levels are known in many channels,
the result will be better if noise is taken into
account than if it is ignored in the subtraction.
\subsection{Foreground estimates}
To provide a qualitative understanding of how well the method will be able to
tackle the next generation of CMB data, we made rough estimates of the power
spectra of the relevant foregrounds in Section~\ref{ForegroundsSec}.
The results are summarized in \fig{EverythingFig}.
Although these estimates are not intended to be
very accurate, the following qualitative
conclusions appear to be quite robust:
\begin{itemize}
\item
Galactic dust poses a problem mainly at the upper left, corresponding to
large scales and high frequencies.
\item
Synchrotron radiation and free-free emission pose a problem mainly at the lower
left, corresponding to large scales and low frequencies.
\item
Radio point sources are a problem mainly at the lower right,
corresponding to small scales and low frequencies.
\item
If infrared emission from high redshift
galaxies poses a significant problem, it will
be at the upper right,
corresponding to small scales and high frequencies.
\item
Experimental noise and beam dilution are mainly a limitation to the right and
above.
\item
The most favorable frequencies for measuring high multipoles such as the CDM
Doppler peaks are larger (around 100 GHz and above) than the best ones for
probing the largest scales (around 50 GHz).
\end{itemize}
\subsection{An example: COBRAS/SAMBA}
In Section 5, we assessed the effectiveness of the
method using the specifications of the proposed COBRAS/SAMBA satellite
mission and the above-mentioned foreground modeling.
As is seen in \fig{MethodsFig}, our
method provides a gain of more than a factor of ten over the entire
multipole range compared to
subtracting the backgrounds on a pixel-by-pixel basis.
If the frequency dependencies of the foregrounds were perfectly
known and independent of position in the sky (which they are not),
then the method
could recover the CMB multipoles $a_{\ell m}$ out to
about $\ell=10^3$ to an accuracy of about a tenth of a percent.
We also found that our uncertainty about the
spectral indices could be incorporated
into the formalism in a straightforward way, by simply replacing a poorly
understood component
by several components, all with the same power spectra, but with
slightly different spectral indices.
We saw that even with extremely pessimistic assumptions about our
ability to model the
foregrounds, it is likely that an estimate of the CMB power
spectrum $C_\ell$
can be made where the residual foreground contamination is merely a few
percent.
Since
the power spectrum of dust falls off and that of point
sources rises
with $\ell$ (relative to that
of the CMB), the most favorable situation occurs around
$\ell=200$, where even power
accuracies of $1\%$ would be an extremely pessimistic
prediction.
This is fortunate, as this scale coincides with the location of
the first Doppler peak of standard CDM, and accurate determination of its
position (if some variant of CDM is correct)
would provide a direct measurement
of $\Omega$, the cosmological density parameter
(see {\it e.g.} Kamionkowski \& Spergel 1994).
\subsection{The effect of non-Gaussianity}
The case where the foregrounds are Gaussian is in a sense the worst
possible case when it comes to subtracting them.
Since Gaussian random fields are completely specified by their power spectra,
this means that we have no additional information that we can take advantage
of in our attempts at foreground removal.
Since the method we derived was optimized for the Gaussian case,
a natural question to ask is whether one can do still better by making use
of the fact that many foregrounds do in fact exhibit non-Gaussian features.
One simple way to do this is to simply remove spatially localized
contaminants ({\frenchspacing\it e.g.}, the Galactic plane and bright point sources),
as was discussed in Section~\ref{NonGaussSec}.
Making optimal use of non-Gaussian features, however,
is quite difficult in practice, as it generally leads to a
non-linear optimization problem. Even if a numerical solution
were found, one might be left wondering to what extent
poor assumptions about
the precise type of non-Gaussianity might have degraded the final results.
Fortunately, the method we have presented appears to be quite
adequate in practice. As part of the scientific
case for the COBRAS/SAMBA satellite, detailed simulations have been made
where real foreground maps (extrapolated to the relevant frequencies
using the data from IRAS, the Haslam map, {{\frenchspacing\it etc.}}) were added to
simulated CMB data. The simulated real-world maps were then
``cleaned'' with various subtraction
methods. The results (Bouchet {{\frenchspacing\it et al.}} 1996) show that
our method works extremely well, even though the IRAS dust maps
are known to exhibit strong
non-Gaussian features.
\subsection{Outlook: what more do we need to know?}
To be able to better quantify the effectiveness of future CMB missions
and answer questions such as ``what design changes would
improve the ability to remove foregrounds the most?'', it is important that
more accurate measurements of the various foreground power spectra
be made.
Experiments are now becoming so sensitive that it no longer suffices to
summarize each foreground by a single number $\Delta T$
(which usually refers to its average intensity, the monopole)
--- its entire power spectrum is needed, to chart out the
foreground landscape
of \fig{EverythingFig}.
Some of the most urgent outstanding questions are the following:
\begin{itemize}
\item
What is the differential source count of radio point sources between
10 GHz and 100 GHz, {{\frenchspacing\it i.e.}}, how do we normalize their power spectrum
for various flux cuts?
\item
What is the differential source count
of infrared point sources between 50 and 300
GHz, {{\frenchspacing\it i.e.}}, how do we normalize their power spectrum
for various flux cuts?
\item
What is the power spectrum of synchrotron radiation and free-free emission on
small angular scales?
\end{itemize}
If none of the foreground contaminants turn out to be much worse
than we assumed in Section 4,
then the next generation of CMB experiments may indeed allow us to measure
the key cosmological parameters with hitherto unprecedented accuracy.
\vskip 0.5 truein
\noindent
{\bf ACKNOWLEDGMENTS:}
We thank Ted Bunn, Fran\c{c}ois Bouchet,
Anthony Lasenby, Jean-Loup Puget and Ned Wright
for many useful comments and
Gavin Dalton for his help
with the IRAS $100\mu$m and DIRBE maps.
MT acknowledges partial support from European Union contract
CHRX-CT93-0120 and Deutsche Forschungsgemeinschaft
grant SFB-375.
\section{REFERENCES}
\rf Banday, A. J. \& Wolfendale, A. W. 1991;MNRAS;248;705
\rf Becker, R. H., White, R. L. \& Helfand, D. J. 1995;ApJ;450;559
\rf Benn, C.R. \& Wall, J.V. 1995;MNRAS;272;481
\rf Bond, J.R. \& Efstathiou, G., 1987;MNRAS;226;655
\rf Bond, J. R. 1995;Phys. Rev. Lett.;74;4369
\rf Bouchet, F. {{\frenchspacing\it et al.}} 1995;Space Science Rev.;74;37
\pp Bouchet, F. {{\frenchspacing\it et al.}} 1996, in preparation.
\rf Brandt, W. N. {{\frenchspacing\it et al.}} 1994;ApJ;424;1
\rf Bunn, E. F. {{\frenchspacing\it et al.}} 1994;ApJ;432;L75
\rf Bunn, E. F., Scott, D. \& White, M. 1995;ApJ;441;L9
\rf Bunn, E. F. \& Sugiyama, N. 1995;ApJ;446;49
\rf de Bernardis, P., Masi, S. \& Vittorio, N. 1991;ApJ;382;515
\rf Dodelson, S. \& Stebbins, A. 1994;ApJ;433;440
\rf Fisher, K. B. {{\frenchspacing\it et al.}} 1995;MNRAS;272;885
\rf Franceschini, A. {{\frenchspacing\it et al.}} 1989;ApJ;344;35
\rf Franceschini, A. {{\frenchspacing\it et al.}} 1991;A\&A Supp.;89;285
\rf G\'orski, K. M. 1994;ApJ;430;L85
\rf Guarini, G., Melchiorri, B. \& Melchiorri, F. 1995;ApJ;442;23
\rf Haslam, C. G. T. {{\frenchspacing\it et al.}} 1982;A\&A Supp.;47;1
\rf Hu, W. \& Sugiyama, N. 1995;ApJ;436;456
\rf Jungman, G., Kamionkowski, M., Kosowsky, A. \& Spergel, D. N. 1996;Phys. Rev. Lett.;76;1007
\rf Kamionkowski, M. \& Spergel, D. N. 1994;ApJ;431;1
\rf Lahav, O. {{\frenchspacing\it et al.}} 1994;ApJ;423;L93
\rf Low, F. J. \& Cutri, R. M. 1994;Infrared Phys. \& Technol.;35;291
\rf Mandolesi, N. {{\frenchspacing\it et al.}} 1995;Planetary \& Space Science;43;1459
\rf Mather, J. C. {{\frenchspacing\it et al.}} 1994;ApJ;420;439
\rf Neugebauer, G. {{\frenchspacing\it et al.}} 1984;ApJ;278;L1
\rf O'Sullivan, C. {{\frenchspacing\it et al.}} 1995;MNRAS;274;861
\rf Reach, W. T. {{\frenchspacing\it et al.}} 1995;ApJ;451;188
\rf Rybicki, G. B. \& Press, W. H. 1992;ApJ;398;169
\rf Scott, D., Silk, J. \& White, M. 1995;Science;268;829
\rf Smoot, G.F. {{\frenchspacing\it et al.}} 1992;ApJ;396;L1
\rf Tegmark, M. \& Bunn, E. F. 1995;ApJ;455;1
\rf Tegmark, M. 1996;MNRAS;280;299
\pp Toffolatti, L. {{\frenchspacing\it et al.}} 1995, in {\it 1993 Capri Workshop on the cosmic
microwave background},
{\it Astrophys. Lett \& Comm.}, in press.
\rf White, M. \& Srednicki, M. 1995;ApJ;443;6
\rf White, M., Scott, D. \& Silk, J. 1994;ARA\&A;32;319
\rf Wright, E. L. {{\frenchspacing\it et al.}} 1991;ApJ;381;200
\rf Zaroubi, S. {{\frenchspacing\it et al.}} 1995;ApJ;449;446
\clearpage
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{experiments_bw.ps}}
\caption{Where various CMB experiments are sensitive.}
The boxes roughly indicate the range of multipoles $\ell$ and frequencies
$\nu$ probed by various CMB experiments. The large heavy unshaded box
corresponds to the proposed COBRAS/SAMBA satellite.
\label{ExperimentsFig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{cmbB.ps}}
\caption{How the CMB brightness fluctuations depend
on multipole and frequency in the standard CDM model
(assuming scale-invariant scalar
fluctuations in a critical density, $\Omega=1$,
universe, with Hubble constant $H_0 = 50{\rm km}{\rm s}^{-1}{\rm
Mpc}^{-1}$, and baryon density $\Omega_b = 0.05$). The CDM power
spectrum was computed as described by Bond \& Efstathiou (1987).}
\label{CMB_B_Fig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{cmb.ps}}
\caption{How the CMB temperature fluctuations depend
on multipole and frequency for the CDM model with parameters
as in Figure 2.}
\label{CMB_T_Fig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{dust.ps}}
\caption{Model for how the dust power spectrum
depends on multipole and frequency.}
\label{DustFig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{pointsources.ps}}
\caption{Model for how the power spectrum of
point sources depends on multipole and frequency.}
\label{pointsourcesFig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{noise.ps}}
\caption{How the COBRAS/SAMBA pixel noise
power depends on multipole and frequency.}
\label{NoiseFig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{flux.ps}}
\caption{The VLA FIRST
differential source count
for radio sources at 1.5 GHz.}
\label{FluxFig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{radioCl.ps}}
\caption{Dependence of radio source fluctuations on flux cut.}
\label{RadioClFig}
The average brightness (monopole, upper curve) and the power spectrum
normalization (lower curve) at 1.5 GHz
from the VLA FIRST survey is plotted
as a function of flux cut.
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{channel31.ps}}
\caption{Model power spectra of various components at 31 GHz.} From
the bottom up, on the left hand side, the curves correspond to
pixel noise, radio point sources, dust, synchrotron radiation,
free-free emission, all sources combined (heavy), and CMB (shaded)
according to the CDM model plotted in Figures 2 and 3.
\label{Channel31Fig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{channel90.ps}}
\caption{Model power spectra of various components at 90 GHz.} From
the bottom up, on the left hand side, the curves correspond to
pixel noise, radio point sources, synchrotron radiation,
free-free emission, dust, all sources combined (heavy),
and CMB (shaded).
\label{Channel90Fig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{channel217.ps}}
\caption{Model power spectra of various components at 217 GHz.} From
the bottom up, on the left hand side, the curves correspond to
pixel noise,
radio point sources, synchrotron radiation,
free-free emission, CMB (shaded), dust,
and all sources combined (heavy).
\label{Channel217Fig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{multipoles.ps}}
\caption{Total contamination at selected
multipoles.}
The total fluctuations of all foregrounds excluding (solid lines) and including
(dashed lines) COBRAS/SAMBA pixel noise are plotted for
the multipoles 10, 300 and 1000. The heavy horizontal line corresponds to
COBE-normalized Sachs-Wolfe fluctuations in the CMB.
\label{MultipolesFig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{methods.ps}}
\caption{Comparison of methods}
The combined residual contribution of all foregrounds and noise
is plotted
for different approaches to foreground subtraction. From
top to bottom, the four labeled curves correspond to
(1) use of the 143 GHz channel with no subtraction, (2) use of the best
COBRAS/SAMBA channel at each multipole with no subtraction,
(3) optimized subtraction on a pixel-by-pixel basis, and
(4) the optimal subtraction technique.
The uppermost curve (shaded) is a standard CDM power spectrum
as plotted in earlier figures.
The two thin curves at the bottom correspond to reducing
the number of
channels as described in the text.
\label{MethodsFig}
\end{figure}
\newpage
\begin{figure}[phbt]
\centerline{\epsfxsize=17cm\epsfbox{everything_bw.ps}}
\caption{Where various foregrounds dominate}
The shaded regions indicate where the various foregrounds cause
fluctuations exceeding those of COBE-normalized scale-invariant
fluctuations ($\approx\sqrt{2}\times 20{\rm \mu K}$),
thus posing a substantial challenge
to estimation of genuine CMB fluctuations.
They correspond to dust (top), free-free emission (lower left),
synchrotron radiation (lower left, vertically shaded),
radio point sources (lower right) and COBRAS/SAMBA
instrumental noise and beam dilution (right).
The heavy dashed line shows the frequency where the total foreground
contribution to each multipole is minimal.
The boxes indicate roughly the range of multipoles $\ell$ and frequencies
$\nu$ probed by various CMB experiments, as in \fig{ExperimentsFig}.
\label{EverythingFig}
\end{figure}
\end{document}
\bye
\section{Introduction}
{\it Motivation\/} --
Nuclear $\beta$ decay has played a central role in the development of the electroweak sector of the
Standard Model (SM) of particle physics. The discovery of parity violation in the $\beta$ decay of
$^{60}$Co~\cite{Lee56,Wu57} led to the ``$V-A$'' theory of the weak interaction, and subsequently
to the understanding that $\beta$ decay is mediated by the $W$-boson that couples to left-handed
fermions. Present-day $\beta$-decay experiments search for deviations from the SM due to ``non-$V-A$''
currents, resulting for instance from the exchange of charged vector bosons that couple to right-handed
fermions or of charged scalar bosons~\cite{Her01,Sev11}.
In Ref.~\cite{Noo13a} we proposed that $\beta$-decay experiments offer interesting opportunities to
test the validity of Lorentz invariance of the weak interaction ({\em cf.} also Ref.~\cite{Dia13}). The search
for violations of Lorentz invariance is nowadays motivated by attempts to unify the SM with general
relativity~\cite{Lib09}. Some of these theories of ``quantum gravity'' predict Lorentz-violating signals that
could be detectable in high-precision experiments at low energy. The results of some recent searches
are reported in Refs.~\cite{Ben08,Alt09,Bro10,Smi11,Hoh13}. However, the experimental evidence for Lorentz,
and in particular rotational, invariance of the weak interaction, which has a bad track record for obeying
symmetries, is surprisingly poor, as pointed out already in Ref.~\cite{Lee56}.
In order to guide and interpret such $\beta$-decay experiments, we developed a general approach
to the violation of Lorentz invariance in neutron and allowed nuclear $\beta$ decay~\cite{Noo13a}.
The effective Lorentz-violating Hamiltonian density is given by the current-current interaction
\begin{equation}
\mathcal{H}_\beta = (g^{\mu\nu}+\chi^{\mu\nu})\left[\bar{\psi}_p(x)\gamma_\mu
(C_V+C_A\gamma_5)\psi_n(x)\right]
\left[\bar{\psi}_e(x)\gamma_\nu(1-\gamma_5)\psi_\nu(x)\right]+{\rm H.c.} \ ,
\label{eq:hamiltonian}
\end{equation}
where $g^{\mu\nu}$ is the Minkowski metric and $\chi^{\mu\nu}$ is a complex, possibly
momentum-dependent, tensor that parametrizes Lorentz violation. $C_V$ and $C_A$ are the conventional
vector and axial-vector coupling constants and H.c. denotes Hermitian conjugation. As shown in
Ref.~\cite{Noo13a}, this approach includes a wide class of Lorentz-violating effects, such as contributions
from a modified $W$-boson propagator
$\left\langle W^{\mu+}W^{\nu-}\right\rangle = -i\,(g^{\mu\nu}+\chi^{\mu\nu})/M_W^2$,
but also contributions from a Lorentz-violating vertex $-i\gamma_{\nu}(g^{\mu\nu}+\chi^{\mu\nu})$.
Measurements of $\beta$-decay observables provide limits on the values of the components of
the tensor $\chi^{\mu\nu}$. Moreover, such limits can be translated into bounds on the parameters
of the Standard Model Extension (SME)~\cite{Col98,Kos11}, which provides the most general
effective field theory framework for the spontaneous breaking of Lorentz and CPT symmetry.
An experiment to test Lorentz violation in the Gamow-Teller $\beta$ decay of polarized $^{20}$Na
was recently performed at KVI, Groningen~\cite{Wil13,Mul13}. This is the first dedicated experiment
on allowed $\beta$ decay. However, several years before the $W$-boson was discovered and long
before searches for Lorentz violation became fashionable, two isolated experiments were performed
that searched for a ``preferred'' direction in space in first-forbidden $^{90}$Y $\beta$ decay~\cite{New76,Wie72}
and in first-forbidden $^{137}$Cs and second-forbidden $^{99}$Tc $\beta$ decays~\cite{Ull78}. The
hope was that such forbidden decays would be more sensitive to violations of rotational invariance,
{\it i.e.} angular-momentum conservation. We have revisited these experiments and interpreted them
within our effective field theory framework. For that reason, we have extended the approach of
Ref.~\cite{Noo13a} to forbidden $\beta$ decays. The technical details can be found in Ref.~\cite{Noo13b}.
In this Letter, we show that the experiments of Refs.~\cite{New76,Wie72,Ull78} provide strong and
unique bounds on Lorentz violation in the weak interaction, and in particular on previously
unconstrained parameters of the SME.
{\it Forbidden $\beta$ decays} --
Since nuclear states are characterized by spin and parity, it is customary to expand the lepton
current in the $\beta$-decay matrix element in multipoles. Compared to the multipole expansion
of the photon field in the atomic case this expansion is complicated, because both vector
and axial-vector currents contribute and two relativistic particles are involved, for which only the
total angular momentum of each particle is a good quantum number. Moreover, the $\beta$ particle
moves in the Coulomb field of the daughter nucleus. The lowest-order terms in the multipole expansion
correspond to the allowed approximation, which amounts to evaluating the lepton current at $r=0$
and neglecting relativistic effects for the nucleus. This implies that neither of the leptons
carries off orbital angular momentum.
Higher-order terms in the expansion correspond to forbidden transitions~\cite{Kon66,Wei61},
which are suppressed by one or more of the following small dimensionless quantities: $pR$, where
$p$ is the lepton momentum and $R$ the nuclear radius (this corresponds to the ratio of the nuclear
radius and the de Broglie wavelength of the lepton), $v_N$, the velocity of the decaying nucleon
in units of $c$, and $\alpha Z$, the fine-structure constant times the charge of the daughter nucleus.
The lowest power of these quantities that appears in the amplitude determines the degree in
which the transition is forbidden. The transitions are classified by the nuclear-spin change
$\Delta I=|I_i-I_f|$ and relative parity $\pi_i\pi_f=\pm1$ (parity change {\em no} or {\em yes}),
where $I_i$, $\pi_i$ and $I_f$, $\pi_f$ are the spins and parities of the parent and daughter
nucleus, respectively. $n$-times forbidden transitions with $\Delta I =n+1$ are called unique.
Such unique forbidden transitions are advantageous, since they depend on only one nuclear matrix
element, which cancels in the asymmetries that quantify Lorentz violation.
In Ref.~\cite{Noo13b} we derived the multipole expansion for the Lorentz-violating $\beta$-decay
Hamiltonian of Eq.~(\ref{eq:hamiltonian}). Because the tensor $\chi^{\mu\nu}$ contracts the nucleon
and lepton currents in an unconventional way, the possibility arises that angular momentum is no
longer conserved in the transition. In particular, it is now possible that $\Delta I = J+1$ for $\chi^{0k}$
and $\chi^{k0}$ and that $\Delta I = J+2$ for $\chi^{km}$, where $J$ is the total angular momentum
of the leptons and Latin superscripts run over space indices. In contrast, rotational invariance implies that
$\Delta I \le J$. At the same time, however, the suppression of the transitions is still for the most part
determined by the angular momentum of the leptons. Due to this, the parts of $\chi^{\mu\nu}$ that
connect to the spin-dependent nucleon current ($\chi^{k0}$ and $\chi^{km}$) can be enhanced by
a factor $\alpha Z/pR$ with respect to the Lorentz-symmetric contributions. This enhancement factor
occurs only in transitions with $\Delta I\ge 2$, {\it i.e.} starting from unique first-forbidden transitions.
{\it Analysis of the old experiments\/} --
In Ref.~\cite{New76} the $\beta$-decay chain
$^{90}$Sr$(0^+,28.8\,a)$ $\rightarrow$ $^{90}$Y$(2^-,64.1\,h)$ $\rightarrow$ $^{90}$Zr$(0^+)$
was considered, wherein the $\beta^-$ decay of $^{90}$Y is a $\Delta I = 2$, {\em yes}, unique
first-forbidden transition. A search was made for dipole and quadrupole anisotropies in the angular
distribution of the electrons,
\begin{equation}
W(\theta) = W_0\left( 1 + \varepsilon_1\cos\theta + \varepsilon_2\cos^2\theta \right) \ ,
\label{eq:distribution}
\end{equation}
where $\theta$ is the angle between the electron momentum and a presumed preferred direction
in space. A 10 Ci $^{90}$Sr source was put in a vacuum chamber and the electron current it produced
was measured on a collector plate opposite the source, giving a solid angle of nearly 2$\pi$. The
source was made such that only high-energy electrons could come out, assuring that only the
current due to $^{90}$Y was measured. The endpoint of $^{90}$Sr is too low to contribute significantly
to the current for this particular source. The chamber rotated about a vertical axis with a frequency of 0.75 Hz.
An anisotropy would result in a modulation of the detected current with a frequency of 0.75 or 1.5 Hz,
depending on the dipole or quadrupole nature of the anisotropy.
\begin{table}[t]
\centering
\setlength{\tabcolsep}{10pt}
\begin{tabular}{c|r|r|r}
\hline\hline
Asymmetry $\delta$ & $10^8\times a_0$ & $10^8\times a_1$ & $10^8\times a_2$ \\
\hline
$N\!S$ & $-1.9\pm1.0$ & $3.2\pm1.9$ & $1.7\pm1.9$ \\
$EW$ & $\;\;\:1.1\pm1.0$ & $2.9\pm1.9$ & $1.9\pm1.9$ \\
2$\nu$ & $-1.0\pm1.0$ & $0.5\pm1.7$ & $0.7\pm1.7$ \\
\hline\hline
\end{tabular}
\caption{The measured values~\cite{New76} for $a_{0,1,2}$ of Eq.~(\ref{eq:delta}).}
\label{tab:limits}
\end{table}
The data were analyzed in terms of two dipole current asymmetries,
\begin{equation}
\delta_{N\!S} = 2\,\frac{i_N-i_S}{i_N+i_S} \ , \;\;\;\;\;
\delta_{EW} = 2\,\frac{i_E-i_W}{i_E+i_W} \ ,
\end{equation}
and one quadrupole asymmetry,
\begin{equation}
\delta_{2\nu} = 2\,\frac{i_N+i_S-i_E-i_W}{i_N+i_S+i_E+i_W} \ ,
\end{equation}
where $N$, $S$, $E$, $W$ mean north, south, east, and west, and where for instance
$i_N$ denotes the mean current in the lab-fixed northern quadrant of the chamber's
rotation. These current asymmetries $\delta$ were fitted as functions of sidereal time as
\begin{equation}
\delta = a_0 + a_1\sin(\omega t+\phi_1) + a_2\sin(2\omega t+\phi_2) \ ,
\label{eq:delta}
\end{equation}
where $\omega$ is the angular rotation frequency of the Earth. The extracted coefficients
$a_{0,1,2}$ are given in Table~\ref{tab:limits}. Relative phases between the different
asymmetries were not considered and the phases $\phi_{1,2}$ of the amplitudes
were not reported. Such relations would have provided stronger constraints on Lorentz violation.
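The fit of Eq.~(\ref{eq:delta}) is linear once the two sidereal harmonics are written in quadratures, so the amplitudes $a_{0,1,2}$ follow from ordinary least squares; a minimal Python sketch (with placeholder data in place of the measured currents) is:
\begin{verbatim}
import numpy as np

omega = 2 * np.pi / 86164.0905                 # sidereal frequency [rad/s]
t = np.linspace(0.0, 5 * 86164.0, 2000)        # placeholder time stamps
rng = np.random.default_rng(2)
delta = rng.normal(0.0, 1e-8, t.size)          # placeholder asymmetry data

# delta = a0 + a1 sin(w t + phi1) + a2 sin(2 w t + phi2), rewritten as
# a_k sin(k w t + phi_k) = c_k sin(k w t) + s_k cos(k w t).
A = np.column_stack([np.ones_like(t),
                     np.sin(omega * t), np.cos(omega * t),
                     np.sin(2 * omega * t), np.cos(2 * omega * t)])
coef, *_ = np.linalg.lstsq(A, delta, rcond=None)
a0, a1, a2 = coef[0], np.hypot(coef[1], coef[2]), np.hypot(coef[3], coef[4])
\end{verbatim}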
By using Eq.~\eqref{eq:distribution} the expressions for $a_{0,1,2}$ were determined. Scattering
inside the source, due to which the emission direction of the electrons gets partly lost when they leave
the sample, had to be taken into account. With a Monte-Carlo program the probability distribution
to detect an electron was determined, depending on the angle of its original direction with respect
to the normal of the source. This probability distribution was then folded with Eq.~\eqref{eq:distribution}.
The result for the current as a function of the angle $\theta_n$ between the direction of the collector
plate and the presumed asymmetry axis reads~\cite{Corr}
\begin{equation}
I(\theta_n) = I_0 \left[ 1+ \frac{C_1}{3+C_2}\,\varepsilon_1\cos\theta_n
+ \frac{C_2}{15+5C_2}\,\varepsilon_2\cos2\theta_n \right] \ ,
\label{eq:probdistribution}
\end{equation}
with $C_1=1.26$ and $C_2=0.39$. After transforming this equation to a standard
Sun-centered reference frame, the upper limits $|\varepsilon_1| < 1.6\!\times\!10^{-7}$ and
$|\varepsilon_2| < 2.0\!\times\!10^{-6}$ were determined at 90\% confidence level (C.L.)~\cite{New76}.
We interpret the data in Table~\ref{tab:limits} by using the new formalism~\cite{Noo13a}, in
which the differential decay rate for a unique first-forbidden transition is given by~\cite{Noo13b}
\begin{equation}
\frac{dW}{d\Omega dE} \propto p^2 + q^2 + p^2 \,
\frac{\alpha Z}{pR} \left[\frac{3}{10}\frac{p}{E}\left(\chi_r^{ij}\hat{p}^i\hat{p}^j-\tfrac{1}{3}\chi_r^{00}\right)
- \frac{1}{2}\tilde{\chi}_i^l \hat{p}^l + \chi_r^{l0}\hat{p}^l\right] \ ,
\label{decayrspch2}
\end{equation}
where $p$ and $E$ are the electron momentum and energy and $q=E_0-E$ is the neutrino momentum,
with $E_0$ the energy available in the decay. The proportionality factor contains phase space and one
``$\beta$ moment''~\cite{Kon66}, a matrix element that depends on the nuclear structure.
The subscripts $r$ and $i$ on the Lorentz-violating tensor indicate the real and imaginary part of $\chi^{\mu\nu}$,
respectively, and $\tilde{\chi}^l=\epsilon^{lmk}\chi^{mk}$. The Lorentz-invariant part of the decay rate has
the typical unique first-forbidden spectrum shape $\sim p^2+q^2$. The Lorentz-violating part scales with
$\alpha Z/pR$, which proves that forbidden transitions can be more sensitive to angular-momentum violation,
compared to allowed ones. The enhancement is about one order of magnitude for a typical transition.
\begin{table*}[tb]
\centering
\begin{tabular}{c|c|c|c}
\hline\hline
Asymmetry $\delta$ & $a_0$ & $a_1$ & $a_2$ \\
\hline
$N\!S$ & $-1.3\left[2X_r^{30}-\tilde{X}_i^3\right]$ &
$1.4\left[\left(2X_r^{20}-\tilde{X}_i^2\right)^2 + \left(2X_r^{10}-\tilde{X}_i^1\right)^2\right]^{1/2}$ & $0$ \\
$EW$ & $-0.63\left[2X_r^{30}-\tilde{X}_i^3\right]$ &
$1.8\left[\left(2X_r^{20}-\tilde{X}_i^2\right)^2 + \left(2X_r^{10}-\tilde{X}_i^1\right)^2\right]^{1/2}$ & $0$ \\
2$\nu$ & $0.0090\left[3X_r^{33}-X_r^{00}\right]$ &
$0.031\left[\left(X_r^{13}+X_r^{31}\right)^2 + \left(X_r^{23}+X_r^{32}\right)^2\right]^{1/2}$ &
$0.033\left[\left(X_r^{12}+X_r^{21}\right)^2 \right.$ \\
& & & \multicolumn{1}{r}{$\left.+\left(X_r^{22}-X_r^{11}\right)^2\right]^{1/2}$}\\
\hline\hline
\end{tabular}
\caption{The theoretical predictions from Eq.~\eqref{decayrspch2} for $a_{0,1,2}$ of Eq.~(\ref{eq:delta}).}
\label{tab:theory}
\end{table*}
With Eq.~\eqref{decayrspch2} instead of Eq.~\eqref{eq:distribution}, the remaining part of the analysis
parallels the analysis of Ref.~\cite{New76} summarized above. It requires a simulation of the
electron trajectories with the modified weight of the Lorentz-violating part of the expression, which, however,
would entail a small modification of the original simulation. Therefore, instead, we integrate the Lorentz-violating
part of Eq.~\eqref{decayrspch2} over the energy of the detected electrons, including the energy-dependent
phase-space factor $\propto pEq^2 F(Z,E)$, with $F(Z,E)$ the Fermi function. We integrate over the top
23.4\% of the energy spectrum, since the detector covered a $2\pi$ solid angle and 11.7\% of the electrons
escaped the source and were collected~\cite{New76}. With this simplified procedure we may do the angular
folding of Eq.~\eqref{decayrspch2} with the original detection probability distribution. We checked that
the limits derived below are not affected by more than 4\% when changing the integrated fraction of
the energy spectrum by a factor two.
We transform the tensor $\chi^{\mu\nu}$, defined in the laboratory frame, to the tensor $X^{\mu\nu}$ defined
in the Sun-centered frame~\cite{Kos11}, by using $\chi^{\mu\nu} = R^{\mu}_{\;\;\rho}R^{\nu}_{\;\;\sigma}X^{\rho\sigma}$,
with $R$ the appropriate rotation matrix~\cite{Noo13a}. In this way, we obtain the theoretical expressions for
the coefficients $a_{0,1,2}$ listed in Table~\ref{tab:theory}. The numerical factors in front of the coefficients in
Table~\ref{tab:theory} are determined by the location of the experiment on Earth (the colatitude of New York
is about $49^\circ$), the constants $C_1$ and $C_2$, and two phase shifts in the amplifier used in the
experiment~\cite{New76}. The phase correlation between $\delta_{N\!S}$ and $\delta_{EW}$ that our
theory predicts was not measured. Therefore, $2X_r^{10}-\tilde{X}_i^1$ and $2X_r^{20}-\tilde{X}_i^2$
cannot be extracted separately and only the combined value can be determined. The phase shifts of the
amplifier are the reason that $a_0 \neq 0$ for $\delta_{EW}$.
Comparing the experimental values in Table~\ref{tab:limits} to the theoretical predictions in
Table~\ref{tab:theory}, we derive the following limits on the Lorentz-violating coefficients at 95\% C.L.:
\begin{subequations}
\begin{eqnarray}
-6\!\times\!10^{-9}<2X_r^{30}-\tilde{X}_i^3 & < & 2\!\times\!10^{-8}\ , \label{o1} \\
-3\!\times\!10^{-6}<3 X_r^{33}- X_r^{00} & < & 1\!\times\!10^{-6}\ , \label{o2} \\
\left[(2X_r^{20}-\tilde{X}_i^2)^2+(2X_r^{10}-\tilde{X}_i^{1})^2\right]^{1/2} & < & 4\!\times\!10^{-8} \ , \label{c1} \\
\left[(X_r^{13}+X_r^{31})^2 + (X_r^{23}+X_r^{32})^2\right]^{1/2} & < & 1\!\times\!10^{-6}\ , \\
\left[(X_r^{12}+X_r^{21})^2 + (X_r^{22}-X_r^{11})^2\right]^{1/2} & < & 1\!\times\!10^{-6}\ . \label{c2}
\end{eqnarray}
\label{bounds}
\end{subequations}
The limit of Eq.~(\ref{c1}) was obtained from $\delta_{EW}$ only, because of the phase correlation
between $\delta_{N\!S}$ and $\delta_{EW}$.
In Ref.~\cite{Ull78} a similar search was made in the $\Delta I = 2$, {\em yes}, unique first-forbidden
$\beta^-$ decay $^{137}$Cs$(\frac{7}{2}^+,30.2\,a)$ $\rightarrow$ $^{137m}$Ba$(\frac{11}{2}^-)$,
and in the $\Delta I = 2$, {\em no}, second-forbidden $\beta^-$ decay
$^{99}$Tc$(\frac{9}{2}^+,2.1\times10^5\,a)$ $\rightarrow$
$^{99}$Ru$(\frac{5}{2}^+)$, by looking for a modulation of the counting rate as a function of sidereal time.
An upper limit for a $\cos\theta$ or $\cos2\theta$ term of $3\!\times\!10^{-5}$ was found at 90\% C.L.
Although the accuracy was less than in the $^{90}$Y experiment, the setup in this experiment
had a higher angular resolution. For $^{137}$Cs decay Eq.~\eqref{decayrspch2} also applies,
while the expression for $^{99}$Tc decay is given by~\cite{Noo13b}
\begin{equation}
\frac{dW}{d\Omega dE} \propto \mathcal{M}_{3/2}^2 p^2 + \mathcal{M}_{1/2}^2 q^2
+ \ \mathcal{M}_{3/2}\mathcal{W}\,p^2\,\frac{\alpha Z}{pR}
\left[\frac{3}{10}\frac{p}{E}\left(\chi_r^{ij}\hat{p}^i\hat{p}^j-\tfrac{1}{3}\chi_r^{00}\right)
- \frac{1}{2}\tilde{\chi}_i^l \hat{p}^l + \chi_r^{l0}\hat{p}^l\right] \ ,
\label{decayrspch2b}
\end{equation}
where $\mathcal{M}_{1/2}$ and $\mathcal{M}_{3/2}$ depend on three $\beta$ moments~\cite{Kon66} and
$\mathcal{W}=2\mathcal{M}_{3/2}-\mathcal{M}_{1/2}$. We use $\mathcal{M}_{3/2}/\mathcal{M}_{1/2}=0.735$,
such that the spectrum shape $\sim 0.54\,p^2+q^2$~\cite{Lip66,Rei74}.
The experimental setup in Ref.~\cite{Ull78} was made to measure electrons in two directions. In one direction
perpendicular to the Earth's rotation axis a count rate $N_S$ was measured, and in the direction parallel to the
Earth's rotation axis a count rate $N_P$. The observable $\mathcal{A}=N_S/N_P-1$ was then inspected
for sidereal variations. By using the expressions for the Lorentz-violating decay rates of $^{137}$Cs and
$^{99}$Tc, given in Eqs.~\eqref{decayrspch2} and \eqref{decayrspch2b}, we obtain an expression similar
to Eq.~(\ref{eq:delta}) for the observable $\mathcal{A}$. The amplitudes are proportional to the same
combinations of Lorentz-violating coefficients as found previously for $^{90}$Y.
Here $a_0$ is a combination of the terms found in the first column of Table~\ref{tab:theory}. The terms for $a_1$
can be separated to obtain the individual terms $2X_r^{10}-\tilde{X}_i^1$ and $2X_r^{20}-\tilde{X}_i^2$. Similarly
this can be done for $a_2$. The proportionality constants are larger than for the $^{90}$Y experiment. In particular,
for the terms found previously in the quadrupole asymmetry the sensitivity of this setup is a factor 10 to 100 higher.
However, the statistical accuracy in this experiment is much lower and the improvements on the bounds of
Eq.~(\ref{bounds}) are insignificant. The best case is $|3X_r^{33}-X_r^{00}|<8\!\times\!10^{-6}$ at 95\% C.L.,
instead of the bound $3\!\times\!10^{-6}$ of Eq.~(\ref{o2}).
{\it Discussion and outlook\/} --
By using experiments on forbidden $\beta$ decay, we have set strong limits on Lorentz
violation in the weak interaction, in particular on the tensor $\chi^{\mu\nu}$ that modifies
the $W$-boson propagator. The general bounds of Eq.~\eqref{bounds} can be translated
into bounds on SME parameters~\cite{ColPhD,Cam12}, in terms of which~\cite{Noo13a}
\begin{equation}
\chi^{\mu\nu} = - k_{\phi\phi}^{\mu\nu} - i\,k_{\phi W}^{\mu\nu}/2g \ ,
\end{equation}
when we assume that $\chi^{\mu\nu}$ is momentum independent~\cite{Noo13b}; $g$ is the
SU(2) electroweak coupling constant. Since $k_{\phi\phi}$ has a real symmetric component
$k_{\phi\phi}^S$ and an imaginary antisymmetric component $k_{\phi\phi}^A$, while $k_{\phi W}$
is real and antisymmetric, we derive at 95\% C.L. the bounds:
\begin{subequations}
\begin{eqnarray}
-5\times 10^{-9} < (k^S_{\phi\phi})^{ZT},
(k^A_{\phi\phi})^{Y\!X}, (k_{\phi W})^{Y\!X} &<& 1\times 10^{-8}\ , \\
-1\times 10^{-6} < (k^S_{\phi\phi})^{Z\!Z} &<& 4 \times 10^{-7}\ , \\
-1\times 10^{-6} < (k^S_{\phi\phi})^{TT} &<& 3\times 10^{-6}\ , \\
\left|(k^S_{\phi\phi})^{X\!X}\right|, \left|(k^S_{\phi\phi})^{YY}\right| &<& 1\times 10^{-6}\ , \\
\left|(k^S_{\phi\phi})^{XT}\right|, \left|(k^S_{\phi\phi})^{YT}\right|, \left|(k^A_{\phi\phi})^{X\!Z}\right|,
\left|(k^A_{\phi\phi})^{Y\!Z}\right|,
\left|(k_{\phi W})^{X\!Z}\right|, \left|(k_{\phi W})^{Y\!Z}\right| &<& 2 \times 10^{-8}\ , \\
\left|(k^S_{\phi\phi})^{XY}\right|, \left|(k^S_{\phi\phi})^{X\!Z}\right|, \left|(k^S_{\phi\phi})^{Y\!Z}\right| &<& 5\times 10^{-7} \ .
\end{eqnarray}
\label{smebounds}
\end{subequations}
We assumed that there are no cancellations between different parameters, {\it i.e.} when deriving
a bound on one parameter, the others were set to zero. With that caveat, Eq.~(\ref{smebounds}) provides
the first strict direct bounds on these SME parameters in the electroweak sector. For the components
$\chi^{\mu\nu}_r+\chi^{\nu\mu}_r$, they improve recent bounds from pion decay~\cite{Alt13} by three
orders of magnitude. (Indirect bounds were previously obtained for some of these parameters~\cite{And04}.
The validity of these indirect bounds is addressed in Ref.~\cite{Alt13}.)
In order to improve on our bounds, a more sensitive $\beta$-decay experiment of the type performed
in Refs.~\cite{New76,Wie72,Ull78} could be designed. With theory input and by exploiting modern
detector systems a number of the drawbacks of these pioneering experiments can be overcome.
However, to reach their precision level will require long-running experiments with high-intensity sources.
Forbidden $\beta$ transitions with low $E_0$ are preferred, as seen from Eq.~(\ref{decayrspch2}) and
because of radiation safety.
We have shown that $\beta$ decays with a higher degree of forbiddenness do not further enhance
Lorentz violation~\cite{Noo13b}. To obtain direct bounds on all the components $\chi^{\mu\nu}_{r,i}$
one may consider measurements of allowed $\beta$ transitions~\cite{Noo13a}. Both the degrees of
freedom of the $\beta$ particle and of the parent spin can be used. The KVI experiment~\cite{Wil13,Mul13}
measures $\tilde{\chi}^l_i$ in $\beta$ decay of polarized nuclei. $\beta^-$ emitters that populate the
ground state of the daughter nucleus are preferred. Sources like $^{32,33}$P, $^{35}$S, $^{45}$Ca,
or $^{63}$Ni are excellent options. An alternative is to measure semileptonic decays of hadrons,
in particular when they are produced in high-energy facilities and decay in flight: The boost then
provides an enhancement of order $\gamma^2$, where $\gamma$ is the Lorentz factor. Finally,
orders of magnitudes could be gained at a future $\beta$-beam facility~\cite{Lin10}, where
high-intensity high-energy radioactive beams decay to produce electron-neutrino beams.
{\it Acknowledgments\/} --
We are much indebted to Professor Riley Newman for clarifying communications about his
experiment~\cite{New76} and for sending us Ref.~\cite{Wie72}. We acknowledge an exchange
with Professor Jack Ullman about Ref.~\cite{Ull78}. This research was supported by the
Stichting voor Fundamenteel Onderzoek der Materie (FOM) under Programmes 104 and 114
and project 08PR2636.
|
\section{Introduction}
It is well known that Noether's theorem establishes a close
connection between symmetries and conservation laws for PDEs
possessing a variational structure \cite{Olver,blu11}.
However, the application of Noether's approach relies on the
following two conditions, which heavily restrict the construction of
conservation laws in this way:
(1) The PDEs under consideration must be derivable from a variational
principle, i.e. they must be Euler-Lagrange equations.
(2) The symmetries used must leave the variational integral
invariant, which means that not every symmetry of the PDEs can
generate a conservation law via Noether's theorem. Note that the
symmetry mentioned here and below refers to the generalized symmetries of
the PDEs.
Thus many approaches have been developed to get around the limitations of
Noether's theorem
\cite{lap,ste,km-2000,kara-2006,sb-2002a,sb-2002b}. Recently,
Ibragimov introduced the concept of nonlinear self-adjointness
\cite{ib-2011}, which unifies three previously introduced subclasses and
provides a feasible method for constructing conservation laws of PDEs
\cite{nh-2006,ib-2007,gan-2011,Fs-2012,tt-2012}. The two ingredients required
by this approach are the symmetries admitted by the PDEs
and the differential substitutions which convert nonlocal
conservation laws into local ones. As for the first requirement,
finding the symmetries of the PDEs, there exist a number of
well-developed methods and computer algebra programs
\cite{wolf-2002,nj-2004,ch-2007,here-1997}. However, the only known way to
obtain the required differential substitutions is to use the
equivalent identity of the definition, which involves complicated
computations \cite{ib-2011,zhang-2015,gan-2014}.
Therefore a question naturally arises: is it possible to
derive the differential substitutions in other ways? The answer is
affirmative. We show that each adjoint symmetry of a system of PDEs is a
differential substitution and vice versa. Consequently, each
symmetry corresponds to a conservation law via a formula, where the
formula involves only differentiation rather than integration
and thus can be fully implemented on a computer.
It should be noted that there exists an approach, called the direct
method, which does not require the symmetry information of the PDEs but
is nevertheless connected with the symmetries and adjoint symmetries of the PDEs.
The direct method was perhaps first used by Laplace to derive the
well-known Laplace vector of the two-body Kepler problem \cite{lap}.
Later, Steudel wrote a conservation law in characteristic form
\cite{ste}, where the characteristics, also called multipliers,
can be obtained from the fact that Euler operators annihilate
divergence expressions identically \cite{Olver}. In particular, Anco
and Bluman showed that, on the solution space of the given system of
PDEs, multipliers are symmetries provided that its linearized system
is self-adjoint; otherwise they are adjoint symmetries and can be
obtained by selecting from the set of adjoint symmetries by virtue of
the so-called adjoint invariance conditions
\cite{sb-2002a,sb-2002b}.
Thus, comparing the computational processes of the two methods, we find
that the set of differential substitutions of nonlinear
self-adjointness includes the set of multipliers as a subset.
The remainder of the paper is arranged as follows. In Section 2,
some related notions and principles are reviewed and the main
results are given. In Section 3, two illustrative examples are
considered. The last section contains concluding remarks.
\section{Main results}
In this section, we first review some related notions and
principles, and then give the main results of the paper.
\subsection{Preliminaries}
\subsubsection{Symmetry and adjoint symmetry}
Consider a system of $m$ $r$th-order PDEs
\begin{eqnarray}\label{perturb}
E^{\alpha}(x,u,u_{(1)},\cdots,u_{(r)})=0, ~~~~\alpha=1,2,\dots,m,
\end{eqnarray}
where $x=(x^1,\dots,x^n)$ is the set of independent variables,
$u=(u^1,\dots,u^m)$ is the set of dependent variables, and $u_{(i)}$ denotes
the set of all $i$th-order derivatives of $u$ with respect to $x$. Note that the summation
convention for repeated indices will be used unless otherwise
stated.
On the solution space of the given PDEs, a symmetry is determined by
its linearized system while the adjoint symmetry is defined as the
solution of the adjoint of the linearized system \cite{blu11,Olver}.
In particular, the determining system of a symmetry
$X_\eta=\eta^i(x,u,u_{(1)},\dots,u_{(s)})\partial_{u^i}$ is obtained by requiring the
linearization of system (\ref{perturb}) to vanish on its solution
space, that is,
\begin{eqnarray}\label{sym-det}
&&(\mathscr{L}_E)^\alpha_\rho \eta^\rho=\frac{\partial
E^\alpha}{\partial u^\rho}\eta^\rho+\frac{\partial
E^\alpha}{\partial
u^\rho_{i_1}}D_{i_1}\eta^\rho+\dots+\frac{\partial
E^\alpha}{\partial u^\rho_{i_1\dots i_r}}D_{i_1}\dots
D_{i_r}\eta^\rho=0
\end{eqnarray}
holds for all solutions of system (\ref{perturb}). In
(\ref{sym-det}) and below, $D_i$ denotes the total derivative
operator with respect to $x^i$,
\begin{eqnarray}
&&\no D_i = \frac{\partial}{\partial x^i} +u_{i}
\frac{\partial}{\partial u} +u_{ij}\frac{\partial}{\partial u_{j}}
+\dots, ~~~i=1,2,\dots,n.
\end{eqnarray}
The adjoint equations of system
(\ref{sym-det}) are
\begin{eqnarray}\label{adsym-det}
(\mathscr{L}_E^*)_\alpha^\rho \omega_\rho=\omega_\rho \frac{\partial
E^\rho}{\partial u^\alpha}-D_{i_1}\bigg(\omega_\rho\frac{\partial
E^\rho}{\partial u^\alpha_{i_1}}\bigg)+\dots+(-1)^r D_{i_1}\dots
D_{i_r}\bigg(\omega_\rho\frac{\partial E^\rho}{\partial
u^\alpha_{i_1\dots i_r}}\bigg)=0,
\end{eqnarray}
which are the determining equations for an adjoint symmetry
$X_{\omega}=\omega_\rho(x,u,u_{(1)},\cdots,u_{(r)})\partial_{u^\rho}$
of system (\ref{perturb}).
In general, solutions of the adjoint symmetry determining system
(\ref{adsym-det}) are not solutions of the symmetry determining
system (\ref{sym-det}). However, if the linearized system
(\ref{sym-det}) is self-adjoint, then adjoint symmetries are
symmetries and system (\ref{perturb}) has a variational principle
and thus Noether's approach is applicable in this case \cite{Olver}.
\subsubsection{Nonlinear self-adjointness with differential substitution}
We begin with nonlinear self-adjointness introduced by Ibragimov
\cite{ib-2011}, whose main idea is first to turn the system of PDEs
into Lagrangian equations by artificially adding new variables, and
then to apply the theorem proved in \cite{nh-2007} to construct
local and nonlocal conservation laws.
Let $\mathcal {L}$ be the formal Lagrangian of system
(\ref{perturb}) written as
\begin{eqnarray}\label{lagrangian}
&& \mathcal {L} = v^{\beta}E^{\beta}(x,u,u_{(1)},\dots,u_{(r)}),
\end{eqnarray}
where $v^{\beta}$ are newly introduced dependent variables; then
the adjoint equations of system (\ref{perturb}) are defined by
\begin{eqnarray}\label{adequation}
(E^{\alpha})^{\ast}(x,u,v,u_{(1)},v_{(1)},\cdots,u_{(r)},v_{(r)})=\frac{\delta\mathcal
{L}}{\delta u^{\alpha}}=0,
\end{eqnarray}
where $v=(v^1,\dots,v^m)$ and hereafter, $\delta/\delta u^{\alpha}$
is the Euler operator defined as
\begin{eqnarray}\label{var-op}
\frac{\delta}{\delta u^{\alpha}}=\frac{\partial}{\partial
u^{\alpha}}+\sum_{s=1}^{\infty}(-1)^sD_{i_1}\dots
D_{i_s}\frac{\partial}{\partial u^{\alpha}_{i_1\dots i_s}}.
\end{eqnarray}
Then the definition of nonlinear self-adjointness of system
(\ref{perturb}) is given as follows.
\begin{define} \label{def-non}(Nonlinear self-adjointness \cite{ib-2011}) The
system (\ref{perturb}) is said to be nonlinearly self-adjoint if the
adjoint system (\ref{adequation}) is satisfied for all solutions $u$
of system (\ref{perturb}) upon a substitution $v=\varphi(x,u)$ such
that $\varphi(x,u)\neq 0$.\end{define}
Here, $\varphi(x,u)=(\varphi^1(x,u),\dots,\varphi^m(x,u))$, and
$v=\varphi(x,u)$ means $v^i=\varphi^i(x,u)$; the condition $\varphi(x,u)\neq 0$
means that not all components of $\varphi(x,u)$ vanish, and such a substitution is
called nontrivial. Definition \ref{def-non} is
equivalent to the following identities holding for the undetermined
functions
$\lambda_{\alpha}^{\beta}=\lambda_{\alpha}^{\beta}(x,u,u_{(1)}\dots,u_{(r)})$
\begin{eqnarray}\label{equ-id1}
&&
(E^{\alpha})^{\ast}(x,u,v,u_{(1)},v_{(1)},\cdots,u_{(r)},v_{(r)})_{|_{v=\varphi}}
=\lambda_{\alpha}^{\beta}E^{\beta},
\end{eqnarray}
which is applicable in the proofs and computations.
Nonlinear self-adjointness contains three subclasses. In particular,
if the substitution $v=\varphi(x,u)$ reduces to $v=u$, then system
(\ref{perturb}) is called strictly self-adjoint. If
$v=\varphi(u)$, then it is called quasi self-adjoint. If
$v=\varphi(x,u)$ involves both $x$ and $u$, then it is called weakly
self-adjoint.
As an extension of the substitution, if
$v=\varphi(x,u,u_{(1)},\dots,u_{(s)})$, then the system is said to be nonlinearly
self-adjoint with differential substitution \cite{ib-2011,gan-2014,zhang-2015}.
\begin{define} \label{def-ger}(Nonlinear self-adjointness with differential
substitution) The system (\ref{perturb}) is said to be nonlinearly
self-adjoint with differential substitution if the adjoint system
(\ref{adequation}) is satisfied for all solutions of system
(\ref{perturb}) upon a substitution
$v=\varphi(x,u,u_{(1)},\dots,u_{(s)})$ such that $v\neq 0$.
\end{define}
Similarly, Definition \ref{def-ger} is equivalent to the following
equality
\begin{eqnarray}\label{equ-id1-id2}
&&\no(E^{\alpha})^{\ast}(x,u,v,u_{(1)},v_{(1)},\cdots,u_{(r)},v_{(r)})_{|_{v=\varphi(x,u,u_{(1)},\dots,u_{(s)})}}\\
&&\hspace{5cm}=(\lambda_{\alpha}^{\beta}+\lambda_{\alpha}^{\beta
i_1}D_{i_1}+\dots+\lambda_{\alpha}^{\beta i_1\dots i_s}D_{i_1}\dots
D_{i_s})E^\beta,
\end{eqnarray}
where $\lambda_{\alpha}^{\beta},\lambda_{\alpha}^{\beta
i_1},\dots,\lambda_{\alpha}^{\beta i_1\dots i_s}$ are undetermined
functions of arguments $x,u,u_{(1)},\dots, u_{(r+s)}$ respectively.
Since the highest-order derivatives appearing in
$\lambda_{\alpha}^{\beta},\lambda_{\alpha}^{\beta
i_1},\dots,\lambda_{\alpha}^{\beta i_1\dots i_s}$ may be of higher order than
the highest derivative in $E^\alpha$, the right-hand side of system
(\ref{equ-id1-id2}) may not be linear in $D_{i_1}\dots
D_{i_k}E^\alpha,$ $k=1,\dots,m$, and thus applying equality
(\ref{equ-id1-id2}) to find the differential substitutions is a
difficult task.
For example, consider the Klein-Gordon equation in the form
\begin{equation}\label{k-g-eq}
G=u_{tt}-u_{xx}-g(u)=0,
\end{equation}
where $g(u)$ is a nonlinear function of $u$. Following the main idea
of nonlinear self-adjointness with differential substitution, set
the formal Lagrangian $\mathcal {L}=\alpha(u_{tt}-u_{xx}-g(u))$ with
a newly introduced dependent variable $\alpha$; then the adjoint
equation of Eq.(\ref{k-g-eq}) is
\begin{eqnarray}\label{adjoint}
&& \frac{\delta \mathcal {L}}{\delta
u}=D^2_t\alpha-D^2_{x}\alpha-g'(u)\alpha.
\end{eqnarray}
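For illustration, the variational derivative in Eq.~(\ref{adjoint}) can be reproduced by a short symbolic computation. The following sketch (in Python with SymPy; it is our own illustration and not part of the original derivation) applies the Euler operator (\ref{var-op}), truncated to the derivatives actually present in $\mathcal{L}$, to the formal Lagrangian $\mathcal{L}=\alpha(u_{tt}-u_{xx}-g(u))$ and recovers $D^2_t\alpha-D^2_{x}\alpha-g'(u)\alpha$.
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
a = sp.Function('alpha')(x, t)   # the new dependent variable alpha
g = sp.Function('g')

# formal Lagrangian  L = alpha*(u_tt - u_xx - g(u))
L = a*(sp.diff(u, t, 2) - sp.diff(u, x, 2) - g(u))

# Euler operator delta/delta u, keeping only the terms needed here
# (L contains no first-order or mixed derivatives of u)
delta_L = (sp.diff(L, u)
           + sp.diff(sp.diff(L, sp.diff(u, t, 2)), t, 2)
           + sp.diff(sp.diff(L, sp.diff(u, x, 2)), x, 2))

print(sp.simplify(delta_L))
# alpha_tt - alpha_xx - g'(u)*alpha, i.e. the adjoint equation above
\end{verbatim}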
Assuming the differential substitution
$\alpha=\varphi(x,t,u,\partial_xu,\partial_tu,\dots,\partial^p_xu,\partial^{p-1}_x\partial_tu)$
and using the equality (\ref{equ-id1-id2}), Eq.(\ref{adjoint})
becomes
\begin{eqnarray}\label{det}
&&\no \mathscr{D}^2_t\varphi
+\mathscr{L}_\varphi[u]G+\sum_{i=0}^{p-1}D_x^iG(D_t+\mathscr{D}_t)\varphi_{\partial_x^i\partial_tu}-D^2_{x}\varphi-g'(u)\varphi\\
&&\hspace{3cm}=\sum_{j=0}^p
\lambda_jD_x^jG+\sum_{k=0}^{p-1}\mu_kD_x^kD_tG+\sum_{l,s=0}^{p-1}\nu_{ls}D_x^lG\,D_x^sG,
\end{eqnarray}
where $\lambda_j,\mu_k,\nu_{ls}$ $(j=0,\dots,p;\ k,l,s=0,\dots,p-1)$ are arbitrary
functions of $x,t,u$ and derivatives of $u$ up to order $p+2$,
not containing $u_{tt}$ or its differential consequences, and
$$\mathscr{D}_t=\partial_t+u_t\partial_u+u_{xt}\partial_{u_x}+(u_{xx}+g(u))\partial_{u_t}
+\dots$$ is the total derivative operator which expresses $u_{tt}$
and its derivatives through Eq.(\ref{k-g-eq}). The symbol
$\mathscr{L}_\varphi[u]$ stands for the linearization operator of
function $\varphi$ defined by
\begin{eqnarray}\label{linear}
&\no\mathscr{L}_\varphi [u]G=\varphi_{u}
G+\varphi_{\partial_xu}D_{x}G+\varphi_{\partial_tu}D_tG+\varphi_{\partial^{2}_xu}D_x^2G+
\varphi_{\partial_x\partial_tu}D_{x}D_tG+\varphi_{\partial^{2}_tu}D_t^2G+\dots.
\end{eqnarray}
Obviously, the right-hand side of Eq.(\ref{det}) is not linear in
$D_x^kG$, since the last term contains the products $D_x^lG\,D_x^sG$; thus the
computation of a differential substitution for Eq.(\ref{k-g-eq}) is
not an easy task. For details of this example we refer to
\cite{zhang-1}.
After finding the differential substitutions of nonlinear
self-adjointness, we will use the following theorem to construct
conservation laws of the system \cite{nh-2007}.
\begin{theorem} \label{con-law}
Any infinitesimal symmetry (local or nonlocal)
\begin{eqnarray}
&&\no X=\xi^i(x,u,u_{(1)},\dots)\frac{\partial}{\partial
x^i}+\eta^{\sigma}(x,u,u_{(1)},\dots)\frac{\partial}{\partial
u^{\sigma}}
\end{eqnarray}
of system (\ref{perturb}) leads to a conservation law $D_i(C^i)=0$
constructed by the formula
\begin{eqnarray}\label{formula}
&&\no C^i=\xi^i\mathcal {L}+W^{\sigma}\bigg[\frac{\partial \mathcal
{L}}{\partial u_i^{\sigma}}-D_j(\frac{\partial \mathcal
{L}}{\partial u_{ij}^{\sigma}})+D_jD_k(\frac{\partial \mathcal
{L}}{\partial u_{ijk}^{\sigma}})-\dots\bigg]\\&&\hspace{0.8cm}
+D_j(W^{\sigma}) \bigg[\frac{\partial \mathcal {L}}{\partial
u_{ij}^{\sigma}}-D_k(\frac{\partial \mathcal {L}}{\partial
u_{ijk}^{\sigma}})+\dots\bigg]+D_jD_k(W^{\sigma})\bigg[\frac{\partial
\mathcal {L}}{\partial u_{ijk}^{\sigma}}-\dots\bigg]+\dots,
\end{eqnarray}
where $W^{\sigma}=\eta^{\sigma}-\xi^ju_j^{\sigma}$ and $\mathcal
{L}$ is the formal Lagrangian (\ref{lagrangian}), written in
a form symmetric in the mixed derivatives. \end{theorem}
\subsubsection{Multiplier}
Multipliers are a set of functions that multiply a system of PDEs
so that the resulting expression takes a divergence form; for any
solution of the equations this divergence then vanishes and one
obtains a conservation law.
Explicitly, multipliers of conservation laws for system
(\ref{perturb}) are defined as follows.
\begin{define}\label{df-mul}(Multiplier \cite{sb-2002b})
Multipliers for PDE system (\ref{perturb}) are a set of expressions
\begin{equation}\label{mult}
\no\Lambda=\{\Lambda_1(x,u,u_{(1)},\dots,
u_{(r)}),\,\dots,\,\Lambda_m(x,u,u_{(1)},\dots,
u_{(r)})\},\end{equation} satisfying
\begin{equation}\label{mul}
\no\Lambda_\beta(x,u,u_{(1)},\dots,
u_{(r)})E^{\beta}(x,u,u_{(1)},\dots,u_{(r)})=D_i(C^i)
\end{equation}
with some expressions $C^i$ for any function $u$.
\end{define}
Since Euler operator $\delta/\delta u^{\sigma}$ with
$\sigma=1,2,\dots,m$ acting on the divergence expression $D_i(C^i)$
yields zero identically, so the following theorem for computing
multipliers is established \cite{Olver,blu11}.
\begin{theorem}
A set of non-singular local multipliers (\ref{mult}) yields a local
conservation law for the PDE system (\ref{perturb}) if and only if
the set of identities
\begin{equation}\label{det-mult}
\frac{\delta}{\delta
u^{\sigma}}\big(\Lambda_\beta(x,u,u_{(1)},\dots,
u_{(r)})E^{\beta}(x,u,u_{(1)},\dots,u_{(r)})\big)=0
\end{equation}
holds for arbitrary functions $u(x)$.
\end{theorem}
Since system (\ref{det-mult}) holds for arbitrary $u$, one can treat
each $u^{\sigma}$ and its derivatives as independent variables, and
consequently system (\ref{det-mult}) separates into an
over-determined linear PDE system whose solutions are the multipliers.
When the computation is carried out on the solution space of PDEs
expressed in Cauchy-Kovalevskaya form, the multipliers are
selected from the set of adjoint symmetries using the adjoint
invariance conditions. For details we refer to
\cite{sb-2002a,sb-2002b} and the references therein.
\subsection{Main results}
We first give an equivalent definition of nonlinear self-adjointness
with differential substitution. Definition \ref{def-ger} means that
the adjoint system (\ref{adequation}), after insertion of the
differential substitution $v=\varphi(x,u,u_{(1)},\dots,u_{(s)})$,
holds identically on the solution space of the original system
(\ref{perturb}). This property can be used as the following
alternative definition of nonlinear self-adjointness with
differential substitution.
\begin{define} \label{def-ger-eqv}(Nonlinear self-adjointness with differential
substitution) The system (\ref{perturb}) is nonlinearly self-adjoint
with differential substitution if the adjoint system
(\ref{adequation}) upon a nontrivial differential substitution
$v=\varphi(x,u,u_{(1)},\dots,u_{(s)})$ holds on the solution space
of system (\ref{perturb}).
\end{define}
In the sense of Definition \ref{def-ger-eqv}, nonlinear
self-adjointness with differential substitution is equivalent to the
following equality
\begin{eqnarray}\label{equ-id1-id1}
(E^{\alpha})^{\ast}(x,u,v,u_{(1)},v_{(1)},\cdots,u_{(r)},v_{(r)})_{|_{v=\varphi(x,u,u_{(1)},\dots,u_{(s)})}}=0,~~
\mbox{when}~ E^\alpha=0,
\end{eqnarray}
which is called the determining system of differential substitution.
On the other hand, since $v$ is a newly introduced set of dependent
variables, the adjoint equations (\ref{adequation}) can be explicitly
expressed as
\begin{eqnarray}\label{equ-id1-id3}
&&\no (E^{\alpha})^{\ast}=\frac{\delta\mathcal {L}}{\delta
u^{\alpha}}\\
&&\hspace{1.15cm}=v^\beta\frac{\partial E^\beta}{\partial
u^{\alpha}}+\sum_{r=1}^{\infty}(-1)^rD_{i_1}\dots
D_{i_r}\left(v^\beta\frac{\partial E^\beta}{\partial
u^{\alpha}_{i_1\dots i_r}}\right)=0,
\end{eqnarray}
Comparing system (\ref{equ-id1-id3}) with the adjoint symmetry
determining equations (\ref{adsym-det}), we find that they are
identical if we set $v=\omega$, where
$\omega=(\omega_1,\dots,\omega_m)$; thus we have the following
result.
\begin{theorem}\label{th-2}
Any adjoint symmetry of system (\ref{perturb}) is a differential
substitution of nonlinear self-adjointness and vice versa.
\end{theorem}
Theorem \ref{th-2} provides a new way to search for differential
substitutions of nonlinear self-adjointness, which amounts to
finding the adjoint symmetries of the PDEs. Furthermore, given a
differential substitution of nonlinear self-adjointness, formula
(\ref{formula}) generates a conservation law from each symmetry of
system (\ref{perturb}); thus, combining Theorem \ref{con-law} and
Theorem \ref{th-2}, we obtain the following result.
\begin{theorem}\label{th-3}
If system (\ref{perturb}) is nonlinearly self-adjoint with
differential substitution, then each symmetry corresponds to a
conservation law via formula (\ref{formula}).
\end{theorem}
Theorem \ref{th-3} builds a direct connection between symmetries and
conservation laws by means of formula (\ref{formula}). In
particular, if system (\ref{perturb}) is strictly self-adjoint, then
every symmetry is an adjoint symmetry and thus a differential
substitution of nonlinear self-adjointness.
In summary, we formulate the above procedure as the following
algorithm for constructing conservation laws of PDEs.
\vspace{0.2cm} \hrule
\smallskip
\vspace{0.2cm}
Step 1: Check whether the system of PDEs has a Lagrangian.
If the system has a Lagrangian, then compute its symmetries directly;
otherwise, compute the symmetries and adjoint symmetries admitted by the
system.
Step 2: Construct the formal Lagrangian $\mathcal {L}$ and find the
differential substitution.
By Theorem \ref{th-2}, the obtained adjoint symmetries are the
differential substitutions of nonlinear self-adjointness.
Step 3: With the above known results, use formula (\ref{formula}) to
construct conservation laws of the PDEs.\vspace{0.2cm} \hrule
\smallskip
\vspace{0.2cm}
Since computing symmetries and adjoint
symmetries is an algorithmic procedure, a variety of symbolic
manipulation programs have been developed for many computer algebra
systems (see \cite{nj-2004,ch-2007} and references therein).
Furthermore, the general conservation law formula (\ref{formula})
involves only differentiation rather than integration.
Hence, the proposed algorithm can be fully implemented on a
computer.
To end this section, we compare nonlinear self-adjointness with
differential substitution to the multiplier method. The conservation law
multiplier method for PDEs admitting a Cauchy-Kovalevskaya form
was further studied in \cite{sb-2002a,sb-2002b}, where it is shown that
multipliers can be obtained by selecting from the set of adjoint
symmetries with the so-called adjoint invariance conditions;
thus by Theorem \ref{th-2} we have:
\begin{cor}\label{cor-1}
For the PDEs admitting a Cauchy-Kovalevskaya form, the set of
differential substitutions contains the one of multipliers as a
subset.
\end{cor}
By Corollary \ref{cor-1}, we find that there exist some adjoint
symmetries which are differential substitutions but not multipliers
of system (\ref{perturb}); this case will be exemplified by a
nonlinear wave equation in the next section.
\section{Two illustrative examples}
In this section, by means of the computer algebra system
Mathematica, we consider two examples: the first is a nonlinear wave
equation used to demonstrate the result of Corollary \ref{cor-1},
and the second is the Thomas equation used to illustrate the
effectiveness of nonlinear self-adjointness with differential
substitution.
\subsection{A nonlinear wave equation}
As a first example we consider the nonlinear wave equation
\cite{blu11,sb-2002a}
\begin{eqnarray}\label{wave-speed}
&&E= u_{tt}-u^2u_{xx}-uu_{x}^2=0,
\end{eqnarray}
which has a variational principle given by the action integral
$S=\int(u_t^2-u^2u_x^2)/2\,dtdx$, and so the adjoint symmetries and the
symmetries are identical.
We first apply the multiplier method to study conservation laws of
Eq.(\ref{wave-speed}). A function $\Lambda=\Lambda(x,t,u,u_x,u_t)$
is a local multiplier of Eq.(\ref{wave-speed}) if and only if the Euler
operator (\ref{var-op}) acting on $\Lambda E$ yields zero for any
$u=u(x,t)$, i.e.,
\begin{eqnarray}\label{wave-speed-det}
&&\no \frac{\delta (\Lambda E)}{\delta
u}=D_t^2\Lambda-u^2D_x^2\Lambda-2uu_xD_x\Lambda\\
&&\hspace{1.8cm}-(2uu_{xx}+u_x^2)\Lambda+E\Lambda_u-D_x(E\Lambda_{u_x})-D_t(E\Lambda_{u_t})=0.
\end{eqnarray}
Splitting Eq.(\ref{wave-speed-det}) with respect to $u_{tt}$ and its
differential consequences, we find that the determining system for
the multiplier $\Lambda$ consists of the symmetry determining system
\begin{eqnarray}\label{wave-speed-det-1}
&&\mathscr{D}_t^2\Lambda-u^2D_x^2\Lambda-2uu_xD_x\Lambda-(2uu_{xx}+u_x^2)\Lambda=0,
\end{eqnarray}
where
$\mathscr{D}_t=\partial_t+u_t\partial_u+u_{xt}\partial_{u_x}+(u^2u_{xx}+uu_{x}^2)\partial_{u_t}
+\dots$ is the total derivative operator on the solution space of
Eq.(\ref{wave-speed}), and
\begin{eqnarray}\label{wave-speed-det-2}
&& 2\Lambda_u-\mathscr{D}_t\Lambda_{u_t}-D_x\Lambda_{u_x}=0,
\end{eqnarray}
which is called the adjoint invariance condition.
It is known that Eq.(\ref{wave-speed}) is invariant under the
symmetry $X=(u-xu_x)\partial_u$, so the function $\Lambda=u-xu_x$ is a
solution of Eq.(\ref{wave-speed-det-1}); however, it does not satisfy the
adjoint invariance condition (\ref{wave-speed-det-2}) and thus is
not a multiplier. Nevertheless, by Theorem \ref{th-2}, it is a
differential substitution of nonlinear self-adjointness for
Eq.(\ref{wave-speed}). Setting the formal Lagrangian $$\mathcal
{L}=(u-xu_x)( u_{tt}-u^2u_{xx}-uu_{x}^2),$$ and using Theorem
\ref{con-law}, we obtain a conservation law
$D_tC_{(\ref{wave-speed})}^t+D_xC_{(\ref{wave-speed})}^x=0$ given by
the formulae
\begin{eqnarray}
&&\no C_{(\ref{wave-speed})}^t=(u-xu_x)D_t\eta-\eta
(u_t-xu_{xt}),\\\no
&&C_{(\ref{wave-speed})}^x=(xu_x-u)u^2D_x\eta-xu^2u_{xx}\eta,
\end{eqnarray}
where $X= \eta(x,t,u,u_x,u_t,\dots)\partial_u$ is a symmetry of
Eq.(\ref{wave-speed}).
For example, let $\eta=-u_t$, then we obtain
\begin{eqnarray}
&&\no C_{(\ref{wave-speed})}^t=(u-xu_x)(u^2u_{xx}+uu_{x}^2)-u_t (u_t-xu_{xt}),\\
&&\no C_{(\ref{wave-speed})}^x=u^2(xu_x-u)u_{xt}-xu^2u_{xx}u_t,
\end{eqnarray}
which, to the authors' knowledge, has not been reported in the literature.
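This conservation law can also be verified directly with a short symbolic computation. The sketch below (Python with SymPy, written purely for illustration) forms $D_tC_{(\ref{wave-speed})}^t+D_xC_{(\ref{wave-speed})}^x$, eliminates $u_{tt}$ and $u_{ttx}$ by means of Eq.~(\ref{wave-speed}), and the result simplifies to zero.
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
ut, ux = sp.diff(u, t), sp.diff(u, x)
uxx, uxt = sp.diff(u, x, 2), sp.diff(u, x, t)

# conserved density and flux obtained above (eta = -u_t)
Ct = (u - x*ux)*(u**2*uxx + u*ux**2) - ut*(ut - x*uxt)
Cx = u**2*(x*ux - u)*uxt - x*u**2*uxx*ut

div = sp.diff(Ct, t) + sp.diff(Cx, x)

# impose u_tt = u^2 u_xx + u u_x^2 and its total x-derivative
utt = u**2*uxx + u*ux**2
div = div.subs(sp.diff(u, t, 2, x), sp.diff(utt, x))
div = div.subs(sp.diff(u, t, 2), utt)

print(sp.simplify(div))   # prints 0
\end{verbatim}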
\subsection{The Thomas equation}
The Thomas equation is written as
\begin{eqnarray}\label{th-pde}
&&G=u_{xt}+\alpha u_x+\beta u_t+\gamma u_x u_t=0,
\end{eqnarray}
which arises in the study of chemical exchange processes
\cite{rr-1966}, where the constants $\alpha,\beta$ and $\gamma$
satisfy $\alpha>0,\beta>0,\gamma\neq0$. Its nonlinear
self-adjointness has been
studied in \cite{ib-2011}.
Following the infinitesimal symmetry criterion for PDEs
\cite{Olver}, the determining equation of a symmetry in evolutionary
vector field
$X=\varphi(x,t,u,\partial_xu,\partial_tu,\dots)\partial_u$ is
\begin{eqnarray}\label{th-det}
&& \mathscr{D}_tD_x\varphi+\alpha D_x\varphi+\beta
\mathscr{D}_t\varphi +\gamma u_x\mathscr{D}_t\varphi+\gamma
u_tD_x\varphi=0,
\end{eqnarray}
where $\mathscr{D}_t=\partial_t+u_t\partial_u-(\alpha u_x+\beta
u_t+\gamma u_x u_t)\partial_{u_x}+u_{tt}\partial_{u_t} +\dots $ is
the total derivative operator which expresses $u_{xt}$ and its
derivatives through Eq.(\ref{th-pde}). The adjoint equation of
Eq.(\ref{th-det}) is
\begin{eqnarray}\label{th-pde-adj}
\mathscr{D}_tD_x\psi-\alpha D_x\psi-\beta \mathscr{D}_t\psi -\gamma
u_x\mathscr{D}_t\psi-\gamma u_tD_x\psi+2\gamma (\alpha u_x+\beta
u_t+\gamma u_x u_t)\psi=0,
\end{eqnarray}
which is the determining system of an adjoint symmetry
$X=\psi(x,t,u,\partial_xu,\partial_tu,\dots)\partial_u$. Then by
Theorem \ref{th-2}, solutions of Eq.(\ref{th-pde-adj}) are the
differential substitutions of nonlinear self-adjointness.
Introduce the formal Lagrangian of Eq.(\ref{th-pde}) in the
symmetric form $$\mathcal
{L}=v\Big(\frac{1}{2}u_{xt}+\frac{1}{2}u_{tx}+\alpha u_x+\beta
u_t+\gamma u_x u_t\Big),$$ where $v$ is a new dependent variable;
then by formula (\ref{formula}) we obtain the following general
conservation law formulae.
\begin{prop}
A conservation law
$D_tC_{(\ref{th-pde})}^t+D_xC_{(\ref{th-pde})}^x=0$ of
Eq.(\ref{th-pde}) is given by
\begin{eqnarray}\label{con-law-1}
&&\no C_{(\ref{th-pde})}^t
|
^{m-w_d}-1)\times N$-matrix $A=\left(\varphi_\alpha\left(\left\{\frac{nY_dz}{N}\right\}\right)\right)_{z\in \cZ_{N,w_d},n=0,\ldots,N-1}$, with the vector $\eta_{d-1}$ and that this matrix-vector
product can be computed very efficiently,
as we will show below.
Note that the rows of $A$ are periodic with period
$b^{m-w_d}$, since $Y_d=b^{w_d}$ and $N=b^m$, and therefore $(n+b^{m-w_d}) Y_d z \equiv n Y_d
z\ (\text{mod }b^m)$. More specifically, $A$ is a block matrix
\[
A=\Big(\underset{b^{w_d} \text{ - times }}{\underbrace{\bsOmega^{(m-w_d)},\ldots, \bsOmega^{(m-w_d)}}}\Big)\,,
\]
where $\bsOmega^{(k)}:=\left(\varphi_\alpha\left(\left\{\frac{nz}{b^k}\right\}\right)\right)_{z\in \cZ_{b^k,0},n=0,\ldots,b^k-1}$.
If $\bsx$ is any vector of length $N=b^m$ we compute
\[
A \bsx
= \bsOmega^{(m-w_d)}\bsx_1+\cdots+\bsOmega^{(m-w_d)}\bsx_{b^{w_d}}
= \bsOmega^{(m-w_d)}(\bsx_1+\cdots+\bsx_{b^{w_d}})\,,
\]
where $\bsx_1$ consists of the first $b^{m-w_d}$ coordinates of $\bsx$,
$\bsx_2$ consists of the next $b^{m-w_d}$ coordinates of $\bsx$,
and so forth.
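In code, this block structure amounts to folding the vector into $b^{w_d}$ blocks, summing them, and performing a single multiplication with $\bsOmega^{(m-w_d)}$. A minimal NumPy sketch of this step (our own illustration; here \texttt{Omega} is assumed to be available as a dense matrix, whereas in practice the fast multiplication discussed below is used):
\begin{verbatim}
import numpy as np

def blockwise_matvec(Omega, x, b, m, w_d):
    # A = (Omega, Omega, ..., Omega) with b**w_d copies, hence
    # A @ x = Omega @ (x_1 + ... + x_{b**w_d}), where the x_i are
    # consecutive blocks of x of length b**(m - w_d)
    folded = x.reshape(b**w_d, b**(m - w_d)).sum(axis=0)
    return Omega @ folded
\end{verbatim}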
It has been shown by Nuyens and Cools in \cite{NC06a, NC06b} that multiplication
of a vector of length $b^k$ with $\bsOmega^{(k)}$ can be computed using $O(k b^k)$ operations.
Addition of the vectors $\bsx_1,\ldots,\bsx_{b^{w_d}}$ uses at most $b^m$ single additions.
Thus multiplication of a $b^m$-vector with $A$ uses $O(b^m+k b^k)$ operations.
We summarize the reduced fast CBC-construction:
\begin{algorithm}[Reduced fast CBC-algorithm]\label{alg:mod-fast-cbc}
Pre-computation:
\begin{enumerate}
\item[a)] Compute $\varphi_\alpha(\frac{n}{b^m})$ for all $n=0,\ldots,b^m-1$.
\item[b)] Set $\eta_1(n)= 1+\gamma_1\varphi_\alpha\left(\left\{\frac{nY_1z_1}{b^m}\right\}\right)$ for $n = 0, \ldots, b^m-1$.
\item[c)] Set $z_1=1$. Set $d=2$ and $s^\ast$ to be the largest integer such that $w_{s^\ast} < m$.
\end{enumerate}
While $d\le \min\{s, s^\ast\}$,
\begin{enumerate}
\item\label{it:1} partition $\eta_{d-1}$ into $b^{w_d}$ vectors $\eta_{d-1}^{(1)}, \eta_{d-1}^{(2)}, \ldots, \eta_{d-1}^{(b^{w_d})}$ of length $b^{m-w_d}$
and let $\eta' = \eta_{d-1}^{(1)} + \eta_{d-1}^{(2)}+ \cdots + \eta_{d-1}^{(b^{w_d})}$ denote the sum of the parts,
\item\label{it:2} let $T_d(z)=\bsOmega^{(m-w_d)} \eta' $,
\item let $z_d=\mathrm{argmin}_z\ T_d(z)$,
\item\label{it:4} let $\eta_d(n)=\eta_{d-1}(n)\left(1+\gamma_d\varphi_\alpha\left(\left\{\frac{nY_dz_d}{b^m}\right\}\right)\right)$,
\item increase $d$ by $1$.
\end{enumerate}
If $s > s^\ast$, then set $z_{s^\ast+1} = \cdots = z_s = 0$. The squared worst-case error is then given by
\[
e^2_{b^m,s,\alpha,\bsgamma}(Y_1z_1,\ldots,Y_{s}z_{s})=-1+\frac{1}{b^m}\sum_{n=0}^{b^m-1}\eta_s(n).
\]
\end{algorithm}
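For concreteness, the following plain (non-fast) reference sketch of Algorithm~\ref{alg:mod-fast-cbc} is given in Python/NumPy for product weights and $\alpha=2$, where $\varphi_2(x)=2\pi^2(x^2-x+1/6)$. It assumes $w_j<m$ for all $j$ and takes the candidate set $\cZ_{b^m,w_d}$ as $\{1\le z<b^{m-w_d}:\gcd(z,b)=1\}$; these simplifications, as well as the direct evaluation of $T_d(z)$ instead of the fast matrix-vector multiplication, are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np
from math import gcd

def phi2(x):
    # sum_{h != 0} exp(2 pi i h x)/h^2 = 2 pi^2 ({x}^2 - {x} + 1/6)
    x = x % 1.0
    return 2.0*np.pi**2*(x*x - x + 1.0/6.0)

def reduced_cbc(b, m, s, gamma, w):
    # gamma[d], w[d], d = 0,...,s-1: product weights and reduction indices (w[d] < m)
    N = b**m
    n = np.arange(N)
    z = [1]                                           # z_1 = 1
    eta = 1.0 + gamma[0]*phi2((n*(b**w[0]) % N)/N)    # eta_1
    for d in range(1, s):
        Y = b**w[d]
        cand = [c for c in range(1, b**(m - w[d])) if gcd(c, b) == 1]
        # T_d(z) = sum_n phi2({n Y_d z / N}) eta_{d-1}(n), evaluated directly
        T = [np.dot(phi2((n*Y*c % N)/N), eta) for c in cand]
        zd = cand[int(np.argmin(T))]
        z.append(zd)
        eta = eta*(1.0 + gamma[d]*phi2((n*Y*zd % N)/N))
    err2 = -1.0 + eta.sum()/N        # squared worst-case error
    return z, err2                   # generating vector is (Y_1 z_1, ..., Y_s z_s)

# example: b = 2, m = 8, gamma_j = j^(-3), w_j = floor(1.5*log2(j))
# z, e2 = reduced_cbc(2, 8, 5, [j**-3.0 for j in range(1, 6)],
#                     [int(np.floor(1.5*np.log2(j))) for j in range(1, 6)])
\end{verbatim}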
From the above considerations we obtain the following result:
\begin{thm}
For positive even integers $\alpha$, the cost of Algorithm \ref{alg:mod-fast-cbc} is
$$O \left(b^m+ \min\{s, s^\ast\} b^m+ \sum_{d=1}^{\min\{s, s^\ast\}} (m-w_d)b^{m-w_d} \right).$$
\end{thm}
\begin{proof}
The first term
originates from the pre-computation of $\varphi_\alpha$, the second term
comes from Steps \ref{it:1} and \ref{it:4}, and the last term from Step
\ref{it:2}.
\end{proof}
\begin{examp}\rm
Assume that the weights satisfy $\gamma_j \sim j^{-3}$. Then a fast CBC algorithm constructs in $O(s m b^m)$ operations a generating vector for which the worst-case error is bounded independently of the dimension $s$. However, if we choose, for example, $w_j \sim \frac{3}{2}\log_b j$, then with the reduced fast CBC algorithm we can construct a generating vector in $O(m b^m + \min\{s, s^\ast\} m)$ operations for which the worst-case error is still bounded independently of the dimension $s$, since $\sum_j \gamma_j b^{w_j} \ll \zeta(3/2) < \infty$. This is a drastic reduction of the construction cost, especially when the dimension $s$ is large.
We give the results of some practical computations. Throughout we use $b=2$,
$\alpha=2$, $\gamma_j = j^{-3}$ and $w_j =\lfloor \frac{3}{2}\log_b j \rfloor$.
In Table~\ref{tbl:modified-cbc} we report the computation time in seconds
and the base-10 logarithm of the worst-case error for several values of $s$ and $m$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
& $s$ = 10& $s$ = 20& $s$ = 50& $s$ = 100& $s$ = 200& $s$ = 500& $s$ = 1000\\
\hline\hline
\multirow{2}{*}{$m$ = 10}&\begin{tabular}{c}0.104\\ -1.89\end{tabular}&\begin{tabular}{c}0.120\\ -1.85\end{tabular}&\begin{tabular}{c}0.144\\ -1.79\end{tabular}&\begin{tabular}{c}0.148\\ -1.74\end{tabular}&\begin{tabular}{c}0.156\\ -1.67\end{tabular}&\begin{tabular}{c}0.164\\ -1.65\end{tabular}&\begin{tabular}{c}0.176\\ -1.65\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 12}&\begin{tabular}{c}0.356\\ -2.39\end{tabular}&\begin{tabular}{c}0.400\\ -2.35\end{tabular}&\begin{tabular}{c}0.472\\ -2.31\end{tabular}&\begin{tabular}{c}0.524\\ -2.27\end{tabular}&\begin{tabular}{c}0.564\\ -2.19\end{tabular}&\begin{tabular}{c}0.588\\ -2.10\end{tabular}&\begin{tabular}{c}0.608\\ -2.08\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 14}&\begin{tabular}{c}1.29\\ -2.88\end{tabular}&\begin{tabular}{c}1.45\\ -2.84\end{tabular}&\begin{tabular}{c}1.67\\ -2.79\end{tabular}&\begin{tabular}{c}1.88\\ -2.76\end{tabular}&\begin{tabular}{c}2.03\\ -2.72\end{tabular}&\begin{tabular}{c}2.35\\ -2.62\end{tabular}&\begin{tabular}{c}2.50\\ -2.53\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 16}&\begin{tabular}{c}5.13\\ -3.39\end{tabular}&\begin{tabular}{c}5.68\\ -3.34\end{tabular}&\begin{tabular}{c}6.47\\ -3.30\end{tabular}&\begin{tabular}{c}7.16\\ -3.28\end{tabular}&\begin{tabular}{c}7.78\\ -3.24\end{tabular}&\begin{tabular}{c}9.27\\ -3.17\end{tabular}&\begin{tabular}{c}11.2\\ -3.10\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 18}&\begin{tabular}{c}22.3\\ -3.89\end{tabular}&\begin{tabular}{c}24.4\\ -3.84\end{tabular}&\begin{tabular}{c}27.2\\ -3.81\end{tabular}&\begin{tabular}{c}29.4\\ -3.79\end{tabular}&\begin{tabular}{c}32.1\\ -3.76\end{tabular}&\begin{tabular}{c}38.2\\ -3.71\end{tabular}&\begin{tabular}{c}47.2\\ -3.65\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 20}&\begin{tabular}{c}118\\ -4.41\end{tabular}&\begin{tabular}{c}126\\ -4.35\end{tabular}&\begin{tabular}{c}137\\ -4.33\end{tabular}&\begin{tabular}{c}145\\ -4.31\end{tabular}&\begin{tabular}{c}157\\ -4.30\end{tabular}&\begin{tabular}{c}182\\ -4.26\end{tabular}&\begin{tabular}{c}223\\ -4.21\end{tabular}\\
\hline
\end{tabular}\\
\end{center}
\caption{Computation times and log-worst-case errors for the reduced fast CBC construction}\label{tbl:modified-cbc}
\end{table}
For comparison we also computed the same figures for the fast CBC construction (see Table~\ref{tbl:original-cbc}). The advantage of the modified approach becomes apparent as the dimension
becomes large. This is due to the fact that, in the standard fast CBC construction, the computation of every component
takes the same amount of time. For example, if $m=20$, computation of one extra
component takes roughly 40 seconds. Thus computing the point set for $m=20$
and $s=1000$ would take roughly 40000 seconds ($\approx$ 11 hours), compared to
223 seconds ($<$ 4 minutes) for the modified method. Note that the loss
of accuracy, compared to the gain in computation speed, is insignificant.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
& $s$ = 10& $s$ = 20& $s$ = 50\\
\hline\hline
\multirow{2}{*}{$m$ = 10}&\begin{tabular}{c}0.384\\ -1.90\end{tabular}&\begin{tabular}{c}0.724\\ -1.88\end{tabular}&\begin{tabular}{c}1.80\\ -1.88\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 12}&\begin{tabular}{c}1.32\\ -2.40\end{tabular}&\begin{tabular}{c}2.62\\ -2.37\end{tabular}&\begin{tabular}{c}6.55\\ -2.37\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 14}&\begin{tabular}{c}5.22\\ -2.90\end{tabular}&\begin{tabular}{c}10.4\\ -2.87\end{tabular}&\begin{tabular}{c}26.0\\ -2.86\end{tabular}\\
\hline
\multirow{2}{*}{$m$ = 16}&\begin{tabular}{c}21.7\\ -3.40\end{tabular}&\begin{tabular}{c}43.4\\ -3.36\end{tabular}&\begin{tabular}{c}109\\ -3.35\end{tabular}\\
\hline
\end{tabular}\\
\end{center}
\caption{Computation times and log-worst-case errors for the fast CBC construction, i.e., where $w_j=0$ for all $j$}\label{tbl:original-cbc}
\end{table}
\end{examp}
\section{Walsh spaces and polynomial lattice point sets}\label{secnotation}
Similar results to those for lattice point sets from the previous sections can be shown for polynomial lattice
point sets over finite fields $\FF_b$ of prime order $b$ with modulus $x^m$. Here we only sketch these results and the necessary notations,
as they are analogous to those for Korobov spaces and lattice point sets.
As a quality criterion we use the worst-case error of QMC rules in a weighted Walsh space with
general weights which was (in the case of product weights) introduced in \cite{DP05}
(likewise we could also use the mean square worst-case error of digitally shifted polynomial lattices in the Sobolev space
$\cH_{s,\bsgamma}^{\sob}$ from Remark~\ref{re1}; see \cite{DKPS05,DP05,DP10}).
For a prime number $b$ and for $h \in \NN$ let $\psi_b(h)=\lfloor \log_b(h)\rfloor$.
The weighted Walsh space $\cH(K_{s,\alpha,\bsgamma}^{\wal})$ is a reproducing kernel Hilbert space with kernel function of the form
\begin{align*}
K_{s,\alpha,\bsgamma}^{\wal}(\bsx,\bsy) = 1+ \sum_{\emptyset \not=\uu \subseteq [s]} \gamma_{\uu} \sum_{\bsh_{\uu}\in \NN^{|\uu|}} \frac{\wal_{\bsh_{\uu}}(\bsx_{\uu})\overline{\wal_{\bsh_{\uu}}(\bsy_{\uu})}}{\prod_{j \in \uu}b^{\alpha \psi_b(h_j)}},
\end{align*}
where $\wal_{\bsh}$ denotes the $\bsh$th Walsh function in base $b$ (see, for example, \cite[Appendix~A]{DP10}),
and inner product
$$\langle f,g\rangle_{K_{s,\alpha,\bsgamma}^{\wal}}=\sum_{\uu \subseteq [s]} \gamma_{\uu}^{-1} \sum_{\bsh_{\uu}\in \NN^{|\uu|}} \left(\prod_{j \in \uu}b^{\alpha \psi_b(h_j)}\right) \widetilde{f}((\bsh_{\uu},\bszero)) \overline{\widetilde{g}((\bsh_{\uu},\bszero))},$$ where $\widetilde{f}(\bsh)=\int_{[0,1]^s} f(\bst) \overline{\wal_{\bsh}(\bst)}\rd \bst$ is the $\bsh$th Walsh coefficient of $f$.\\
For integration in $\cH(K_{s,\alpha,\bsgamma}^{\wal})$ we use a special instance of polynomial lattice point sets over $\FF_b$. Let $\cP(\bsg,x^m)$, where $\bsg=(g_1,\ldots,g_s)\in \FF_b[x]^s$, be the $b^m$-element point set consisting of $$\bsx_n:=\left(\nu\left(\frac{n\ g_1}{x^m}\right),\ldots,\nu\left(\frac{n\ g_s}{x^m}\right)\right)\ \ \ \mbox{ for }\ n \in \FF_b[x]\ \mbox{ with } \deg(n)<m,$$ where for $f \in \FF_b[x]$, $f(x)=a_0+a_1 x+\cdots +a_r x^r$, with $\deg(f)=r$ the map $\nu$ is given by $$\nu\left(\frac{f}{x^m}\right):= \frac{a_{\min(r,m-1)}}{b^{m-\min(r,m-1)}}+\cdots+\frac{a_1}{b^{m-1}}+\frac{a_0}{b^m} \in [0,1).$$
Note that $\nu(f/x^m)=\nu((f\pmod{x^m})/x^m)$. We refer to \cite[Chapter~10]{DP10} for more information about polynomial lattice point sets.
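To make the map $\nu$ and the point set $\cP(\bsg,x^m)$ concrete, the following short Python sketch (our own illustration) represents a polynomial in $\FF_b[x]$ by its coefficient list $(a_0,a_1,\dots)$ and generates the $b^m$ points.
\begin{verbatim}
import numpy as np
from itertools import product

def poly_mul_mod(f, g, b, m):
    # product of f and g in F_b[x], truncated mod x^m (index = degree)
    res = [0]*m
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                if i + j < m:
                    res[i + j] = (res[i + j] + fi*gj) % b
    return res

def nu(coeffs, b, m):
    # nu(f/x^m) = sum_i a_i / b^(m-i)   for deg(f) < m
    return sum(a*b**(i - m) for i, a in enumerate(coeffs))

def polynomial_lattice(gen, b, m):
    # points of P(g, x^m); gen is a list of coefficient lists g_1,...,g_s
    pts = []
    for digits in product(range(b), repeat=m):   # all n in F_b[x] with deg(n) < m
        n = list(digits)                         # n = n_0 + n_1 x + ... + n_{m-1} x^(m-1)
        pts.append([nu(poly_mul_mod(n, g, b, m), b, m) for g in gen])
    return np.array(pts)
\end{verbatim}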
The worst-case error of a polynomial lattice rule based on $\cP(\bsg,x^m)$ with $\bsg \in \FF_b[x]^s$ in the weighted Walsh space $\cH(K_{s,\alpha,\bsgamma}^{\wal})$ is given by (see \cite{DKPS05})
\begin{equation}\label{eqerrorexpr}
e_{N,s,\alpha,\bsgamma}^2 (\bsg)=\sum_{\emptyset\neq\uu\subseteq [s]}\gamma_\uu
\sum_{\bsh_{\uu} \in \D_\uu} \prod_{j \in \uu} b^{-\alpha \psi_b(h_j)},
\end{equation}
where $$\D_\uu:=\left\{\bsh_\uu\in (\FF_b[x]\setminus \{0\})^{\abs{\uu}} \ :\ \bsh_\uu \cdot\bsg_\uu\equiv 0\, (x^m)\right\}.$$
Let again $w_1,\ldots,w_s\in\NN_0$ with $w_1\le w_2\le \cdots \le w_s$ (again the most important case is where $w_1 = 0$, since otherwise each point is counted $b^{w_1}$ times).
We associate a non-negative integer $n$ with $b$-adic expansion $n=n_0+n_1 b+\cdots +n_r b^r$, where $n_j \in \{0,1,\ldots,b-1\}$, with the polynomial $n=n_0+n_1 x+\cdots +n_r x^r$ in $\FF_b[x]$ and vice versa. In spite of this association, we point out that in the following we always use polynomial arithmetic, in contrast to the previous sections, where we used ordinary integer arithmetic.
Now let $Y_j=x^{w_j}\in \FF_b[x]$ for $j=1,2,\ldots,s$. Furthermore, for $w_j<m$, we can write
$$\cZ_{b^m,w_j}:=\left\{h\in\Field_b[x] \setminus\{0\}\ : \ \deg (h) < m-w_j,\ \gcd(h,x^m)=1\right\},$$
and for $w_j\ge m$,
$$ \cZ_{b^m,w_j}:=\{1 \in\Field_b [x]\}.$$
Note that $$|\cZ_{b^m,w_j}|=\left\{
\begin{array}{ll}
b^{m-w_j-1}(b-1) & \mbox{ if } w_j<m,\\
1 & \mbox{ if } w_j \ge m.
\end{array}\right.$$
We propose the following CBC construction algorithm for generating vectors $\bsg \in \FF_b[x]^s$.
\begin{algorithm}\label{algcbcxm}
Let $N,w_1,\ldots,w_s$, $Y_1,\ldots,Y_s$ be as above. Construct $\bsg=(Y_1 g_1,\ldots, Y_s g_s) \in \FF_b[x]^s$ as follows.
\begin{enumerate}
\item Set $g_1 =1 $.
\item For $d\in [s-1]$ assume that $g_1,\ldots,g_{d}$ have already been found. Now choose $g_{d+1}\in \cZ_{b^m,w_{d+1}}$ such that
$$ e_{N,d+1,\alpha,\bsgamma}^2 ((Y_1 g_1,\ldots,Y_d g_d,Y_{d+1} g_{d+1}))$$
is minimized as a function of $g_{d+1}$.
\item Increase $d$ and repeat the second step until $\bsg=(Y_1 g_1,\ldots,Y_s g_s)$ is found.
\end{enumerate}
\end{algorithm}
The following theorem states that our algorithm yields generating vectors $\bsg$ with a small integration error. Let
\begin{equation*}
\mu_b(\alpha):=\sum_{h=1}^{\infty} b^{-\alpha \psi_b(h)}=\frac{b^{\alpha}(b-1)}{b^{\alpha}-b}.
\end{equation*}
\begin{thm}\label{thmcbcxm}
Let $\bsg=(Y_1 g_1,\ldots,Y_s g_s)\in \FF_b [x]^s$ be constructed according to Algorithm~\ref{algcbcxm}.
Then for every $d\in [s]$ it is true that, for $\lambda \in(1/\alpha,1]$,
\begin{equation}\label{eqthmcbcxm}
e_{N,d,\alpha,\bsgamma}^2 ((Y_1 g_1,\ldots,Y_d g_d))\le \left(\frac{b}{b-1}\sum_{\emptyset\neq\uu\subseteq [d]}\gamma_\uu^\lambda
\frac{\mu_b (\alpha\lambda)^{\abs{\uu}}}{b^{\max\{0,m-\max_{j\in \uu} w_j\}}}\right)^{\frac{1}{\lambda}}.
\end{equation}
\end{thm}
\begin{proof}
The proof of Theorem~\ref{thmcbcxm} is very similar to the proof of Theorem~\ref{thmcbclpspp} and hence we omit it.
\end{proof}
Corollary~\ref{cor1} and Corollary~\ref{cor2} apply accordingly.\\
Finally, the question arises whether a reduced fast CBC construction
analogous to that in Section \ref{sec:fast-mod-cbc} is possible.
For the case of product weights this question can be answered in the
affirmative, though we will not go into
details.
Again we can write the squared worst-case error as
\begin{equation}
e^2_{N,s,\alpha,\bsgamma}(\bsg)= -1+\frac{1}{N}\sum_{n=0}^{N-1}\prod_{j=1}^s \left[1+\gamma_j \varphi_\alpha\left(\nu\left(\frac{n g_j}{x^m}\right)\right)\right],
\end{equation}
for some function $\varphi_{\alpha}$,
where the numbers $\varphi_\alpha(\nu(\frac{f}{x^m}))$, for $f \in \FF_b[x]$ with $\deg(f)<m$
can be computed using $O(N)$ operations, see \cite[Eq. (3.3)]{DKPS05}.
Minimizing
$e^2_{N,d,\alpha,\bsgamma}(Y_1g_1,\ldots, Y_{d-1} g_{d-1}, Y_d g)$ with respect to $g$ amounts to
finding the $\mathrm{argmin}_g T_d(g) $, where
\[
T_d(g)=\sum_{n=0}^{b^m-1}\varphi_\alpha\left(\nu \left(\frac{n\, p_d\, g}{x^m}\right)\right)\eta_{d-1}(n)\,.
\]
Now $T_d$ can be computed efficiently provided multiplication of
a $b^{k}$ vector with the matrix
$\Omega^{(k)}=\left(\varphi_\alpha\left(\nu\left(\frac{n \, g}{x^k}\right)\right)\right)_{n,g\in G_{b,k,0}}$ can be performed efficiently for any $k$.
What one needs for the fast matrix-vector multiplication
is a representation of the group of units of the factor ring $\FF_b[x]/(x^m)$
as a direct product of cyclic groups.
Indeed, the decomposition can be computed explicitly,
cf. \cite{SG85}. This explicit decomposition is needed for
implementing the fast multiplication,
but even without that result we get the following:
\begin{lem}
For any prime number $b$ and any positive integer $k$ let
\[
\cU_k:=\{g\in \FF_b[x]/(x^k): g \text{ invertible}\}=\{g\in \FF_b[x]/(x^k) \ : \ g(0)\ne 0 \}
\]
denote the group of units of the factor ring $\FF_b[x]/(x^k)$. Then $\cU_k$ can be written as the direct product of at most $k$ cyclic groups.
\end{lem}
\begin{proof}
The fundamental theorem of finite
abelian groups states that $\cU_k$ is a direct product of cyclic groups,
$\cU_k=\bigotimes_{j=1}^r G_j$, say.
$\cU_k$ has $(b-1)b^{k-1}$ elements and the order of any subgroup must
be a divisor of the group order, i.e., the order of any subgroup must divide
$(b-1)b^{k-1}$. Now
$\cU_1$ is isomorphic to the multiplicative group of
the finite field $\FF_b$
and is therefore cyclic. Moreover, $\cU_1$ is isomorphic to a subgroup of $\cU_k$.
Therefore at least one of the cyclic factors of $\cU_k$ contains
$(b-1)b^i$ elements, where $0\le i \le k-1$.
The orders of the remaining factors are then powers of $b$, so each of them contains at least $b$ elements.
Thus
\[
(b-1)b^{k-1}=\prod_{j=1}^r |G_j| \ge (b-1)b^{i+r-1}
\ge (b-1)b^{r-1}\,.
\]
So the number $r$ of factors of $\cU_k$ can be at most $k$.
\end{proof}
Now the machinery from \cite{NC06a, NC06b} gives us that multiplication
with $\Omega^{(k)}$ uses $O(k^2 b^k)$ operations.
Similarly to our reasoning in Section \ref{sec:fast-mod-cbc} we conclude
that computation of $T_d$ uses $O((m-w_d)^2 b^{m-w_d})$ operations
and therefore we have:
\begin{thm}
The cost of the reduced fast CBC algorithm is
of order $O(b^m+ \min\{s, s^\ast\} b^m+\sum_{d=1}^{\min\{s, s^\ast\}} (m-w_d)^2 b^{m-w_d})$.
\end{thm}
\section*{Appendix: The proof of Theorem~\ref{thmcbclpspp}}
\begin{proof}
We show the result by induction on $d$. For $d=1$, we have $z_1 = 1$. Thus we have
$$e_{N,1}^2 (Y_1 z_1)=\gamma_{\{1\}}\sum_{\substack{h\in\ZZ_{\ast}\\ h Y_1 z_1\equiv 0\ (N)}} \rho_{\alpha} (h).$$
Let now $\lambda\in(1/\alpha,1]$.
We now use Jensen's inequality. This inequality states that for non-negative numbers
$a_k$ and $p\in(0,1]$, it is true that
$$\sum_k a_k\le \left(\sum_k a_k^p\right)^{1/p}.$$
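(As a quick numerical sanity check of this inequality, given only for illustration: for $a=(0.7,0.2,0.05)$ and $p=1/2$ one has $\sum_k a_k=0.95$, while $\big(\sum_k a_k^{1/2}\big)^{2}\approx 2.27$.)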
Applying Jensen's inequality to $e_{N,1}^2 (Y_1 z_1)$, and noting that
$(\rho_\alpha (h))^\lambda=\rho_{\alpha\lambda} (h)$, we obtain
$$e_{N,1}^{2\lambda} (Y_1 z_1)\le \gamma_{\{1\}}^{\lambda}\sum_{\substack{h\in\ZZ_{\ast}\\
h Y_1 z_1\equiv 0\ (N)}} \rho_{\alpha\lambda}(h). $$
If $w_1\ge m$, then $\cZ_{N,w_1}=\{1\}$, $z_1=1$ and $N | b^{w_1}$. In this case, the condition $h Y_1 z_1\equiv 0\ (N)$ is satisfied
for any $h\in\ZZ_{\ast}$. Consequently,
$$
e_{N,1}^{2\lambda} (Y_1 z_1)\le \gamma_{\{1\}}^\lambda\sum_{h\in\ZZ_{\ast}} \rho_{\alpha\lambda}(h)\\
= \gamma_{\{1\}}^\lambda 2\zeta(\alpha\lambda)
\le \gamma_{\{1\}}^\lambda \frac{4\zeta(\alpha\lambda)}{b^{\max\{0,m-w_1\}}},
$$
hence \eqref{eqthmcbclpspp} is shown for this case.
If $w_1<m$, then we estimate $e_{N,1}^2 (Y_1 z_1)$ as follows. Since $z_1 = 1$,
\begin{align*}
e_{N,1}^{2\lambda} (Y_1 z_1)\le& \gamma_{\{1\}}^{\lambda} \sum_{\substack{h\in\ZZ_{\ast}\\ h Y_1 \equiv 0\ (N)}} \rho_{\alpha \lambda} (h) = \gamma_{\{1\}}^\lambda \sum_{\substack{h\in\ZZ_{\ast}\\ b^{m-w_1} | h}} \rho_{\alpha\lambda} (h) = \gamma_{\{1\}}^\lambda \sum_{h \in \ZZ_\ast} \rho_{\alpha \lambda}(h b^{m-w_1}).
\end{align*}
Since $\rho_{\alpha \lambda}(h b^{m-w_1}) = b^{-\alpha \lambda (m-w_1)} \rho_{\alpha\lambda}(h)$ and $\alpha \lambda > 1$, we obtain
\begin{align*}
e_{N,1}^{2\lambda} (Y_1 z_1) \le & \gamma_{\{1\}}^{\lambda} b^{-\alpha \lambda (m-w_1)} \sum_{h\in\ZZ_{\ast}} \rho_{\alpha \lambda} (h) = \gamma_{\{1\}}^\lambda b^{-\alpha \lambda (m-w_1)} 2 \zeta(\alpha \lambda) \le \gamma_{\{1\}}^\lambda \frac{4\zeta(\alpha\lambda)}{b^{\max\{0,m-w_1\}}},
\end{align*}
hence \eqref{eqthmcbclpspp} is also shown for this case.
Assume now that we have shown the result for some fixed $d\in [s-1]$, i.e., the generating vector $\bsz_d=(Y_1 z_1,\ldots,Y_d z_d)$ satisfies
$$e_{N,d}^{2\lambda} ((Y_1 z_1,\ldots,Y_d z_d))\le \sum_{\emptyset\neq\uu\subseteq [d]}\gamma_\uu^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\uu}}}{b^{\max\{0,m-\max_{j\in\uu} w_j\}}}.$$
Furthermore, assume that $z_{d+1}\in \cZ_{N,w_{d+1}}$ has been chosen according to Algorithm~\ref{algcbclpspp}. We then have
\begin{align*}
e_{N,d+1}^2 (\bsz_d,Y_{d+1} z_{d+1})=&\sum_{\emptyset\neq\uu\subseteq [d+1]}\gamma_\uu\sum_{
\bsh_\uu\in\D_\uu}\rho_\alpha (\bsh_\uu)\\
=&\sum_{\emptyset\neq\uu\subseteq [d]}\gamma_\uu\sum_{\bsh_\uu\in\D_\uu}
\rho_\alpha (\bsh_\uu)+\sum_{\substack{\emptyset\neq\uu\subseteq [d+1]\\
\{d+1\}\subseteq\uu}}\gamma_\uu\sum_{\bsh_\uu\in\D_\uu}\rho_\alpha (\bsh_\uu)\\
=&e_{N,d}^2 (\bsz_d) +\sum_{\substack{\emptyset\neq\uu\subseteq [d+1]\\
\{d+1\}\subseteq\uu}}\gamma_\uu\sum_{\bsh_\uu\in\D_\uu} \rho_\alpha (\bsh_\uu)\\
\le&\left(\sum_{\emptyset\neq\uu\subseteq [d]}\gamma_\uu^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\uu}}}{b^{\max\{0,m-\max_{j\in\uu} w_j\}}}\right)^{1/\lambda}+
\theta (z_{d+1}),
\end{align*}
where we used the induction assumption and where we write
\begin{equation}\label{eqthetalpspp}
\theta (z_{d+1})=\sum_{\substack{\emptyset\neq\uu\subseteq [d+1]\\
\{d+1\}\subseteq\uu}}\gamma_\uu\sum_{\bsh_\uu\in\D_\uu} \rho_\alpha (\bsh_\uu),
\end{equation}
where the dependence on $z_{d+1}$ in the right hand side is in $\D_\uu$.
By employing Jensen's inequality, we obtain
\begin{equation}\label{eqJensenlpspp}
e_{N,d+1}^{2\lambda} (\bsz_d,z_{d+1})\le
\sum_{\emptyset\neq\uu\subseteq [d]}\gamma_\uu^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\uu}}}{b^{\max\{0,m-\max_{j\in\uu} w_j\}}} + \left(\theta (z_{d+1})\right)^{\lambda}.
\end{equation}
We now analyze the expression $\left(\theta (z_{d+1})\right)^{\lambda}$. As $z_{d+1}$ was chosen to minimize the squared worst-case error,
we obtain
$$\left(\theta (z_{d+1})\right)^{\lambda}\le \frac{1}{|\cZ_{N,w_{d+1}}|}
\sum_{z\in \cZ_{N,w_{d+1}}}\left(\theta(z)\right)^{\lambda},$$
where $\theta(z)$ is the analogue of \eqref{eqthetalpspp} for $z\in \cZ_{N,w_{d+1}}$.
We now have, using Jensen's inequality twice,
\begin{align*}
(\theta(z))^{\lambda}\le&\sum_{\substack{\emptyset\neq\uu\subseteq [d+1]\\
\{d+1\}\subseteq\uu}}\gamma_\uu^\lambda\sum_{\bsh_\uu\in\D_\uu}\rho_{\alpha\lambda} (\bsh_\uu)\\
=&\gamma_{\{d+1\}}^\lambda \sum_{h_{d+1}\in\D_{\{d+1\}}} \rho_{\alpha\lambda}(h_{d+1})\\
&+\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda\sum_{h_{d+1}\in\ZZ_{\ast}} \rho_{\alpha\lambda} (h_{d+1})
\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\ \sum_{j\in \vv} h_j Y_j z_j\equiv - h_{d+1} Y_{d+1} z\ (N) }} \rho_{\alpha\lambda} (\bsh_\vv),
\end{align*}
and therefore
\begin{align*}
\lefteqn{ \left(\theta (z_{d+1})\right)^{\lambda}\le\frac{1}{|\cZ_{N,w_{d+1}}|}
\sum_{z\in \cZ_{N,w_{d+1}}}\gamma_{\{d+1\}}^\lambda \sum_{h_{d+1}\in\D_{\{d+1\}}} \rho_{\alpha\lambda}(h_{d+1})}\\
&+\frac{1}{|\cZ_{N,w_{d+1}}|}
\sum_{z\in \cZ_{N,w_{d+1}}}\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda\sum_{h_{d+1}\in\ZZ_{\ast}} \rho_{\alpha\lambda} (h_{d+1})
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\equiv - h_{d+1} Y_{d+1} z\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\\
=:&T_1 + T_2.
\end{align*}
For $T_1$, we see, in exactly the same way as for $d=1$, that
\begin{equation}\label{eqT1lpspp}
T_1\le \frac{4\gamma_{\{d+1\}}^\lambda \zeta (\alpha\lambda)}{b^{\max\{0,m-w_{d+1}\}}}.
\end{equation}
Regarding $T_2$, we again distinguish two cases. If $w_{d+1}\ge m$, we have $\cZ_{N,w_{d+1}}=\{0\}$, and
thus $T_2$ simplifies to
\begin{align}\label{eqT2_1lpspp}
T_2=&\frac{1}{|\cZ_{N,w_{d+1}}|}
\sum_{z\in \cZ_{N,w_{d+1}}}\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda\sum_{h_{d+1}\in\ZZ_{\ast}} \rho_{\alpha\lambda} (h_{d+1})
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv} h_j Y_j z_j\equiv 0\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\nonumber\\
=&\frac{2\zeta (\alpha\lambda)}{b^{\max\{0,m-w_{d+1}\}}}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\equiv 0\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\nonumber\\
<&\frac{4\zeta (\alpha\lambda)}{b^{\max\{0,m-w_{d+1}\}}}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}} \rho_{\alpha\lambda} (\bsh_\vv)\nonumber\\
=&\frac{4\zeta (\alpha\lambda)}{b^{\max\{0,m-w_{d+1}\}}}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
(2\zeta(\alpha\lambda))^{\abs{\vv}}\nonumber\\
=&\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\frac{2(2\zeta(\alpha\lambda))^{\abs{\vv}+1}}{b^{\max\{0,m-w_{d+1}\}}}.
\end{align}
On the other hand, if $w_{d+1}<m$, we obtain
\begin{align*}
T_2=&\frac{1}{b^{m-w_{d+1}-1}(b-1)}\\
& \times \left[
\sum_{z\in \cZ_{N,w_{d+1}}}\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\substack{h_{d+1}\in\ZZ_{\ast}\\ h_{d+1}\equiv 0\ (b^{m-w_{d+1}})}} \rho_{\alpha\lambda} (h_{d+1})
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in \ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\equiv - h_{d+1} Y_{d+1} z\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\right.\\
&+ \left.
\sum_{z\in \cZ_{N,w_{d+1}}}\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\substack{h_{d+1}\in\ZZ_{\ast}\\ h_{d+1}\not\equiv 0\ (b^{m-w_{d+1}})}} \rho_{\alpha\lambda} (h_{d+1})
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in \ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv} h_j Y_j z_j\equiv - h_{d+1} Y_{d+1} z\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\right]\\
=:&T_{2,1}+T_{2,2}.
\end{align*}
For the term $T_{2,1}$, note that if $h_{d+1}\equiv 0\ (b^{m-w_{d+1}})$, then $h_{d+1} Y_{d+1} z\equiv 0\, (N)$, so we obtain
\begin{align*}
T_{2,1}=&\frac{1}{b^{m-w_{d+1}-1}(b-1)}\\
& \times \sum_{z\in \cZ_{N,w_{d+1}}}\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\substack{h_{d+1}\in\ZZ_{\ast}\\ h_{d+1}\equiv 0\ (b^{m-w_{d+1}})}} \rho_{\alpha\lambda} (h_{d+1})
\sum_{\substack{\bsh_\vv\in \ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv} h_j Y_j z_j\equiv 0\ (N)}} \rho_{\alpha\lambda} (\bsh_\vv)\\
=&\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\substack{\bsh_\vv\in \ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\equiv 0\ (N)}} \rho_{\alpha\lambda} (\bsh_\vv)
\sum_{\substack{h_{d+1}\in\ZZ_{\ast}\\ h_{d+1}\equiv 0\ (b^{m-w_{d+1}})}} \rho_{\alpha\lambda} (h_{d+1})\\
=&\frac{2\zeta (\alpha\lambda)}{(b^{m-w_{d+1}})^{\alpha\lambda}}\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\substack{\bsh_\vv\in \ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv} h_j Y_j z_j\equiv 0\ (N)}} \rho_{\alpha\lambda} (\bsh_\vv).
\end{align*}
From this, it is easy to see that
$$T_{2,1}\le \frac{4\zeta (\alpha\lambda)}{b^{m-w_{d+1}}}\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\substack{\bsh_\vv\in \ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv} h_j Y_j z_j\equiv 0\ (N)}} \rho_{\alpha\lambda} (\bsh_\vv).$$
Regarding $T_{2,2}$, note that $h_{d+1}\not\equiv 0\ (b^{m-w_{d+1}})$ and $z\in \cZ_{N,w_{d+1}}$ implies $h_{d+1} Y_{d+1} z \not\equiv 0\ (N)$,
and for $z_1,z_2\in \cZ_{N,w_{d+1}}$ with $z_1\neq z_2$ we have $h_{d+1} Y_{d+1} z_1 \not\equiv h_{d+1} Y_{d+1} z_2\ (N)$. Therefore,
\begin{align*}
T_{2,2}\le&\frac{1}{b^{m-w_{d+1}-1}(b-1)}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\substack{h_{d+1}\in\ZZ_{\ast}\\ h_{d+1}\not\equiv 0\ (b^{m-w_{d+1}})}} \rho_{\alpha\lambda} (h_{d+1})
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\not\equiv 0\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\\
\le&\frac{2}{b^{m-w_{d+1}}}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{k_{d+1} \in \ZZ_{\ast}} \rho_{\alpha\lambda} (k_{d+1})
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\not\equiv 0\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\\
=&\frac{4\zeta (\alpha\lambda)}{b^{m-w_{d+1}}}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\!\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\not\equiv 0\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\\
=&\frac{4\zeta (\alpha\lambda)}{b^{m-w_{d+1}}}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\left(\sum_{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}} \rho_{\alpha\lambda} (\bsh_\vv)
-\!\!\!\!\!\!\!\!\!\sum_{\substack{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}\\
\sum_{j\in \vv}h_j Y_j z_j\equiv 0\ (N)}}\!\!\!\!\!\!\!\!\!\! \rho_{\alpha\lambda} (\bsh_\vv)\right).
\end{align*}
This yields
\begin{align}\label{eqT2_2lpspp}
T_2=& T_{2,1}+T_{2,2}\nonumber\\
\le& \frac{4\zeta (\alpha\lambda)}{b^{m-w_{d+1}}}
\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\sum_{\bsh_\vv\in\ZZ_{\ast}^{\abs{\vv}}} \rho_{\alpha\lambda} (\bsh_\vv)\nonumber\\
=&\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\vv}+1}}{b^{m-w_{d+1}}}\nonumber\\
=&\sum_{\emptyset\neq\vv\subseteq [d]}\gamma_{\vv\cup\{d+1\}}^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\vv}+1}}{b^{\max\{0,m-w_{d+1}\}}}
\end{align}
Combining \eqref{eqT1lpspp}, \eqref{eqT2_1lpspp}, and \eqref{eqT2_2lpspp} yields
$$(\theta (z_{d+1}))^\lambda\le \sum_{\substack{\uu\subseteq [d+1]\\ \{d+1\}\subseteq\uu}}\gamma_\uu^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\uu}}}{b^{\max\{0,m-w_{d+1}\}}}.$$
Plugging into \eqref{eqJensenlpspp}, we obtain
$$e_{N,d+1}^{2 \lambda}(\bsz_d,Y_{d+1}z_{d+1}) \le \sum_{\emptyset\neq\uu\subseteq [d]}\gamma_\uu^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\uu}}}{b^{\max\{0,m-\max_{j\in\uu} w_j\}}}
+\sum_{\substack{\uu\subseteq [d+1]\\ \{d+1\}\subseteq\uu}}\gamma_\uu^\lambda
\frac{2(2\zeta (\alpha\lambda))^{\abs{\uu}}}{b^{\max\{0,m-w_{d+1}\}}}.$$
This yields the result for $d+1$.
\end{proof}
|
\section{Introduction}
\label{Intro}
Experimental evidence from Type Ia Supernovae (SNIa),
baryon acoustic oscillations and gravitational waves
has firmly established that our Universe is expanding
at an accelerated rate. In spite of enormous effort, a fully consistent explanation for the origin of this behavior is still missing. Among the various mechanisms, the existence of an unknown form of energy (Dark Energy, DE) affecting the Universe on large scales is the
most widely accepted proposal. Yet the nature of DE remains quite elusive. The possibility that DE is modeled by the cosmological constant
acting as a source of vacuum energy was originally
considered a natural way out of the DE puzzle~\cite{Sahni:1999gb,Peebles:2002gy}. However, this scenario is at odds
with our field theoretical understanding of the quantum properties of vacuum,
thus requiring further investigation. Along this line, a plethora of
DE models have been put forward over the years~\cite{Peebles:1987ek,Ratra:1987rm,Wetterich:1987fm,Frieman:1995pm,Turner:1997npq,Caldwell:1997ii,Armendariz-Picon:2000nqq,Amendola:1999er,Armendariz-Picon:2000ulo,Kamenshchik:2001cp,Caldwell:1999ew,Caldwell:2003vq,Nojiri:2003vn,Feng:2004ad,Guo:2004fq,Elizalde:2004mq,Nojiri:2005sx,Deffayet:2001pu,Sen:2002in}.
An interesting model to account for the nature of DE
is the so-called Holographic Dark Energy (HDE)~\cite{Cohen:1998zx,Horava:2000tb,Thomas:2002pq,Li:2004rb,Bamba:2012cp,Ghaffari:2014pxa,Wang:2016och}, which emerges within the framework of quantum gravity.
The main ingredient of this approach is the
holographic principle, according to which the description of a volume of space can be thought of as encoded on a lower-dimensional boundary surface to the region.
In~\cite{tHooft:1993dmi,Susskind:1994vu} it has been pointed
out that effective local quantum field theories over-count the number of
independent degrees of freedom, predicting that entropy scales
extensively ($S\sim L^3$) for systems of size $L$ with UV cutoff $\Lambda$.
Later on, a solution to this problem has been provided in~\cite{Cohen:1998zx},
where it has been argued that the total energy of a system with size $L$
should not exceed that of an equally sized black hole, i.e. $L^3\rho_\Lambda\le L M_p^2$. Here, $M_p=(8\pi G)^{-1/2}$ is the reduced Planck mass, while $\rho_\Lambda$ denotes
the quantum zero-point energy density caused by the UV cutoff $\Lambda$
(we are working in natural units $\hslash=c=1$). The inequality is saturated
for the largest value of $L$. In this context,
the holographic dark energy density is obtained as
\begin{equation}
\label{HDE}
\rho_\Lambda=\frac{3c^2 M_p^2}{L^2}\,,
\end{equation}
where $c$ is a dimensionless constant and the
factor $3$ has been introduced for mere convenience.
Cosmological applications of the holographic principle and HDE
have been largely considered in the literature. For example, it was shown in~\cite{Enqvist:2004ny} that excluding
those degrees of freedom of the system that will never be observed by the effective field theory results in an IR cutoff $L$ at the future event horizon.
In a DE dominated Universe, such a horizon is then predicted to tend toward a constant value of the order of $H_0^{-1}$,
with $H_0$ being the present Hubble parameter~\cite{Setare:2007hq}.
Furthermore, the issue of assuming the apparent (Hubble) horizon
$R_A=1/H$ as IR cutoff in a flat Universe has been examined in~\cite{Hsu:2004ri}.
Despite the intensive study, the shortcomings of the HDE
in describing the history of a flat Friedmann-Robertson-Walker (FRW) Universe have prompted tentative changes to this approach. For instance, HDE has been used to address the DE problem in Brans-Dicke Cosmology~\cite{Gong:1999ge,Kim:2005gk,Setare:2006yj,Banerjee:2000gt,Xu:2008sn,Khodam-Mohammadi:2014wla}
by considering different IR cutoffs~\cite{Wang:2016och,Ghaffari:2014pxa,Xu:2008sn} and/or deformed entropy-area laws~\cite{Tavayef:2018xwx,Saridakis:2018unr,Saridakis:2020zol,Moradpour:2020dfm,Drepanou:2021jiv,Hernand}.
In particular, the latter path has led to promising models,
such as Tsallis~\cite{Tavayef:2018xwx,Saridakis:2018unr},
Barrow~\cite{Saridakis:2020zol} and Kaniadakis~\cite{Drepanou:2021jiv,Hernand} holographic dark energy.
Recently, motivated by the analysis of~\cite{Setare:2007hq},
it has been shown that Tsallis holographic description of DE (THDE) is non-trivially intertwined with tachyon dark energy model~\cite{Liu:2021heo}, in which the tachyon scalar field
is considered as a
source of DE. Specifically,
a correspondence between the two scenarios has been
established based on the reconstruction of
the dynamics of the tachyon field in THDE.
Starting from the above premises, in this work we explore in more depth the connection between the tachyon dark energy model and HDE. We frame our analysis in the context of HDE based on Barrow entropy~\cite{Saridakis:2020zol}, which arises from the attempt to incorporate quantum gravity effects on the horizon surface~\cite{Barrow:2020tzx}.
In this sense, our study
should be regarded as a preliminary effort toward extending
the paradigm of~\cite{Liu:2021heo} to a fully quantum gravity picture.
We analyze the cases of flat and non-flat FRW Universe, for both interacting and non-interacting DE.
Since scalar fields are generally conjectured to have driven inflation
in the very early Universe, we then study the inflation mechanism in our BHDE model. We find an analytical solution for
the slow-roll parameters, the scalar
spectral index and the tensor-to-scalar ratio. We also compare our findings with recent results in the literature.
The remainder of the work is structured as follows:
in the next Section we introduce BHDE. Sections~\ref{Tachflat} and~\ref{Tachnonflat} are devoted to analyzing the correspondence between the tachyon dark energy and BHDE in a flat and non-flat FRW Universe, respectively. In Sec.~\ref{infl} we discuss inflation in BHDE, while conclusions and outlook are summarized in
Sec.~\ref{Conc}.
\section{Barrow holographic dark energy}
\label{BHDE}
Let us briefly review the basics of BHDE.
We consider the four-dimensional Friedmann-Robertson-Walker (FRW) metric
\begin{equation}
\label{FRW}
ds^2\,=\,-dt^2+a^2(t)\left(\frac{dr^2}{1-k\,r^2}+r^2d\Omega^2\right),
\end{equation}
where $a(t)$ is the scale factor and $k=0,1,-1$ is the spatial curvature for a flat, closed and open Universe, respectively.
We use the definition~\eqref{HDE} for the holographic dark energy in standard Cosmology and
assume
\begin{equation}
\label{ldef}
L(t)\,=\,a(t) r(t)\,,
\end{equation}
where $r(t)$ is the (time-dependent) radius that is relevant to the future event horizon of the Universe~\cite{Setare:2007hq}. Since
\begin{equation}
\label{r}
\int_0^{r_1}\frac{dr}{\sqrt{1-k r^2}}\,=\,\frac{1}{\sqrt{|k|}}\sin\mathrm{n}^{-1}(\sqrt{|k|}\,r_1)\,=\,
\left\{\begin{array}{ll}
\sin^{-1}(\sqrt{|k|}\,r_1)/\sqrt{|k|}, & k=1, \\[2mm]
r_1, & k=0, \\[2mm]
\sinh^{-1}(\sqrt{|k|}\,r_1)/\sqrt{|k|}, & k=-1,
\end{array}\right.
\end{equation}
we easily obtain
\begin{equation}
\label{lbis}
L(t)\,=\,\frac{a(t)\sin\mathrm{n}\left[\sqrt{|k|}R_h(t)/a(t)\right]}{\sqrt{|k|}}\,,
\end{equation}
where $R_h$ is the future event horizon
given by
\begin{equation}
R_h\,=\,a\int_t^{\infty}\frac{dt}{a}\,=\,a\int_a^{\infty}\frac{da}{Ha^2}\,,\quad\,\, H=\frac{\dot a}{a}\,.
\end{equation}
HDE relies on the holographic principle, which
asserts that the number of degrees of freedom describing the physics of any quantum gravity system \emph{i}) scales as the bounding surface (rather than the volume) of the system and \emph{ii}) should be constrained by an infrared cutoff~\cite{tHooft:1993dmi,Susskind:1994vu}. This is in tune
with the Bekenstein-Hawking (BH) relation $S_{BH}=A/A_0$ for black holes, where $S_{BH}$ and $A$ denote the entropy and area of the black hole, respectively, while $A_0=4G$ is the Planck area. Recently, deformations of this relation have been proposed to
take into account quantum~\cite{Barrow:2020tzx,Tsallis:1987eu,Tsallis:2012js} and/or relativistic~\cite{Kaniadakis:2002zz} effects.
In particular, in~\cite{Barrow:2020tzx} it has been argued
that quantum gravity may introduce intricate, fractal features
on the black hole horizon, leading to the modified area law
\begin{equation}
\label{Barrow}
S_\Delta\,=\,\left(\frac{A}{A_0}\right)^{1+\Delta/2}.
\end{equation}
Deviations from BH entropy are quantified by the
exponent $0\le\Delta\le1$, with $\Delta=0$ giving the BH
limit and $\Delta=1$ corresponding to the maximal horizon
deformation. We emphasize that although this relation
resembles Tsallis entropy in non-extensive statistical thermodynamics~\cite{Tsallis:1987eu,Tsallis:2012js}, the origin and motivation
underlying Eq.~\eqref{Barrow} are completely different. Cosmological implications of Barrow entropy have been recently studied in
the context of Big Bang Nucleosynthesis~\cite{Barrow:2020kug} and Baryogenesis~\cite{Luciano:2022pzg}, among others.
The possibility of a running $\Delta$ has also been considered in~\cite{Running}.
Strictly speaking, Eq.~\eqref{Barrow} has been
formulated for black holes. However, it
is known that in any gravity theory one can
consider the entropy for the Universe
horizon in the same form as the black hole entropy, the only adjustment being the replacement of the black hole horizon radius
with the apparent horizon radius. This is at the heart
of the various generalizations of HDE with
modified entropy laws (see, e.g.~\cite{Tavayef:2018xwx,Saridakis:2018unr,Saridakis:2020zol,Moradpour:2020dfm,Drepanou:2021jiv}).
Now, in~\cite{Cohen:1998zx} Cohen et al. have proposed
the following inequality between the entropy, the IR ($L$)
and UV ($\Lambda$) cutoffs for a given system in an effective local quantum field theory
\begin{equation}
L^3 \Lambda^3\,\le\, S_{max}\simeq\,S_{BH}^{3/4}\,.
\end{equation}
If we express $S_{BH}$ as in Eq.~\eqref{Barrow}, we have
\begin{equation}
\Lambda^4\,\le\,\left(2\sqrt{\pi}\right)^{2+\Delta}\frac{L^{\Delta-2}}{A_0^{1+\Delta/2}}\,,
\end{equation}
where $\Lambda^4$ denotes the vacuum energy density,
i.e. the energy density of DE ($\rho_D$) in the HDE hypothesis~\cite{Guberina:2006qh}.
By using the above inequality, Barrow holographic dark energy density
can be proposed as
\begin{equation}
\label{brhod}
\rho_D\,=\,C L^{\Delta-2}\,,
\end{equation}
where $C$ is an unknown parameter with dimensions $[L]^{-2-\Delta}$.
It is worth noticing that for $\Delta=0$, the above relation
reduces to the standard HDE~\eqref{HDE}, provided that $C=3c^2M_p^2$. On the other hand, in the case where
deformation effects switch on ($\Delta\neq0$), BHDE
departs from the standard HDE, leading to different cosmological
scenarios~\cite{Saridakis:2020zol}.
Following the standard literature, we now define the critical
energy density $\rho_{cr}$ and the curvature energy density
$\rho_k$ as
\begin{equation}
\rho_{cr}\,=\, 3 M_p^2 H^2\,,\qquad \rho_k\,=\,\frac{3k}{8\pi G a^2}\,.
\end{equation}
We also introduce the three fractional energy densities
\begin{eqnarray}
\label{Om}
\Omega_m&=&\frac{\rho_m}{\rho_{cr}}\,=\,\frac{\rho_m}{3M_p^2H^2}\,,\\[2mm]
\label{OD}
\Omega_D&=&\frac{\rho_D}{\rho_{cr}}\,=\,\frac{C}{3M_p^2H^2}L^{\Delta-2}\,,\\[2mm]
\label{Ok}
\Omega_k&=&\frac{\rho_k}{\rho_{cr}}\,=\,\frac{k}{H^2a^2}\,,
\end{eqnarray}
where $\rho_m=3(1-c^2)M_p^2H^2$ is the matter energy density.
In particular, by setting $L=H^{-1}$ we obtain\footnote{There are several choices for the IR cutoff $L$. Following~\cite{Tavayef:2018xwx}, here we resort to the simplest one $L=H^{-1}$.
Other possible choices are the particle horizon, the future event horizon, the GO cutoff~\cite{GO} or combination thereof. However, in these cases one must generally resort to numerical evaluation to study the cosmological evolution of the model~\cite{Saridakis:2018unr}. Since we are interested in extracting analytical solutions and given the degree of arbitrariness in the selection of the best dark energy description, we leave the analysis of dark energy models with different IR cutoffs for future investigation.}
\begin{equation}
\Omega_D\,=\,\frac{C}{3M_p^2\,H^{\Delta}}\,.
\end{equation}
From Eqs.~\eqref{ldef} and~\eqref{r}, one can derive the following
expression for the time derivative of $L$~\cite{Liu:2021heo}
\begin{equation}
\label{dotL}
\dot{L}\,=\,HL+a\dot r\, =\, 1-\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}R_h/a)\,,
\end{equation}
where we have defined~\cite{Setare:2007hq}
\begin{equation}
\label{cosn}
\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}\,x)=\left\{\begin{array}{ll}
\cos(x), & k=1, \\[2mm]
1, & k=0, \\[2mm]
\cosh(x), & k=-1.
\end{array}\right.
\end{equation}
Now, for a flat FRW Universe filled by non-interacting BHDE and pressureless DM, the first Friedmann equation takes the form
\begin{equation}
\label{ffe}
H^2\,=\,\frac{1}{3M_p^2}\left(\rho_D+\rho_m\right),
\end{equation}
which, by use of Eqs.~\eqref{Om} and~\eqref{OD}, can be
rewritten as
\begin{equation}
\label{sumOm}
\Omega_m+\Omega_D\,=\,\Omega_D(1+u)\,=\,1\,,
\end{equation}
where
\begin{equation}
\label{u}
u=\frac{\rho_m}{\rho_D}=\frac{\Omega_m}{\Omega_D}\,.
\end{equation}
Since BHDE does not interact with the other components of the cosmos (DM),
the conservation equations of dust and BHDE read
\begin{eqnarray}
\label{c1}
&&\dot\rho_m+3H\rho_m\,=\,0\,,\\[2mm]
&&\dot\rho_D+3H\rho_D(1+\omega_D)\,=\,0\,,
\label{c2}
\end{eqnarray}
where we have denoted by $\omega_D=p_D/\rho_D$ and $p_D$
the equation of state parameter and pressure of BHDE, respectively.
Differentiating Eq.~\eqref{ffe} with respect to time and using
the continuity equations~\eqref{c1} and~\eqref{c2}, i.e. $6M_p^2H\dot H=\dot\rho_m+\dot\rho_D=-3H\left[\rho_m+\rho_D(1+\omega_D)\right]$,
we are led to
\begin{equation}
\label{dotH1}
\frac{\dot H}{H^2}\,=\,-\frac{3}{2}\left(1+\omega_D+u\right)\Omega_D\,.
\end{equation}
Likewise, by plugging Eq.~\eqref{brhod} into~\eqref{c2},
we find
\begin{equation}
\label{dotH2}
\frac{\dot H}{H^2}\,=\,(1+\omega_D)\frac{3}{\Delta-2}\,,
\end{equation}
which gives, by comparison with Eq.~\eqref{dotH1}
\begin{equation}
\label{omegad}
\omega_D\,=\,\frac{u(2-\Delta)\Omega_D}{2-(2-\Delta)\Omega_D}-1\,.
\end{equation}
With the aid of Eq.~\eqref{sumOm}, this finally yields\footnote{Unlike the THDE model, where $\omega_D$ is divergent for $\delta_T<1$ and $\Omega_D=1/(2-\delta_T)$ ($\delta_T$ being the Tsallis exponent), BHDE is well-defined for any value of $0\le\Delta\le1$.}
\begin{equation}
\label{omegaD}
\omega_D\,=\,\frac{\Delta}{(2-\Delta)\Omega_D-2}\,.
\end{equation}
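As a simple consistency check of Eq.~\eqref{omegaD}, note that in the limit $\Delta\to 0$ one gets $\omega_D\to 0$: with the Hubble-horizon cutoff $L=H^{-1}$ and no interaction, standard HDE behaves as pressureless matter and cannot drive the accelerated expansion, which is precisely the kind of shortcoming that motivates the deformed scenarios discussed above.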
On the other hand, if there exists an interaction
\begin{equation}
\label{intercoup}
Q=3b^2H(\rho_m+\rho_D)
\end{equation}
between BHDE and matter, the continuity equations~\eqref{c1} and~\eqref{c2} become
\begin{eqnarray}
\label{c3}
&&\dot\rho_m+3H\rho_m\,=\,Q\,,\\[2mm]
&&\dot\rho_D+3H\rho_D(1+\omega_D)\,=\,-Q\,.
\label{c4}
\end{eqnarray}
Following similar calculations as above, one can show that Barrow
holographic energy equation of state takes the form
\begin{equation}
\omega_D\,=\,\frac{\Delta+2b^2/\Omega_D}{(2-\Delta)\Omega_D-2}\,,
\end{equation}
where $b$ is the coupling parameter that quantifies the interaction.
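Explicitly, the intermediate steps parallel the non-interacting case: plugging Eq.~\eqref{brhod} with $L=H^{-1}$ into the continuity equation~\eqref{c4} now gives
\begin{equation}
\frac{\dot H}{H^2}\,=\,\frac{3}{\Delta-2}\left(1+\omega_D+\frac{b^2}{\Omega_D}\right),
\end{equation}
while Eq.~\eqref{dotH1} is unchanged, since the interaction term cancels in the total continuity equation. Equating the two expressions and using Eq.~\eqref{sumOm} then reproduces the equation of state quoted above.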
\subsection{The age of the Universe}
We can now estimate the age of the present Universe as
\begin{eqnarray}
\nonumber
t&=&\int dt = \int \frac{dt}{dH}\,dH = \int \frac{H^2}{\dot H}\,\frac{dH}{H^2}\\[2mm]
&=&\frac{2\left(\frac{3 M_p^2}{C}\right)^{1/\Delta}}{3\,\Delta}\int \frac{1-\left(1-\Delta/2\right)\Omega_D}{\Omega_D^{1-1/\Delta}\left(1-\Omega_D\right)}\,d\Omega_D\,,
\label{test}
\end{eqnarray}
where we have used Eqs.~\eqref{dotH2} and \eqref{omegaD} along with
\begin{equation}
\frac{dH}{H^2}\,=\,-\frac{\left(\frac{3 M_p^2}{C}\right)^{1/\Delta}}
{\Delta\,\Omega_D^{1-1/\Delta}}
\,d\Omega_D\,.
\end{equation}
By integrating the above relation, it follows that
\begin{equation}
t=\frac{2-\Delta}{3H}\left[1+\left(\frac{\Delta}{2-\Delta}\right)\,
{}_2 F_1(1,\frac{1}{\Delta};1+\frac{1}{\Delta};\frac{C}{3M_p^2\,H^{\Delta}})
\right],
\end{equation}
where ${}_2 F_1(a,b;c;z)$ is the Gauss hypergeometric function.
This equation can be used to estimate the order
of the age of the current Universe ($z=0$) in our model. By exploiting
Eqs.~\eqref{dotH2} and~\eqref{test}, we get
\begin{eqnarray}
\nonumber
t&\approx& \left(\frac{H^2}{\dot H}\right)\big|_{z=0}\int \frac{dH}{H^2}\\[2mm]
&=&\frac{2-\Delta}{3H_0}\left(1-\frac{\omega_D(z=0)}{1+\omega_D(z=0)}\right).
\label{tfin}
\end{eqnarray}
For $\omega_D(z=0)=-2/3$, we then
have $t=1/H_0$ for $\Delta=1$, corresponding
to the maximal deformation of the Bekenstein-Hawking area law.
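To make the estimate explicit, Eq.~\eqref{tfin} can be recast as $t=(2-\Delta)/\left[3H_0\left(1+\omega_D(z=0)\right)\right]$, so that for $\omega_D(z=0)=-2/3$ one simply finds $t=(2-\Delta)/H_0$, interpolating between $t=2/H_0$ for the undeformed case $\Delta=0$ and the quoted value $t=1/H_0$ for $\Delta=1$.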
As observed in~\cite{Tavayef:2018xwx},
further corrections to Eq.~\eqref{tfin} may arise due
to either different modifications of the horizon entropy or
other IR cutoffs.
\section{Tachyon scalar field as Barrow holographic dark energy: flat FRW Universe}
\label{Tachflat}
In this Section we analyze the correspondence between
the tachyon dark energy model and the BHDE scenario in a flat FRW Universe. We consider the non-interacting and interacting case separately. A preliminary study along this direction has been carried out in~\cite{Prelimin}.
\subsection{Non-interacting case}
In~\cite{Setare:2007hq} it has been shown that
the energy density $\rho_T$ and pressure $p_T$ for the tachyon scalar field take the form
\begin{eqnarray}
\label{ted}
\rho_T&=&\frac{V(T)}{\sqrt{1-\dot T^2}}\,,\\[2mm]
p_T&=&-V(T)\sqrt{1-\dot T^2}\,,
\label{ped}
\end{eqnarray}
where $V(T)$ is the tachyon potential energy. From these relations,
we derive the equation of state parameter for the tachyon as
\begin{equation}
\label{omegaT}
\omega_T\,=\,p_T/\rho_T=\dot T^2-1\,.
\end{equation}
We now aim at describing the dynamics of the tachyon scalar field in BHDE.
Toward this end, we assume that the tachyon energy density~\eqref{ted}
can be modeled by Barrow holographic dark energy density~\eqref{brhod} evaluated at the future event horizon $R_h$. We then obtain
\begin{equation}
\label{Vt}
C R_h^{\Delta-2}\,=\,\frac{V(T)}{\sqrt{1-\dot T^2}}\,.
\end{equation}
On the other hand, by equating Eqs.~\eqref{omegaD} and~\eqref{omegaT},
it follows that
\begin{equation}
\frac{\Delta}{(2-\Delta)\Omega_D-2}\,=\,\dot T^2-1\,,
\end{equation}
which gives
\begin{equation}
\label{dotT}
\dot T\,=\,\sqrt{\frac{\left(2-\Delta\right)\Omega_D+\Delta-2}{\left(2-\Delta\right)\Omega_D-2}}\,.
\end{equation}
Let us now invert Eq.~\eqref{Vt} with respect to the potential $V(T)$
\begin{equation}
\label{potv}
V(T)\,=\,C R_h^{\Delta-2}\sqrt{1-\dot T^2}\,.
\end{equation}
Inserting Eq.~\eqref{dotT}, we infer
\begin{equation}
V(T)\,=\,C R_h^{\Delta-2}\sqrt{\frac{\Delta}{2-(2-\Delta)\Omega_D}}\,.
\end{equation}
Similarly, from Eq.~\eqref{ped} we get for the tachyon pressure
\begin{equation}
\label{ptac}
p_T\,=\,p_D\,=\,-C R_h^{\Delta-2}\,\frac{\Delta}{2-(2-\Delta)\Omega_D}\,.
\end{equation}
The above relations can be further elaborated
by taking the time derivative of $\rho_D =\rho_T$ in Eq.~\eqref{brhod}
\begin{equation}
\label{drhoD}
\dot\rho_D\,=\,C(\Delta-2)L^{\Delta-3}\dot L\,,
\end{equation}
and observing that $\dot L=0$ in a flat Universe (see Eq.~\eqref{dotL}), which implies $\dot\rho_D=0$. In the absence of interactions between holographic dark energy and matter, from Eq.~\eqref{c2} we obtain
$\omega_D=-1$ and, thus, $\dot T=0$. This is satisfied,
provided that
\begin{equation}
\left(2-\Delta\right)\Omega_D+\Delta-2\,=\,0\,,
\end{equation}
which admits as solution
\begin{equation}
\label{om1}
\Omega_D=1\,.
\end{equation}
By using the above results in Eqs.~\eqref{potv} and~\eqref{ptac},
we reconstruct the tachyon potential and pressure as
\begin{eqnarray}
V(T)&=&C R_h^{\Delta-2}\,,\\[2mm]
p_T&=&-C R_h^{\Delta-2}\,.
\end{eqnarray}
We notice that the above picture is consistent with the
cosmological constant scenario, since $\omega_D=-1$
as discussed above.
\subsection{Interacting case}
\label{thintca}
Let us now extend the above analysis to the interacting case.
In this context, the continuity equations for dark energy and matter
are provided by Eqs.~\eqref{c3} and~\eqref{c4}, respectively, with the coupling
term being given by Eq.~\eqref{intercoup}.
As in the previous case,
the condition $\dot\rho_D=0$ holds true. However, by combining
with Eq.~\eqref{c4}, we now obtain
\begin{eqnarray}
\nonumber
\omega_D&=&-b^2\left(\frac{\Omega_m}{\Omega_D}+1\right)-1\\[2mm]
&=&-\frac{b^2}{\Omega_D}-1\,,
\end{eqnarray}
where in the second step we have used Eq.~\eqref{sumOm}.
By equating to Eq.~\eqref{omegaT}, we get
\begin{equation}
\dot T^2\,=\,-\frac{b^2}{\Omega_D}\,,
\end{equation}
which only makes sense for
$b=0$. As in~\cite{Liu:2021heo}, we thus find that
in a flat FRW Universe, $\dot T^2$ must always be vanishing.
We can also extract the following relations
\begin{eqnarray}
\label{51}
V(T)\,=\,\rho_D\,,\\[2mm]
p_D\,=\,-\rho_D\,,
\label{52}
\end{eqnarray}
which are consistent with the condition $\omega_D=-1$ discussed in the non-interacting case.
From Eqs.~\eqref{51} and \eqref{52} we can
also explore the stability of our BHDE model against
perturbation. To this end, we compute the
squared sound speed $v_s^2$. Clearly, for $v_s^2>0$,
the model is stable, otherwise it is unstable. The squared
sound speed is defined as
\begin{equation}
v^2_s = \frac{dp_D}{d \rho_D}\,,
\end{equation}
which gives in our case $v^2_s=-1<0$. This
means that the model is unstable. Therefore, we cannot conclude that a Barrow holographic dark energy dominated Universe will be the fate of the future Universe.
Moreover, since $\omega_D = -1$, we have $\omega'_D =0$, where the prime denotes the derivative with respect to $\ln a$. Therefore, the $\omega$-$\omega'$ analysis is meaningless for this model.
\section{Tachyon scalar field as Barrow holographic dark energy: non-flat FRW Universe}
\label{Tachnonflat}
Let us explore how the connection between BHDE and
tachyon dark energy model appears in a non-flat FRW Universe.
Toward this end, we consider the time derivative of BHDE~\eqref{brhod} and use Eq.~\eqref{dotL} to get
\begin{equation}
\dot\rho_D\,=\,C(\Delta-2)L^{\Delta-3}\left[1-\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}x)\right],
\end{equation}
where $x\equiv R_h/a$. By means of the continuity equation~\eqref{c4}, this can be cast as
\begin{eqnarray}
\nonumber
&&-3H\rho_D(1+\omega_D)-3b^2H(\rho_m+\rho_D)\\[2mm]
&&\hspace{3mm}=C(\Delta-2)L^{\Delta-3}\left[1-\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}x)\right].
\end{eqnarray}
We can now resort to Eq.~\eqref{omegaT} (recall
that we are imposing $\omega_T=\omega_D$ in our model) to obtain
\begin{eqnarray}
\nonumber
&&-3 H \rho_D \dot T^2-3b^2H(\rho_m+\rho_D)\\[2mm]
&&\hspace{3mm}=C(\Delta-2)L^{\Delta-3}\left[1-\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}x)\right].
\end{eqnarray}
This relation can be further manipulated by dividing both sides
by $3H\rho_D$ and using Eq.~\eqref{u} to give
\begin{eqnarray}
\nonumber
&&-\dot T^2-b^2(u+1)\\[2mm]
&&\hspace{3mm}=\frac{C(\Delta-2)L^{\Delta-3}}{3H\rho_D}
\left[1-\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}x)\right].
\end{eqnarray}
After employing Eq.~\eqref{brhod} and the condition $L=H^{-1}$,
we finally reach
\begin{eqnarray}
\nonumber
&&-\dot T^2-b^2(u+1)\\[2mm]
&&\hspace{3mm}=\frac{\Delta-2}{3}
\left[1-\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}x)\right],
\end{eqnarray}
which can be equivalently written as
\begin{eqnarray}
\dot T^2&=&\frac{2}{3}-b^2(u+1)-\frac{\Delta}{3}\\[2mm]
\nonumber
&&+\,\frac{\Delta-2}{3}\frac{1}{\sqrt{|k|}}\cos\mathrm{n}(\sqrt{|k|}x)\,.
\end{eqnarray}
\begin{figure}[t]
\hspace{-5mm}\begin{minipage}{\columnwidth}
{\resizebox{8.9cm}{!}{\includegraphics{k1VaryD.pdf}}}
\end{minipage}
\caption{Evolution trajectories of $\dot T^2$ for a closed ($k=1$) Universe. We set $u=0.04$ and $b^2=0.01$ as in~\cite{Liu:2021heo}.}
\label{fig1}
\end{figure}
Let us now distinguish between the cases $k=\pm1$ (one can
verify that $k=0$ correctly reproduces the model in Sec.~\ref{thintca}).
For $k=1$, we get from the definition~\eqref{cosn}
\begin{equation}
\label{k1}
\dot T^2\,=\,\frac{2}{3}-b^2(u+1)-\frac{\Delta}{3}+\frac{\Delta-2}{3}\cos(x)\,.
\end{equation}
In order for $\dot T$ to be zero (i.e. $\omega_D=-1$), we must have
\begin{equation}
\label{cosxT0}
\cos(x)\,=\,\frac{3b^2(u+1)}{\Delta-2}+1\,.
\end{equation}
This provides the condition for which tachyon model of BHDE
can be used to explain the origin of the cosmological constant
in a closed FRW Universe. By contrast, in~\cite{Liu:2021heo} it is
shown that $\dot T^2$ cannot be zero in Tsallis holographic dark energy in a non-flat Universe. The evolution trajectory of $\dot T^2$ in Eq.~\eqref{k1} is plotted in Fig.~\ref{fig1} for fixed $b$ and $u$ and various $\Delta$. One can see that $\dot T^2$ is positive and decreases monotonically for increasing $\cos(x)$.
Similarly, we can consider the dynamics of the
tachyon field in an open ($k=-1$) Universe. Following
the same reasoning as above, we now obtain
\begin{equation}
\label{kmeno1}
\dot T^2\,=\,\frac{2}{3}-b^2(u+1)-\frac{\Delta}{3}+\frac{\Delta-2}{3}\cosh(x)\,,
\end{equation}
which vanishes provided that
\begin{equation}
\label{cosh}
\cosh(x)\,=\,\frac{3b^2(u+1)}{\Delta-2}+1\,.
\end{equation}
However, since $\cosh(x)>1$ while the right side of Eq.~\eqref{cosh} is smaller than unity for any $0\le\Delta\le1$ (the term $3b^2(u+1)/(\Delta-2)$ being negative), we infer that this model
cannot explain the cosmological constant, in agreement with the result of~\cite{Liu:2021heo}.
The evolution of $\dot T^2$ in Eq.~\eqref{kmeno1} is
plotted in Fig.~\ref{fig2} for different $\Delta$.
As before, we notice that $\dot T^2$
decreases monotonically for increasing $\cosh(x)$,
but in this case it is always smaller than zero,
which is not a physically valid situation.
In passing, we mention that the same behavior has been found in~\cite{Liu:2021heo}
in the context of HDE based on Tsallis entropy.
Therefore, we conclude that BHDE is not suitable
to model the tachyon dark energy in an open Universe.
\begin{figure}[t]
\hspace{-5mm}\begin{minipage}{\columnwidth}
{\resizebox{9.cm}{!}{\includegraphics{km1VaryD.pdf}}}
\end{minipage}
\caption{Evolution trajectories of $\dot T^2$ for an open ($k=-1$) Universe. We set $u=0.04$ and $b^2=0.01$ as in~\cite{Liu:2021heo}.}
\label{fig2}
\end{figure}
\section{Inflation in Barrow Holographic Dark Energy}
\label{infl}
In this Section we discuss inflation in BHDE. For reasons that will become clear below, and following~\cite{Inft}, here we consider the more general expression for the length scale $L^{-2}=\alpha H^{2}+\beta\dot{H}$, where $\alpha$ and $\beta$ are dimensionless constants.
Assuming that the expansion of the Universe
is driven by BHDE~\eqref{brhod} and neglecting the matter
contribution due to the rapid inflationary expansion, Eq.~\eqref{ffe}
becomes
\begin{equation}
\label{neglect}
H^2\,=\,\frac{C}{3M_p^2}\left(\alpha H^2+\beta\dot H\right)^{1-\Delta/2}\,,
\end{equation}
from which we infer
\begin{equation}
\label{tderiv}
\dot H\,=\,\frac{H^2}{\beta}\left[\left(\frac{3M_p^2}{C}\right)^{\frac{2}{2-\Delta}}(H^2)^{\frac{\Delta}{2-\Delta}}-\alpha
\right].
\end{equation}
From this relation, it is clear that setting
the IR cutoff $L\simeq H^{-1}$ (i.e. $\beta=0$) as in the previous study
would give rise to technical issues in the present framework.
To simplify the solution of Eq.~\eqref{tderiv}, we introduce
the e-fold number $N=\log\left(a/a_i\right)$, where $a_i$ is the
initial value of the scale factor $a$.
By observing that $dN=Hdt$ and $\dot H=\frac{1}{2}\frac{dH^2}{dN}$,
integration of Eq.~\eqref{tderiv} gives
\begin{equation}
\label{logH}
\log\left\{\tilde H^2\left[\gamma\left(\tilde H^2
\right)^{\frac{\Delta}{2-\Delta}}
-\alpha\right]^{1-2/\Delta}
\right\}\Bigg|_{\tilde H_i}^{\tilde H_f}\,=\,-\frac{2\alpha N}{\beta}\,,
\end{equation}
where $\tilde H=H/M_p$ is the dimensionless Hubble parameter and
\begin{equation}
\label{gamma}
\gamma=\left(\frac{3M_p^2}{C}\right)^{\frac{2}{2-\Delta}}\,M_p^{\frac{2\Delta}{2-\Delta}}\,.
\end{equation}
Here we have denoted the Hubble parameter
at the end of inflation by $\tilde H_f$.
From Eq.~\eqref{neglect} we can now compute
the characteristic parameters of slow-roll inflation. Specifically, the
first slow-roll parameter is given by
\begin{equation}
\label{ep1}
\epsilon_1\,=\,-\frac{\dot H}{H^2}=-\frac{1}{\beta}\left[\gamma\left(\tilde H^2
\right)^{\frac{\Delta}{2-\Delta}}-\alpha\right].
\end{equation}
The other slow-roll parameters can be derived
by using the definition $\epsilon_{n+1}=d\log(\epsilon_n)/dN$.
For the second parameter $\epsilon_2$ we get
\begin{equation}
\label{ep2}
\epsilon_2\,=\,\frac{\dot \epsilon_1}{H\epsilon_1}=\frac{2\gamma}{\beta}\left(\frac{\Delta}{2-\Delta}\right)\left(\tilde H^2
\right)^{\frac{\Delta}{2-\Delta}}\,.
\end{equation}
Let us now evaluate the Hubble parameter at the end of inflation.
This phase is characterized by $\epsilon_1=1$.
By straightforward calculations, we obtain
\begin{equation}
\label{Hf}
\tilde H_f^2\,=\,\left(\frac{\gamma}{\alpha-\beta}\right)^{1-2/\Delta}\,.
\end{equation}
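For clarity, the condition $\epsilon_1=1$ in Eq.~\eqref{ep1} amounts to $\gamma\left(\tilde H_f^2\right)^{\frac{\Delta}{2-\Delta}}=\alpha-\beta$, i.e. $\tilde H_f^2=\left[\left(\alpha-\beta\right)/\gamma\right]^{\frac{2-\Delta}{\Delta}}$, which coincides with Eq.~\eqref{Hf} upon noting that $-(2-\Delta)/\Delta=1-2/\Delta$.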
On the other hand, at the beginning of inflation (including
the horizon crossing time) Eq.~\eqref{logH} gives
\begin{equation}
\tilde H_i^2\,=\,\left[\frac{\gamma}{\alpha}\left(1+\frac{\beta\,e^{-\frac{2\alpha N}{\beta}\frac{\Delta}{2-\Delta}}}{\alpha-\beta}\right)\right]^{1-2/\Delta}\,,
\end{equation}
which can be used to calculate the slow-roll parameters for earlier time
by direct substitution in Eqs.~\eqref{ep1} and~\eqref{ep2}.
In order to derive the scalar spectral index $n_s-1$
and the tensor-to-scalar ratio $r$, we follow~\cite{PLBSar,Inft}
and make use of the usual perturbation procedure. We are led to
\begin{equation}
\label{nsr}
n_s-1 \,=\,-2\epsilon_1-2\epsilon_2\,,\qquad r=16\epsilon_1\,.
\end{equation}
Clearly, a full perturbation analysis is needed to
obtain the exact expressions of $n_s-1$ and $r$.
Two comments are in order here: first, we notice
that the constant $\gamma$ does not intervene
in the calculation of the slow-roll parameters at
the horizon crossing time, which means that neither
$n_s-1$ nor $r$ depend on it. As explained in~\cite{Inft},
this constant can be estimated by considering the amplitude
of the scalar perturbation.
Furthermore, it is worth mentioning that a similar analysis of inflation and of the correspondence between BHDE and the tachyon field has been proposed in~\cite{Prelimin}. However, in that case the authors consider
values of the Barrow parameter
$\Delta$ larger than unity, which are actually not allowed
in the Barrow model. This casts some doubt on the results exhibited in~\cite{Prelimin}.
\subsection{Trans-Planckian Censorship Conjecture}
The large-scale structures we currently see in the Universe
originated from matter and energy quantum fluctuations
produced during inflation. Such fluctuations
cross the Hubble radius during the early phase,
are stretched out and classicalize, and finally
re-enter the Hubble horizon to produce the CMB anisotropies.
The key point is that if inflation lasted longer than
the supposed minimal period,
then it would be possible to observe length scales
originating from modes smaller than the Planck length
during inflation~\cite{Martin}. This problem is usually referred to
as ``Trans-Planckian problem''. To avoid inconsistencies,
it has been conjectured that this problem cannot arise
in any consistent model of quantum gravity (``Trans-Planckian Censorship Conjecture'', TCC)~\cite{Bedroya}.
The TCC states that no length scales which cross the
Hubble horizon could ever have had a wavelength smaller
than the Planck length. This is imposed
by requiring that
\begin{equation}
\label{TCC}
\frac{L_p}{a_i} \,<\,\frac{H_f^{-1}}{a_f}\,,
\end{equation}
where $L_p=1/M_p$ is the Planck length and we have
denoted by $a_f$ the scale factor at the end of inflation.
By using Eq.~\eqref{Hf} for the Hubble parameter
at the final time, the TCC~\eqref{TCC} becomes
\begin{equation}
\left(\frac{\gamma}{\alpha-\beta}\right)^{1-2/\Delta}<(8\pi\hspace{0.2mm}e^{N})^2\,,
\end{equation}
the validity of which can be examined by comparison with future
observational data.
\section{Conclusions and outlook}
\label{Conc}
The origin of the accelerated expansion
of the Universe is an open problem
in modern Cosmology. To date, the most reliable explanation
is provided by the existence of an enigmatic form of energy, the Dark Energy, affecting the Universe on large scales. Several candidates
have been considered to account for this phenomenon.
In particular, the holographic dark energy has been
largely studied, also in connection with different
real scalar field theories, such as quintessence~\cite{Peebles:1987ek,Ratra:1987rm,Wetterich:1987fm,Frieman:1995pm,Turner:1997npq,Caldwell:1997ii}, K-essence~\cite{Armendariz-Picon:2000nqq},
phantom~\cite{Caldwell:1999ew,Caldwell:2003vq,Nojiri:2003vn}, braneworld~\cite{Amendola:1999er}, interacting models~\cite{Deffayet:2001pu} and tachyon~\cite{Sen:2002in,Setare:2007hq}.
Recently, the interest has been extended to
the Tsallis holographic dark energy~\cite{Tavayef:2018xwx,Saridakis:2018unr,Saridakis:2020zol,Moradpour:2020dfm,Drepanou:2021jiv} and the possibility of using it to describe the dynamics of
the tachyon field~\cite{Liu:2021heo}.
In this work we have considered the further scenario
of tachyon model as Barrow holographic dark energy.
Barrow entropy arises from the effort to include
quantum gravity effects on the black hole horizon. In
this sense, the present analysis should be regarded
as a preliminary step toward a fully quantum gravity extension of~\cite{Liu:2021heo}. We have established a correspondence between
BHDE and the tachyon field model in a FRW Universe,
both in the presence and absence of interactions between dark energy and matter. For a flat FRW Universe, we have found that $\dot T^2=0$, i.e. $\omega_D=-1$, providing a possible explanation for
the cosmological constant. On the other hand, the dynamics
of the tachyon field turns out to be much more complicated
for a curved Universe. Specifically, for a closed Universe
$\dot T^2$ decreases monotonically for increasing $\cos(R_h/a)$
and vanishes when Eq.~\eqref{cosxT0} is satisfied.
By contrast, $\dot T^2$ is always negative for an open Universe,
which shows that BHDE cannot be used to model tachyon field dynamics in this case. We have finally investigated an inflationary scenario described by a Universe filled with BHDE and commented on the Trans-Planckian Censorship Conjecture. Comparison with future observational data will enable us
to establish whether such model is empirically consistent.
Further aspects remain to be addressed. For instance,
we can look at the correspondence between the tachyon field and other
dark energy scenarios, in particular stable dark energy models. Furthermore, it is interesting to extend the above framework
to the case of Kaniadakis holographic dark energy~\cite{Drepanou:2021jiv}, which is based on a relativistic self-consistent generalization of the classical Boltzmann-Gibbs entropy~\cite{Kaniadakis:2002zz}.
Finally, it is essential to examine to what extent
our effective model reconciles with predictions of
more fundamental candidate theories of quantum gravity,
such as String Theory or Loop Quantum Gravity.
Work along these directions is presently under active investigation
and will be presented elsewhere.
\smallskip
\section{Introduction}
The unit quaternions form a group that is isomorphic to SU(2), and therefore they have the ideal mathematical structure to represent (pure) spin-1/2 quantum states, or qubits. But while a unit quaternion $\bm{q}$ is effectively a point on a 3-sphere, a qubit $\psi$ is often represented as a point on a 2-sphere (the Bloch sphere). Such dimensional reduction results from ignoring the global phase of the spinor $\ket{\chi}$, dropping to a projective Hilbert space where $\ket{\chi}$ and $exp(i\alpha)\ket{\chi}$ correspond to the same qubit $\psi$ (in this case, a Hopf fibration \cite{Penrose,Urbantke}). This paper examines certain symmetries and natural operations (evident on the full 3-sphere) that have been obscured by this usual reduction; after all, a 3-sphere has a different global geometry than does a circle mapped to every point of a 2-sphere.
Despite widespread agreement that the SU(2) symmetries of the 3-sphere are more applicable to qubits than are 2-sphere symmetries, the project of analyzing qubits on the full 3-sphere has been generally neglected. The likely reason is that such an analysis might imply that the global phase has some physical meaning, against conventional wisdom. To avoid this potential conclusion the global phase is typically removed at the outset. But the topological mismatch noted above means there is no continuous way to remove this phase from all points on the 3-sphere. This issue might be seen as a reason to at least temporarily retain the global phase when analyzing spin-1/2 states or equivalent two-level quantum systems.
As further motivation, note that geometric (Berry) phases \cite{Berry} are routinely measured in the laboratory, in seeming contradiction to the orthodox position that global phases are irrelevant. The typical response here is to deny that single-particle Hilbert spaces are appropriate for measuring relative phases, but nevertheless such phases \textit{can} be computed in a single-spinor framework (see Section 5.2 for further discussion). This paper takes the position that the predictions of quantum theory would be the same whether global phase is a meaningless gauge or an unknown hidden variable, and the latter possibility is enough to motivate this line of research.
Even if global phases are canonically meaningless, they still can be important to research that strives to extend and/or explicate quantum theory. Several independent researchers have hit upon using the global phase as a natural hidden variable with a role in probability distributions \cite{Pearle,KGWE,Harrison}, and having a richer single-qubit structure may be useful for ongoing efforts to explain quantum probabilities in terms of natural hidden variables \cite{Argaman,WhartonInfo}. But for any such work it is important to look at the full 3-sphere, to avoid defining a hidden variable in terms of a globally ill-defined parameter.
For those readers unconcerned with such foundational questions, one can still motivate the 3-sphere viewpoint where it is mathematically advantageous to represent and manipulate spinors in quaternionic form (even if the global phase is eventually discarded). These applications are developed in the next two sections. Section 2 defines a useable map between spinors and unit quaternions that conveniently provides a direct quaternion-to-Bloch-sphere mapping. Right- and left- quaternionic multiplications are shown to correspond to rotations on the Bloch sphere, with particularly surprising results for left-multiplications. After developing dynamics in Section 3, one immediate application is a dramatic simplification of issues related to time-reversal. Specifically, one can time-reverse an arbitrary spin-1/2 state via a simple left-multiplication, without either complex-conjugating the state vector or including such a conjugation as part of a time-reversal operator. This permits a straightforward, classical-style analysis of the time-symmetry of the Schr\"odinger-Pauli equation.
Section 4 looks at further ramifications motivated by this 3-sphere viewpoint. It is shown that a (seemingly) necessary symmetry-breaking in the Schr\"odinger-Pauli equation (the sign of $i$) looks quite unnatural when framed in terms of quaternions. This broken symmetry is related to the choice of \textit{which} Hopf fibration one uses to reduce the 3-sphere to the quantum state space. In section 4.3, restoring this symmetry motivates an alternative second-order dynamical equation, encoding standard dynamics but containing a richer hidden structure. The extra parameters naturally encode the choice of Hopf fibration, avoiding the broken symmetry while maintaining a clear connection to ordinary quantum states. Combined with the global phase, these new parameters comprise either a larger gauge symmetry or a natural space of hidden variables. Section 5 then discusses and expands upon all of these results.
Surveying the literature, the closest analog to the analysis in Section 2 concerns transformations of plane-wave electromagnetic signals, translated into four-dimensional Euclidean space via an extended Jones calculus \cite{Karlsson}. (That approach used normalized real 4-vectors instead of unit quaternions, but this is not an essential difference.) While it seems doubtful that the novel digital signaling applications motivated by that research might be applicable in a quantum context, it nevertheless demonstrates that a quaternionic viewpoint can yield new perspectives on a well-understood system.
It has also been noted that the standard mathematics for spinors looks awkward when expressed in quaternionic form, most explicitly in work by Adler \cite{Adler}. In this prior work, Adler focuses on the complex inner product, and proposes a quaternionic replacement while leaving the dynamics unchanged. Such a step has the effect of halving the state space, and motivates the field of quaternionic quantum mechanics \cite{Adler2}. But apart from the initial motivation, it should be noted that the present paper does not follow this path in any way. Far from extending the traditional machinery of quantum mechanics into the domain of quaternionic inner products, this work simply explores the evident symmetries of the 3-sphere, and tries to preserve such symmetries while maintaining a map to standard spin-1/2 quantum states. It turns out that this goal can best be accomplished via a dramatically \textit{enlarged} state space; a one-to-many mapping from qubits to quaternions.
\section{Quaternionic Qubits}
\subsection{A Spinor-Quaternion Map}
A qubit can be represented by any point along the surface of the Bloch sphere, with the north and south poles corresponding to the pure states $\ket{0}$ and $\ket{1}$ respectively, as shown in Figure 1. (Qubits here are assumed to be pure; a later discussion of mixed states will be framed in terms of distributions over pure states, never as points inside the Bloch sphere.) For a given point $(\theta,\phi)$ on the sphere (in usual spherical coordinates), the corresponding qubit is defined by
\begin{equation}
\label{eq:qubit}
\psi =e^{-i\phi/2} cos(\theta/2)\hspace{1mm} \ket{0} + e^{i\phi/2}sin(\theta/2)\hspace{1mm} \ket{1}.
\end{equation}
As noted above, the global phase is not encoded in a qubit, so $\psi$ and $exp(i\alpha)\psi$ correspond to the same physical state.
The distinction between a spinor and a qubit, as used in this paper, is that spinors distinguish between such global phases. Here a spinor is defined as $\ket{\chi}={a \choose b}$, with $a,b\in\mathbb{C}$ and imposed normalization $\ip{\chi}{\chi}=1$. Multiplying $\ket{\chi}$ by $exp(i\alpha)$ results in a different spinor, albeit one that corresponds to the same qubit. It is crucial to note that there is not a globally-unique way to decompose $\ket{\chi}$ into the three angles $(\theta,\phi,\alpha)$, where the first two represent the location of the qubit on the Bloch sphere. For example, attempting
\begin{equation}
\label{eq:3phase}
\ket{\chi} \hspace{2mm}= e^{i\alpha}\begin{pmatrix}
cos(\frac{\theta}{2}) e^{-i\frac{\phi}{2}} \\
sin(\frac{\theta}{2}) e^{i\frac{\phi}{2}}
\end{pmatrix}
\end{equation}
results in a coordinate singularity for qubits on the z-axis, leading to many possible values of $\alpha$. This reflects the fact that $\ket{\chi}$ naturally represents a point on a 3-sphere, and the global geometry of a 3-sphere is not simply a phased 2-sphere. And if $\alpha$ cannot be globally defined, it cannot be neatly removed without consequences.
This point is clearer when the spinor is rewritten as a quaternion. There are many ways to accomplish this, but an obvious choice is the invertible map $M_i\!:\! \ket{\chi} \to \bm{q}$ defined by $\bm{q}=a+b\bm{j}$, where $\bm{q}\in\mathbb{H}$. (A short primer on quaternions can be found in the Appendix.) Explicitly, this map reads
\begin{equation}
\label{eq:qdef}
M_i[\ket{\chi}]=\bm{q}=Re(a)+\bm{i}Im(a)+\bm{j}Re(b)+\bm{k}Im(b).
\end{equation}
\begin{figure}[h]
\includegraphics[width=10cm]{Bloch-1.png}
\caption{Six representative spinors are shown on the Bloch sphere, along with their quaternion equivalent under the map $M_i$. The states can be (left)-multiplied by a global phase term $exp(\bm{i}\alpha)$, so there are many spinors (and quaternions) at a given point on the sphere. For example, $\bm{q}\!=\!\bm{i}$ is also at $\ket{0}$, and $\bm{q}\!=\!\bm{k}$ is also at $\ket{1}$.}
\end{figure}
Normalization is enforced by restricting $\bm{q}$ to unit quaternions, $|\bm{q}|^2=1$. From this it should be evident that the space of all unit quaternions lies on a unit 3-sphere, and therefore so does the space of all normalized spinors.
The previous point concerning the ambiguity of $\alpha$ can also be made clearer in a quaternionic context. Under the map $M_i$, the quaternionic version of (\ref{eq:3phase}) is
\begin{equation}
\label{eq:q3phase}
\bm{q} = e^{\bm{i}\alpha}e^{\bm{j}\frac{\theta}{2}} e^{-\bm{i}\frac{\phi}{2}},
\end{equation}
from which it is evident that if $\theta=0$, only the combination $(\alpha-\phi/2)$ can be assigned a unique value. Despite this ambiguity, (\ref{eq:q3phase}) can always be used to find the corresponding Bloch sphere unit vector $\hat{q}$ in spherical coordinates. But if $\bm{q}$ is not already of the form in (\ref{eq:q3phase}), it would seem to be easier to find $\hat{q}$ by inverting the map $M_i$ (\ref{eq:qdef}) and using standard spinor analysis (which involves discarding the global phase).
A more elegant method for finding the Bloch sphere unit vector $\hat{q}$ \textit{without} passing through the spinor representation is to generate a unit pure quaternion $\hat{\bm{q}}$ (with no real component) and then map $\hat{\bm{q}}$ directly to $\hat{q}$ in Cartesian coordinates. Assuming the map $M_i$, this can be done via
\begin{eqnarray}
\label{eq:qhat}
\hat{\bm{q}} =\bar{\bm{q}}\bm{iq}, \\
\label{eq:fdef}
\hat{q}=f[\hat{\bm{q}}]\equiv\hat{\bm{q}}_k\hat{x}-\hat{\bm{q}}_j\hat{y}+\hat{\bm{q}}_i\hat{z}.
\end{eqnarray}
Here $\hat{\bm{q}}_i$ is the $\bm{i}$-component of $\hat{\bm{q}}$, \textit{etc.}, and this last equation is easily invertible, $\hat{\bm{q}}=f^{-1}[{\hat{q}}]$, given the Cartesian components of $\hat{q}$. The former equation (\ref{eq:qhat}), however, is not invertible; inserting the form of $\bm{q}$ from (\ref{eq:q3phase}) into (\ref{eq:qhat}), one finds that the global phase $\alpha$ always disappears exactly.
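As a concrete illustration of these maps (using only the definitions above), consider the spinor $\ket{\chi}=(\ket{0}+\ket{1})/\sqrt{2}$, which according to (\ref{eq:qubit}) sits at $\theta=\pi/2$, $\phi=0$, i.e. along $+\hat{x}$. The map (\ref{eq:qdef}) gives $\bm{q}=(1+\bm{j})/\sqrt{2}$, and Eqn (\ref{eq:qhat}) yields
\begin{equation}
\hat{\bm{q}}=\tfrac{1}{2}(1-\bm{j})\,\bm{i}\,(1+\bm{j})=\tfrac{1}{2}(\bm{i}+\bm{k})(1+\bm{j})=\bm{k},
\end{equation}
so that (\ref{eq:fdef}) indeed returns $\hat{q}=\hat{x}$.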
But despite the mathematical elimination of $\alpha$ when mapping to the Bloch sphere, this quaternionic perspective still permits a geometrical interpretation of the global phase. This is because Eqn. (\ref{eq:qhat}) is known to represent a rotation $\bm{i}\to\hat{\bm{q}}$ on the 2-sphere of unit pure quaternions (as further discussed in the Appendix). The different global phases, then, apparently correspond to \textit{different} rotations that will take $\bm{i}$ to the same $\hat{\bm{q}}$.
These rotations can also be mapped on the Bloch sphere itself. First, write $\bm{q}$ in the most natural form of an arbitrary unit quaternion;
\begin{equation}
\label{qandw}
\bm{q} = e^{\hat{\bm{w}} \beta},
\end{equation}
where $\hat{\bm{w}}$ is another unit \textit{pure} quaternion, and $\beta$ is an angle. To interpret (\ref{eq:qhat}) as a rotation on the Bloch sphere, simply map all of the pure quaternions to the Bloch sphere using Eqn (\ref{eq:fdef}); $\hat{q}=f[\hat{\bm{q}}]$, $\hat{z}=f[{\bm{i}}]$, $\hat{w}=f[\hat{\bm{w}}]$. Eqn (\ref{eq:qhat}) then indicates that the Bloch sphere vector $\hat{q}$ can be found by rotating the vector $\hat{z}$ by an angle $-2\beta$ around the axis $\hat{w}$. Just as there are many rotations that will take one vector into another, there are many quaternions that correspond to any given vector $\hat{q}$.
This implies that the most natural reading of the spin-1/2 state in the form of the quaternion $\bm{q}$ is not a mere vector on a 2-sphere, but rather as a \textit{rotation} on a 2-sphere. This rotation can be \textit{used} to generate a particular vector $\hat{q}$, but it also contains more information not available in $\hat{q}$, such as the angle $\beta$. This angle is distinct from the global phase $\alpha$ (as the latter cannot be precisely defined in a global manner).
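As an example of this rotational reading, $\bm{q}=(1+\bm{j})/\sqrt{2}=e^{\bm{j}\pi/4}$ (the quaternion assigned by $M_i$ to the spinor $(\ket{0}+\ket{1})/\sqrt{2}$) has $\hat{\bm{w}}=\bm{j}$ and $\beta=\pi/4$, so $\hat{w}=f[\bm{j}]=-\hat{y}$; rotating $\hat{z}$ by $-2\beta=-\pi/2$ around $-\hat{y}$ carries it into $\hat{x}$, which is indeed the Bloch vector of that spinor.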
\subsection{Right Multiplication}
For a spinor represented as a unit vector on the Bloch sphere, a rotation of that vector by an angle $\gamma$ around an arbitrary axis $\hat{n}$ can be achieved by acting with the complex matrix:
\begin{equation}
\label{eq:Rn}
\bm{R}_{\hat{n}}(\gamma) = cos(\frac{\gamma}{2}) I - i \, sin( \frac{\gamma}{2} ) \hat{n} \cdot \vec{\sigma}
\end{equation}
Here, $\vec{\sigma}$ is the usual vector of Pauli matrices, defined in the Appendix.
There is a simple correspondence between $\bm{R}_{\hat{n}}(\gamma)$ and an exponential quaternion, due to the strict parallel between $i\vec{\sigma}$ and the three imaginary quaternions $\bm{i,j,k}$. (See the Appendix and Table 1 for further details on this point.) Assuming the map $M_i[\chi]\!\!=\!\!\bm{q}$ defined above, a right multiplication by $exp({-\bm{k}\gamma/2})$, $exp({\bm{j}\gamma/2})$, or $exp({-\bm{i}\gamma/2})$ on a unit quaternion $\bm{q}$ rotates the corresponding Bloch sphere vector $\hat{q}$ by an angle $\gamma$ around the positive $\hat{x}$, $\hat{y}$, or $\hat{z}$ axes (respectively).
More generally, a right multiplication by $exp({ -\hat{\bm{n}} \gamma/2 })$ effects a rotation of an angle $\gamma$ around the arbitrary axis $\hat{n}=f[\hat{\bm{n}}]$ such that:
\begin{equation}
\label{eq:RM}
M_i[\bm{R}_{\hat{n}}(\gamma) \ket{\chi}] = \bm{q} e^{ -\hat{\bm{n}} \frac{\gamma}{2}} .
\end{equation}
This simple relationship can also be seen from Eqn (\ref{eq:qhat}); as $\bm{q}\to\bm{q}\,\,exp(-\hat{\bm{n}} \gamma/2)$, one finds $\hat{\bm{q}}\to exp(\hat{\bm{n}} \gamma/2)\,\hat{\bm{q}}\,\,exp(-\hat{\bm{n}} \gamma/2)$. Again, this is a rotation of $\hat{\bm{q}}$ on the 2-sphere of unit pure quaternions, which can be mapped to the Bloch sphere via (\ref{eq:fdef}).
As every unit quaternion can be written in the form $exp({ -\hat{\bm{n}} \gamma/2 })$, and as we are only interested in transformations that keep $\bm{q}$ normalized, there are no other right-multiplications to consider. Table 1 lists some useful special rotations, corresponding to quantum gates (assuming the use of the $M_i$ map from spinors to quaternions).
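As a quick check of this correspondence, take $\ket{\chi}=\ket{0}$, i.e. $\bm{q}=1$: right-multiplying by the quaternion $\bm{k}=e^{\bm{k}\pi/2}$ listed in Table 1 for the Pauli X-gate gives $\bm{q}'=\bm{k}$, and Eqn (\ref{eq:qhat}) then yields $\hat{\bm{q}}'=\bar{\bm{k}}\bm{i}\bm{k}=-\bm{i}$, i.e. $\hat{q}'=-\hat{z}$, the Bloch vector of $\ket{1}$, exactly as an X-gate should act (and consistent with the caption of Figure 1, where $\bm{k}$ also sits at $\ket{1}$).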
\begin{table}
\caption{\label{tab1} Some common single-qubit gates are presented in terms of a right-multiplied quaternion (assuming the use of the map $M_i$). For $\pm\pi$-rotations, the two possible directions yield a different sign outcome.}
\begin{indented}
\item[] \begin{tabular}{| c | c | c |}
\hline
Gate & Matrix Operator & Equivalent Quaternion \\
 & & (Right Multiplication)\\ \hline
Pauli X-Gate & $ \pm i \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} $ & { \large $e^{\pm\bm{k} \frac{\pi}{2}}=\pm \bm{k}$} \\ \hline
Pauli Y-Gate & $ \mp i \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} $ & { \large $e^{\pm\bm{j} \frac{\pi}{2}}=\pm \bm{j}$ } \\ \hline
Pauli Z-Gate & $ \pm i \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} $ & { \large $e^{\pm\bm{i} \frac{\pi}{2}}=\pm \bm{i}$ } \\ \hline
Phase Shift Gate & $ \begin{bmatrix} e^{-i \theta/2} & 0 \\ 0 & e^{i \theta/2} \end{bmatrix} $ & { \large $e^{-\bm{i} \frac{\theta}{2}}$ } \\ \hline
Hadamard Gate & $ \dfrac{\pm i}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $ & { \large $e^{\pm\frac{\bm{i}+\bm{k}}{\sqrt{2}} \frac{\pi}{2}}=\pm\frac{\bm{i}+\bm{k}}{\sqrt{2}}$ } \\ \hline
\end{tabular}
\end{indented}
\end{table}
\subsection{Left Multiplications}
From a quaternion-based viewpoint, one would expect no essential difference for left-multiplication as compared to right-multiplication; given the above results, left-multiplication should merely encode another class of rotations. But right-multiplications have seemingly spanned the range of possible unitary operators, so a spinor-based perspective might find it surprising that there is another type of transformation at all. Indeed, in some quaternion-based approaches to quantum spin \cite{Avron}, left-multiplications have simply been left undefined, despite the clear meaning of such operations in the quaternion algebra.
The resolution of this apparent disagreement lies in the fact that left-multiplications map to non-unitary operators and/or global phase shifts. (This was recently detailed in an analysis of electromagnetic plane-waves, concerning these same left-isoclinic rotations in 4D Euclidean signal space \cite{Karlsson}.) Some left-multiplications correspond to anti-unitary operators, making this mathematics particularly useful for analysis of time-reversed spin-1/2 systems. The connection between quaternions and time-reversal of spin-1/2 states has been known for some time \cite{Avron,Dyson}, but utilizing generic left-multiplications (as opposed to simply a special time-reversal operator) allows for a deeper analysis of the relevant symmetries.
Except for a measure-zero set of (unitary) phase-change operators and (anti-unitary) time-reversal operators, most of the quaternionic left-multiplications correspond to operators that are merely non-unitary. A famous theorem by Wigner \cite{Wigner} indicates that such operators must take pure states into mixed states, and such operators have already found significant application in quantum information theory.
Here a new motivation presents itself, via the quaternion mathematics. Given that there is no essential difference between quaternionic left- and right-multiplication, every symmetry evident in the right-multiplication sector should have an equally important symmetry in the left-multiplication sector. (Recall the original motivation of this paper was to see if perhaps the Bloch-sphere viewpoint had obscured particular features evident on the full 3-sphere.)
As before, we shall assume the map $M_i[\ket{\chi}]\!\!=\!\!\bm{q}$, and enforce the continued normalization of $\bm{q}$ by only multiplying exponential quaternions. Using the notation $\bm{q}'=\bm{q}_L \bm{q}$, the simplest case is the left multiplication $\bm{q}_L\!=\!exp({\bm{i}\gamma/2})$, which simply changes the global phase of $\bm{q}$. Eqn (\ref{eq:qhat}) indicates that this does not lead to any Bloch sphere rotation, because $\hat{\bm{q}}'=\hat{\bm{q}}$.
This same equation offers a geometrical interpretation of a general left-multiplication $\bm{q}_L$. Recall that (\ref{eq:qhat}) implies $\bm{q}$ does not merely encode a Bloch vector $\hat{q}$, but instead a \textit{rotation} of the vector $\hat{z}$, with many different rotations ($\bm{q}$'s) that can take $\hat{z}$ to the same $\hat{q}$ (corresponding to different global phases). Now, a left multiplication on $\bm{q}$ can be viewed as two consecutive rotations;
\begin{equation}
\label{eq:qprime}
\hat{\bm{q}}'=\bar{\bm{q}} \left( \bar{\bm{q}}_L \bm{i} \bm{q}_L \right) \bm{q}.
\end{equation}
In other words, $\bm{q}_L$ serves to \textit{first} rotate $\hat{z}=f[\bm{i}]$, before the rotation encoded by $\bm{q}$ can be executed. And as rotations do not commute, one cannot calculate $\hat{q}'=f[\hat{\bm{q}}']$ without knowing \textit{which particular rotation} $\bm{q}$ encodes. Different global phases of the original $\bm{q}$ will therefore lead to a different final Bloch vector $\hat{q}'$, even if $\hat{q}$ and $\bm{q}_L$ are exactly known. Such a transformation is neither unitary nor anti-unitary, but may be mathematically useful even if it is not physically possible.
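A minimal example makes this phase dependence explicit. Take the left multiplication $\bm{q}_L=e^{\bm{j}\pi/4}$ and two quaternions representing the same qubit $\ket{0}$ (i.e. the same Bloch vector $\hat{z}$) but with different global phases, $\bm{q}=1$ and $\bm{q}=\bm{i}$. In the first case $\bm{q}'=e^{\bm{j}\pi/4}$ and (\ref{eq:qhat}) gives $\hat{\bm{q}}'=\bm{k}$, i.e. $\hat{q}'=+\hat{x}$; in the second case $\bm{q}'=e^{\bm{j}\pi/4}\bm{i}=(\bm{i}-\bm{k})/\sqrt{2}$ and one finds $\hat{\bm{q}}'=-\bm{k}$, i.e. $\hat{q}'=-\hat{x}$. The same left multiplication thus sends the same initial Bloch vector to two different final Bloch vectors, depending on the (usually discarded) global phase.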
The two exceptions to this ambiguity are when the central term in (\ref{eq:qprime}) is either $\bm{i}$ or $-\bm{i}$. The former case has been discussed above; this is just a global phase shift. The latter case can be realized by a left multiplication of (say) $\bm{q}_L=\!\bm{j}$ or $\bm{q}_L=\!\bm{k}$. The net effect of such a left-multiplication would be a simple minus sign; $\hat{\bm{q}}'=-\hat{\bm{q}}$, inverting the original Bloch sphere vector. It follows that both $\bm{j}$ and $\bm{k}$ act like the anti-unitary time-reversal operator $\bm{T}$ (when used to left-multiply a quaternion).
If one insists on thinking of $\bm{q}$ as simply encoding a vector $\hat{q}$ on the Bloch sphere (rather than as a rotation), it is still possible to interpret left-multiplications as a rotation of $\hat{q}$ around a sometimes-unknown axis. A general left multiplication of $\bm{q}$ by $\bm{q}_L=exp({ -\hat{\bm{n}} \gamma/2 })$ is equivalent to rotating the Bloch sphere vector $\hat{q}$ by an angle $\gamma$ around some axis. But this rotation axis $\hat{r}$ is no longer given by $\hat{n}=f[\hat{\bm{n}}]$. Indeed, $\hat{r}$ is not even computable from knowledge of the vector $\hat{q}$; it depends on the entirety of $\bm{q}$. Still, if the global phase $\alpha$ is completely unknown, instead of one particular rotation axis $\hat{r}$, there are many possible rotation axes, $\hat{r}(\alpha)$. These possible axes form a circle on the Bloch sphere, and $\hat{q}$ passes through the center of this circle.
The result is that $\bm{q}_L$ corresponds to a cone of possible rotation axes (assuming the global phase is unknown). The angle of this cone can be determined from the relationship
\begin{equation}
\label{eq:cone}
\hat{n} \cdot \hat{z} = \hat{q} \cdot \hat{r}.
\end{equation}
In other words, the angle between the $z$-axis and $\hat{n}$ is the half-angle of the cone produced by the possible values of $\hat{r}$. In the special case that $\hat{\bm{n}}=\bm{i}$, this cone angle is zero. In this case, the only possible rotation axis for $\hat{q}$ is $\hat{q}$ itself, or no rotation at all; this corresponds to a global phase change, with no state change.
The other special case is when $\hat{\bm{n}}$ lies in the quaternionic $\bm{j}-\bm{k}$ plane, which means that $\hat{n}$ lies in the $x-y$ plane of the Bloch sphere. The angle between $\hat{z}$ and $\hat{n}$ is then always $\pi/2$. In this case, the possible rotation axes $\hat{r}(\alpha)$ form a great circle: the equator corresponding to a pole defined by $\hat{q}$. A $\gamma=\pi$ rotation around \textit{any} of these axes will send $\hat{q}\to -\hat{q}$, exactly reversing the direction of the Bloch sphere vector, regardless of $\alpha$. (Again, this corresponds to an anti-unitary operation, $\bm{T}$.) In general, for the map $M_i[\ket{\chi}]=\bm{q}$, this reversal is equivalent to any left-multiplication of the form
\begin{equation}
\label{eq:Tdef}
M_i[\bm{T}\ket{\chi}]=e^{[\bm{j} cos(\delta) + \bm{k} sin(\delta)]\pi/2} \bm{q}
\end{equation}
for any angle $\delta$. One convenient left-multiplication of this form is at $\delta=0$, or $\bm{jq}$, which will be used as the time-reversed representation of $\bm{q}$ in the next section.
\section{Dynamics}
\subsection{Spin-1/2 in a magnetic field}
When it comes to the equations that describe the dynamics of a charged spin-1/2 state in a magnetic field, quaternions also provide a useful and simplifying framework. In a magnetic field $\vec{{B}}(t)$, the standard Schr\"odinger-Pauli equation for $\ket{\chi(t)}$ reads (in spinor form)
\begin{equation}
\label{eq:sspe}
i\hbar \frac{d}{dt} \ket{\chi}= -\gamma \frac{\hbar}{2} \vec{{B}}\cdot\vec{\sigma} \, \ket{\chi}.
\end{equation}
Here $\gamma$ is the gyromagnetic ratio.
Because of the correspondence between the three components of $i\vec{\sigma}$ and the imaginary quaternions $(\bm{i},\bm{j},\bm{k})$, this matrix algebra can be trivially encoded in the quaternionic version of the Schr\"odinger-Pauli equation, which simply reads
\begin{equation}
\label{eq:qspe}
\dot{\bm{q}}=-\bm{q}\bm{b}.
\end{equation}
Here $\bm{b}$ is a pure quaternion defined in terms of the three components of $\vec{{B}}$;
\begin{equation}
\label{eq:bdef}
\bm{b} \equiv \frac{\gamma}{2} \left( \bm{i} B_z - \bm{j} B_y + \bm{k} B_x \right).
\end{equation}
But these equations are unsatisfactory in that they only describe the geometric phase, and this is not measurable on its own; only the combined dynamic plus geometric phase can be detected. An inclusion of even the simplest and most fundamental dynamical phase (say, a constant-energy term $exp(-i\omega_0t)$, where the energy $\hbar\omega_0$ might include a rest mass) dramatically changes these equations. Indeed, in the limit $\vec{B}\to0$, this would be the only surviving phase.
Inclusion of this simplest dynamic phase would appear as an extra term $\hbar\omega_0\chi$ on the right side of (\ref{eq:sspe}). The corresponding quaternionic equation (\ref{eq:qspe}) is
\begin{equation}
\label{eq:spe}
\dot{\bm{q}}=-\bm{q}\bm{b}-\bm{i}\omega_0\bm{q}.
\end{equation}
Crucially, while $\bm{b}$ enters as a right-multiplication, a quaternionic $\bm{i}$ enters as a left-multiplication. Somehow, one particular pure quaternion $(\bm{i})$ has been singled out by the dynamics, over $\bm{j}, \bm{k},$ etc.
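As a concrete illustration of (\ref{eq:spe}), the following minimal Python sketch integrates the equation for a constant field (the values of $\gamma$, $\omega_0$ and $\vec{B}$ below are arbitrary placeholders) and compares the result against $\bm{q}(t)=e^{-\bm{i}\omega_0 t}\,\bm{q}_0\,e^{-\bm{b}t}$, which one can check by differentiation is the closed-form solution whenever $\bm{b}$ is constant in time; the norm of $\bm{q}$ is preserved throughout:
\begin{verbatim}
import numpy as np

def qmul(p, q):
    pw, px, py, pz = p; qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qexp(v):
    """exp of a pure quaternion v = (0, x, y, z)."""
    th = np.linalg.norm(v[1:])
    out = np.zeros(4); out[0] = np.cos(th)
    if th > 0.0:
        out[1:] = np.sin(th)*v[1:]/th
    return out

I = np.array([0.0, 1.0, 0.0, 0.0])
gamma, w0 = 1.0, 5.0                                 # placeholder constants
B = np.array([0.3, -0.2, 1.0])                       # placeholder constant field (Bx, By, Bz)
b = 0.5*gamma*np.array([0.0, B[2], -B[1], B[0]])     # b = (gamma/2)(i Bz - j By + k Bx)

def rhs(q):
    return -qmul(q, b) - w0*qmul(I, q)               # dq/dt = -q b - i w0 q

q0 = np.array([1.0, 0.0, 0.0, 0.0])                  # q(0): spin up with zero phase
q, dt, nsteps = q0.copy(), 1.0e-4, 20_000
for _ in range(nsteps):                              # classical fourth-order Runge-Kutta
    k1 = rhs(q); k2 = rhs(q + 0.5*dt*k1)
    k3 = rhs(q + 0.5*dt*k2); k4 = rhs(q + dt*k3)
    q = q + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

t = nsteps*dt
q_exact = qmul(qmul(qexp(-w0*t*I), q0), qexp(-t*b))  # q(t) = exp(-i w0 t) q0 exp(-b t)
print(np.linalg.norm(q), np.max(np.abs(q - q_exact)))  # norm stays 1; error is tiny
\end{verbatim}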
The source of this asymmetry can be traced back to the original map $M_i$, defined in Section 2; other choices would have resulted in a different final term of (\ref{eq:spe}). To see this, define a different map $M_v(\chi)\equiv \bm{u}M_i(\chi)$, where $\bm{u}$ is any unit quaternion. Under this alternate map, one finds a new representation of the spin state $M_v(\ket{\chi})=\bm{q'}$, related to the old representation by $\bm{q'}=\bm{uq}$ (or, equivalently, $\bm{q}=\bar{\bm{u}}\bm{q'}$). Using this in (\ref{eq:spe}), and left-multiplying by $\bm{u}$, results in
\begin{eqnarray}
\label{eq:vspe}
\dot{\bm{q'}}&=&-\bm{q'}\bm{b}-\hat{\bm{v}}\omega_0\bm{q'},\\
\label{eq:vdef}
\hat{\bm{v}}&\equiv& \bm{ui\bar{u}}.
\end{eqnarray}
Here the unit quaternion $\hat{\bm{v}}$ is guaranteed to be pure.
One can also consider alternate maps of the form $M_i \bm{r}$, but as detailed in section 2.2, this is merely a rotation of the entire coordinate system, and only changes the map $f$ between the pure quaternions and the Bloch sphere (\ref{eq:fdef}). (Also, it changes the inverse of this map, which shows up in $\bm{b}$ via (\ref{eq:bdef}).) The most general map, $\bm{u} M_i \bm{r}$, then, defines both the coordinate system in which the Bloch sphere is embedded and the pure quaternion $\hat{\bm{v}}$ in the dynamical equation (\ref{eq:vspe}).
\subsection{Application: Time Reversal}
One example of the value of quaternionic equations can be found by examining the issue of time-reversal. Applied to the standard (\ref{eq:sspe}), one finds a series of decisions requiring substantial care and expertise to get correct: Does the non-unitary time-reversal operator $\bm{T}$ apply to such differential equations, or merely to instantaneous states? Does the sign of $i$ change along with the sign of $d/dt$, or is conjugation itself a way of effecting $t\to -t$? Are the Pauli matrices all time-odd, or just the imaginary $\sigma_y$? Is energy time-odd or time-even? This section argues that such questions become far more straightforward when applied to (\ref{eq:spe}), and might even provide fresh insights to curious fermionic features such as $\bm{T}^2=-1$.
For the quaternionic equation, it turns out that one can use the same logic as time-reversal in classical physics; no complex conjugations are required. Namely, one first changes the sign of all the time-odd physical quantities, then changes the sign of $t$ in all of the differential equations, and finally looks to see if the transformed equation has the same form as the original equation. (If so, one has time-symmetric physical laws.) In the case of (\ref{eq:spe}), the only time-odd quantities are the magnetic field $\bm{b}$ and the angular momentum encoded by $\bm{q}$. Importantly, classical energy is time-even (recall, $\omega_0$ might represent a rest mass term, $mc^2/\hbar$), and should not change sign under time reversal.
Section 2.3 showed that one can reverse the direction of any arbitrary spin direction encoded by $\bm{q}$ via $\bm{q}\to\bm{jq}$. If one also takes $\bm{b}\to -\bm{b}$ and changes the sign of the time-derivative on the left side of (\ref{eq:spe}), this equation becomes
\begin{equation}
\label{eq:Trev}
-\bm{j}\dot{\bm{q}}=+\bm{j}\bm{q}\bm{b}-\bm{i}\bm{j}\omega_0\bm{q}.
\end{equation}
Another left-multiplication by $\bm{j}$ therefore restores the exact form of (\ref{eq:spe}), because $\bm{jij}=\bm{i}$. Therefore the Schr\"odinger-Pauli equation is time-symmetric, in precisely the same sense as classical physics.
If one performs this time-reversal twice, and demands the same left-multiplication on $\bm{q}$ for each time-reversal, one has (say) $\bm{q} \to \bm{j}^2\bm{q}=-\bm{q}$, matching the $\bm{T}^2=-1$ operation from ordinary quantum physics. But from a classical perspective, such an equation appears baffling; surely if one performs two time-reversals on any given history $\bm{q}(t)$, it should be a logical imperative that one recovers the original $\bm{q}(t)$, not a phase-reversed version. A solution to this quantum-classical disconnect could plausibly lie in the ambiguity of exactly \textit{which} quaternion should be used to implement time-reversal -- or better yet, a removal of this ambiguity entirely.
Looking at the more general (\ref{eq:vspe}), one obvious strategy would be to associate time-reversal directly with a change of the sign of $\hat{\bm{v}}$, rather than any particular left-multiplication on $\bm{q}$. If such a step were meaningful, it would immediately solve the above problems; two time-reversals would then end up back at the original solution, with no extra phase shift.
Unfortunately, at this point, such a step does not seem to be possible, as $\hat{\bm{v}}$ is only defined in terms of the choice of $\bm{u}$ used in the map $M_v$, and the ambiguity of how to choose $\bm{u}$ remains. (One could equally well choose $\bm{u}=\pm\bm{j}$ or $\bm{u}=\pm\bm{k}$ to change the sign of $\hat{\bm{v}}$.) But if one follows through the proposals in the next section, it is possible to extend the dynamics such that $\hat{\bm{v}}$ does indeed have independent physical meaning as a time-odd parameter, raising the possibility that $\bm{T}^2=-1$ could be reconciled with the classical meaning of time-reversal.
\subsection{The Generalized Map}
In Section 3.1, we noted that use of the map $M_i$ singled out $\bm{i}$ as a special quaternion, and a careful reader might have noticed that this was also the case throughout Section 2. The quaternion $\bm{i}$ corresponds to the unit vector $\hat{z}=f[\bm{i}]$, which played several special roles. In Section 2.1, $\hat{z}$ was the starting vector for the natural interpretation of $\bm{q}$ as a rotation. In Section 2.3, the unit vector $\hat{z}$ appears in Eq. (\ref{eq:cone}). Note that it was a left multiplication of $exp(-\bm{i}\gamma/2)$ that changed the global phase, but any pure quaternion other than $\bm{i}$ in the exponent led to a non-unitary transformation of $\bm{q}$.
It is straightforward to generalize Section 2 to work for any map $M_v(\ket{\chi})\equiv \bm{u}M_i(\ket{\chi})$ between spinors and quaternions. Inserting $\bm{q}=\bar{\bm{u}}\bm{q'}$ into (\ref{eq:qhat}), and dropping the primes, one finds simply
\begin{equation}
\label{eq:vqhat}
\hat{\bm{q}} =\bar{\bm{q}}\bm{\hat{v}}\bm{q},
\end{equation}
where $\hat{\bm{v}}$ is defined in (\ref{eq:vdef}). One can continue to use the same map $\hat{q}=f[\hat{\bm{q}}]$ from these pure quaternions to the Bloch sphere defined by (\ref{eq:fdef}) unless one further generalizes the map $M_v$ with a right-multiplication; this would rotate the coordinate system as described at the end of Section 3.1.
Given (\ref{eq:vqhat}), the generalization of the quaternion-multiplication results of Section 2 to the map $M_v(\chi)$ is straightforward. In this general case $\hat{\bm{v}}$ is the special pure quaternion instead of $\bm{i}$. Instead of the special unit vector $\hat{z}$, one has a special unit vector $\hat{v}=f[\hat{\bm{v}}]$. So a generic state $\bm{q}$ describes a rotation from $\hat{v}$ to $\hat{q}$, and it is the vector $\hat{v}$ that is first acted upon by a left rotation $\bm{q}_L$. With this change of $\hat{z}\to\hat{v}$, the results of Section 2 go through for the general map $M_v$.
\section{Expanded Dynamics}
\subsection{A Broken Symmetry}
Moving beyond (\ref{eq:sspe}), the more general Schr\"odinger equation, for any quantum system $\ket{\psi(t)}$, reads
\begin{equation}
\label{eq:sse}
i\hbar \frac{\partial}{\partial t} \ket{\psi}= \bm{H} \ket{\psi},
\end{equation}
where $\bm{H}$ is the Hamiltonian operator, possibly time-varying. If one treated the wavefunction as purely time-even (or purely time-odd), the analysis normally applied to classical physics equations (as described in the previous section) would reveal a formal time-asymmetry. This is because $\bm{H}$ represents energy, a time-even quantity, and time-reversal would lead to a different equation, with different solutions $\ket{\phi}$:
\begin{equation}
\label{eq:trse}
-i\hbar \frac{\partial}{\partial t} \ket{\phi}= \bm{H} \ket{\phi}.
\end{equation}
This issue has a well-known resolution; if the wavefunction is also complex-conjugated along with sending $t\to -t$ (or more-generally, $\ket{\psi'(t)}=\bm{T}\ket{\psi(-t)}$), then this conjugated state $\ket{\psi'}$ will solve the original (\ref{eq:sse}).
But even with this resolution, a broken symmetry remains; what chooses the sign of $i$ in (\ref{eq:sse}), and why should such a choice be necessary in the first place? Almost everyone would agree that this choice is a mere convention, and that equivalent physical predictions would have resulted if Schr\"odinger had picked the opposite sign for $i$ in his original equation. But the fact that such a choice had to be made at all is an indication of a broken symmetry.
Avoiding this choice is possible, but only by going to the second-order (Klein-Gordon) equation;
\begin{equation}
\label{eq:kge}
-\hbar^2\frac{\partial^2}{\partial t^2} \ket{\kappa}= \bm{H}^2 \ket{\kappa},
\end{equation}
with the general solution $\ket{\kappa}=A\ket{\phi}+B\ket{\psi}$. (Here $\ket{\phi}$ and $\ket{\psi}$ are not related to each other, giving $\ket{\kappa}$ twice as many free parameters as either $\ket{\phi}$ or $\ket{\psi}$ alone.) But despite the symmetries and relativity-friendly nature of this equation, it yields solutions with no obvious single-particle interpretation -- in particular, solutions for which $A$ and $B$ are both non-zero. (For another perspective, see \cite{KGWE}.) If such solutions are rejected as unphysical, that forces one to reduce the relevant equation down to either (\ref{eq:sse}) or (\ref{eq:trse}), and that choice would seem to be a \textit{necessary} broken symmetry.
\subsection{The Quaternion Viewpoint}
The arguments in the previous subsection do not properly go through for a spin-1/2 system, and this is most clearly seen from the perspective of quaternionic qubits. For simplicity, first consider the zero-field limit ($\bm{b}\to0$). Eliminating this magnetic field term from (\ref{eq:spe}), one might still appear to have a (quaternionic) $\bm{i}$ present, but as discussed above, this stems from the choice $M_i$ of how one maps the spinor to the quaternion $\bm{q}=M_i(\chi)$. A more-general map $M_v$ yields (\ref{eq:vspe}), or in the zero-field case,
\begin{equation}
\label{eq:vspe2}
\dot{\bm{q}}=-\bm{\hat{v}}\omega_0\bm{q}.
\end{equation}
As before, $\omega_0$ could represent a rest mass ($\omega_0=mc^2/\hbar$), and $\hat{\bm{v}}$ is an arbitrary pure unit quaternion.
The usual link between the standard-form Schr\"odinger equation (\ref{eq:sse}) and the time-reversed Schr\"odinger equation (\ref{eq:trse}) carries over to the quaternions. Specifically, replacing $\hat{\bm{v}}=\bm{i}$ with $\hat{\bm{v}}=-\bm{i}$ is accomplished via the equivalent of time-reversing the spin-state, $\bm{q}\to\bm{jq}$. But in this form it is clear that $\hat{\bm{v}}=\pm\bm{i}$ are not the only two options; depending on the choice of map $M_v$, $\hat{\bm{v}}$ could be \textit{any} pure unit quaternion, spanning a smoothly-connected 2-sphere of possible values. Far from being a clearly broken symmetry (as is the disconnected $\pm$ sign on the complex $i$), one might now ask whether this enlarged and connected symmetry must be broken at all.
Given the above analysis, the symmetry \textit{must} be broken, because one must choose \textit{some} map $M_v$ to interpret $\bm{q}$ and to define $\hat{\bm{v}}$ in (\ref{eq:vspe}) and (\ref{eq:vspe2}). The freedom of such a definition lies on a 2-sphere, and is larger than the usual U(1) phase freedom; this is less surprising if one notices that there is also the same freedom when choosing a particular Hopf fibration \cite{Thurston}. Also note that the generator of a transformation between different choices of $\hat{\bm{v}}$ is a left-multiplication, and is therefore nonunitary, as per Section 2.3. If a gauge is fixed (for example, setting $\hat{\bm{v}}=\bm{i}$ by fiat), then one can ignore this symmetry in the space of left-multiplications and proceed as usual. However, the question remains whether this symmetry must be broken at all, especially as it is not merely choosing a sign convention for a complex $i$.
\subsection{Restoring the Symmetry}
One can avoid breaking this symmetry without ever using a non-unitary transformation, so long as the particular value of $\hat{\bm{v}}$ has a physical meaning and does not appear in (or change) the form of the dynamical equations. As in the case of the Klein-Gordon equation, this goal can naturally be accomplished by extending (\ref{eq:vspe2}) to the second-order dynamical equations familiar from classical field theory;
\begin{equation}
\label{eq:vkge}
\ddot{\bm{q}}=-\omega_0^2\bm{q}.
\end{equation}
In the case of the Klein-Gordon equation, this is thought to be unacceptable because the larger solution space contains solutions that do not reduce to those of the first-order equation. But for the special case of qubits, at least, this concern disappears. So long as one constrains $\bm{q}$ to be a \textit{unit} quaternion, every solution to this equation will also solve (\ref{eq:vspe2}) for some pure unit quaternion $\bm{\hat{v}}$. \cite{CarlosThesis}\footnote{This statement is not technically correct for the zero-magnetic field case, as there is nothing to break the symmetry between right- and left- multiplications; some solutions to (\ref{eq:vkge}) will instead solve $\dot{\bm{q}}=-\omega_0\bm{q}\bm{\hat{v}}$. But this caveat goes away for microscopically-varying magnetic fields; all unit-quaternion solutions to (\ref{eq:bkge}) will solve (\ref{eq:vspe}).} The larger solution space does indeed have new free parameters, but those parameters are the unit quaternion $\bm{u}$ that defines the map $M_v$ and also defines $\bm{\hat{v}}$ via (\ref{eq:vdef}). If all maps to the standard Schr\"odinger-Pauli dynamics are indistinguishable (as implied by the above discussion), then $\bm{u}$ is either a choice of gauge or a hidden parameter.
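As a quick consistency check, for any constant pure unit quaternion $\hat{\bm{v}}$ and any unit quaternion $\bm{q}_0$, the first-order solution $\bm{q}(t)=e^{-\hat{\bm{v}}\omega_0 t}\,\bm{q}_0$ of (\ref{eq:vspe2}) also satisfies (\ref{eq:vkge}), since $\ddot{\bm{q}}=\hat{\bm{v}}^2\omega_0^2\,\bm{q}=-\omega_0^2\,\bm{q}$ with $\hat{\bm{v}}^2=-1$; the extra freedom in (\ref{eq:vkge}) is carried entirely by the choice of $\hat{\bm{v}}$ (and the phase of $\bm{q}_0$) hidden in the initial data.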
This result is not specific to the zero-field case. Adding back the magnetic field, the second-order version of (\ref{eq:vspe}) can be found by taking a derivative and eliminating $\hat{\bm{v}}$;
\begin{equation}
\label{eq:bkge}
\ddot{\bm{q}}+2\dot{\bm{q}}\bm{b}+\bm{q}(\bm{b}^2+\dot{\bm{b}}+\omega_0^2)=0.
\end{equation}
This looks a bit more cumbersome than the first-order (\ref{eq:spe}), but notably it also results from a simple real Lagrangian density (see (\ref{eq:2ndL}) below), and yields solutions that exactly map onto the standard Schr\"odinger-Pauli equation. \cite{WLS} (The unit quaternion condition can also be imposed via $L=0$, and the unit quaternion $\bm{u}$ encodes the new constants of the motion. \cite{CarlosThesis}).
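Explicitly, treating $\hat{\bm{v}}$ as constant in time and writing (\ref{eq:vspe}) as $\dot{\bm{q}}+\bm{q}\bm{b}=-\hat{\bm{v}}\omega_0\bm{q}$, one differentiation gives
\begin{eqnarray}
\ddot{\bm{q}}+\dot{\bm{q}}\bm{b}+\bm{q}\dot{\bm{b}} &=& -\hat{\bm{v}}\omega_0\dot{\bm{q}} \nonumber \\
&=& \hat{\bm{v}}\omega_0\bm{q}\bm{b}+\hat{\bm{v}}^2\omega_0^2\bm{q} \nonumber \\
&=& -(\dot{\bm{q}}+\bm{q}\bm{b})\bm{b}-\omega_0^2\bm{q} \ , \nonumber
\end{eqnarray}
where the second equality uses (\ref{eq:vspe}) again for $\dot{\bm{q}}$, and the third uses $\hat{\bm{v}}\omega_0\bm{q}=-(\dot{\bm{q}}+\bm{q}\bm{b})$ together with $\hat{\bm{v}}^2=-1$; collecting terms reproduces (\ref{eq:bkge}).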
This second-order equation on a 4-component quaternion raises the question of whether this is an alternate path to Dirac's extension of 2-spinors to 4-spinors. After all, a first-order differential equation on a 4-component complex spinor has solutions with 8 free real parameters, just like the solutions to (\ref{eq:bkge}). But at this level the only constraint on a Dirac spinor is an overall normalization, leaving 7 available parameters (6 if one ignores the global phase). Here, apart from normalization at some reference time $\bm{q}(t\!\!=\!\!0)=\bm{q_0}$, the solution space is constrained by the additional normalization conditions $\frac{d}{dt}|\bm{q}(t)|\big|_{t=0}=0$ and $\frac{d^2}{dt^2}|\bm{q}(t)|\big|_{t=0}=0$ (so that the unit norm is preserved), reducing the solution space to 5 free real parameters, counting the phase. And every one of these solutions can be made to map back to the original spin-1/2 system; there is no room for antimatter solutions, even in this expanded space.
Of these five parameters, three can be made to correspond to $\bm{q_0}$. The remaining 2 parameters are encoded in $\dot{\bm{q}}(t=0)$, and are of course time-odd; these can be made to correspond to $\hat{\bm{v}}$, and they determine which map $M_v$ should be used to interpret ${\bm{q}}$. (There is also a time-odd (dynamic) phase term in $\bm{u}$, but this naturally combines with the time-even (geometric) phase term in $\bm{q}_0$ to determine a single parameter that corresponds to the net global phase.)
The immediate result of this expanded dynamics is that $\hat{\bm{v}}$ now encodes the time-odd parameters, and its sign should be changed upon time-reversal. This not only further simplifies the time-reversal analysis of (\ref{eq:vspe}) above, but provides a non-operator technique for time-reversal, such that two time-reversals always exactly cancel. Further implications of this expanded dynamical equation (\ref{eq:bkge}) will be discussed in Section 5.3.
\section{Discussion}
\subsection{Summary of Basic Results}
Most of the results from the first three sections do not require or imply any new physics; they simply follow from a reversible map $M_i[\ket{\chi}]=\bm{q}$ between spinors and quaternions. These results will be of the most interest to the widest audience (even those not interested in foundations) and they will be summarized here first. A discussion of the more speculative implications will follow.
In the traditional spinor representation there is a clear mathematical distinction between a state $\ket{\chi}={a \choose b}$ and a unitary operator that acts on this state as a 2x2 matrix $\bm{U}$. In addition, there are non-unitary operators that cannot even be written in matrix form, such as the time-reversal operator $\bm{T}$.
Mapping spinors onto the unit quaternion $\bm{q}=Re(a)+\bm{i}Im(a)+\bm{j}Re(b)+\bm{k}Im(b)$ reframes all of these distinctions, as both states and operators can now be written as unit quaternions. There are still more operators than states, because the most general linear transformation of $\bm{q}$ requires both left- and right- multiplications, $\bm{q}'=\bm{q}_L\, \bm{q} \,\bm{q}_R$. Still, it is notable that all unitary operators can be represented as a quaternion right-multiplication only ($\bm{q}'=\bm{q} \bm{q}_R$), so long as one does not demand control over the global phase. (The most general unitary 2x2 matrix has four real parameters, and the unit quaternion $\bm{q}_R$ has three; the missing parameter can be traced to the global phase of $\bm{q}'$.)
Since all 2x2 unitary operators $\bm{U}$ correspond to a rotation on the Bloch sphere, all quaternionic right multiplications also correspond to such a rotation. Specifically, right-multiplying by $\bm{q}_R=exp(-\hat{\bm{n}} \gamma/2)$ is a rotation around the $\hat{n}$ axis by an angle $\gamma$, where $\hat{n}=f[\hat{\bm{n}}]$ as given by (\ref{eq:fdef}).
Furthermore, couched in the language of quaternions, the state itself is effectively just another unitary operator. In particular, the state $\bm{q}=exp(-\hat{\bm{n}} \gamma/2)$ is perhaps most naturally interpreted as a \textit{rotation} rather than a vector, a rotation that takes the z-axis ($\hat{z}$) to the state's standard vector representation on the Bloch sphere ($\hat{q}$). There are many such rotations that will result in any given state vector, exactly corresponding to the many possible global phases of $\bm{q}$.
The global phase can be shifted by a quaternionic \textit{left} multiplication, $\bm{q}'=exp(\bm{i}\alpha) \bm{q}$. Other left multiplications by unit quaternions $\bm{q}_L$ correspond to non-unitary operators. In particular, any left-multiplication of the form shown in (\ref{eq:Tdef}) corresponds to the anti-unitary time-reversal operator $\bm{T}$. Other non-unitary operators (which have found use in quantum information theory) are guaranteed not to change the normalization of $\bm{q}$ so long as $\bm{q}_L$ is also a unit quaternion. Such operators take pure states to mixed states, because the resulting state vector is as ill-defined/unknown as the global phase of $\bm{q}$.
Besides simplifying the dynamic equations of a spin state in a magnetic field, the ability to implement anti-unitary transformations via left-multiplication offers a view of time-reversal compatible with a classical perspective. Specifically, it becomes far more straightforward to reverse all of the time-odd parameters in the dynamical equations, making the time-symmetry more clearly evident. Although there are many ways to implement time-reversal via a left-multiplication, all of them conform to the usual $\bm{T}^2=-1$ if the \textit{same} left-multiplication is applied twice.
\subsection{The possible importance of global phase}
The results summarized in Section 5.1 hold whether or not global phase is a mere gauge, but the status of global phase is important for the results discussed below. Therefore, a short discussion of this topic seems appropriate here.
The global phase of a single-particle quantum state is either a choice of gauge or an unknown hidden variable. Although most physicists have come down in favor of the former option, there is no experimental evidence either way. Indeed, we cannot even probe down to the Compton scale at which these phases would fluctuate. (For an electron, this phase frequency is $\omega_0=m_ec^2/\hbar$, and $\omega_0^{-1}\approx 10^{-21}\,sec$ is several orders of magnitude shorter than the shortest laser pulses.) It is rare to even see this $exp(-i\omega_0 t)$ oscillation explicitly in quantum equations, because it is typically removed \textit{on the assumption} that global phases are irrelevant.
Of course, photons have lower-frequency oscillations than electrons, but this point only sheds further doubt on the notion that this phase is mere gauge. In the classical limit, the global phase of an electromagnetic wave is indeed meaningful, and in the quantum limit phase issues are necessarily addressed via quantum field theory. The failure of quantum-mechanical states to fully describe photons is arguably an indication that phases are a bit more important than quantum mechanics would have us believe.
Finally, note that even in the absence of a measurement on the time-scale of an oscillation, another way to probe oscillations is via a reference oscillator. And of course, there is an enormous body of experimental evidence that such relative phases are indeed meaningful. One simple explanation for this fact would be that single-particle states have a meaningful global phase, and such experiments are measuring relative values of this phase. Unfortunately, this analysis is confounded by the tensor-product structure of multiparticle quantum states, making this point inconclusive. Still, according to the so-called $\psi$-epistemic approaches to quantum theory \cite{Spekkens}, this tensor product structure naturally arises for states of knowledge, not the underlying (hidden) states of reality. And with the experimental fact of relative phase measurements requiring some underlying explanation, $\psi$-epistemic approaches (at least) might be more inclined to see phase as a hidden variable rather than mere gauge.
These arguments are certainly not conclusive, but if global phase \textit{could} be a hidden variable, it is certainly not advisable to immediately dismiss it up front. And if one does not discard the phase, the quaternion form of spinors is arguably the best way to see how the phase is interrelated with the qubit. (Indeed, the fact that these two can not be cleanly separated is another reason to keep the phase.) The chief implication of this viewpoint is that it appears more natural to extend the dynamics, as outlined in Section 4.3; this issue will now be discussed in detail.
\subsection{Second-Order Qubits}
Sections 3 and 4 demonstrated that when the standard dynamical equations for a spin-1/2 state in a magnetic field are written in quaternionic form, the first-order equations reveal a broken symmetry. Namely, one particular pure unit quaternion $\hat{\bm{v}}$ must be singled out from all others (for the map $M_i$ used in most of the above analysis, this corresponds to $\hat{\bm{v}}=\bm{i} $.)
Another way to see this broken symmetry is via the Lagrangian that would generate the Schr\"odinger-Pauli equation. For a spin-1/2 state $\ket{\chi}$, given an arbitrary Hamiltonian $\bm{H}$, the inner-product form of the corresponding Lagrangian \cite{LinckThesis} can be written as
\begin{equation}
\label{eq:ketL}
L_1(\ket{\chi},\ket{\dot{\chi}})=\bra{\chi}\bm{H}\ket{\chi}-\hbar\, Im\ip{\dot{\chi}}{\chi}.
\end{equation}
Taking the imaginary part of this last inner product may look reasonable in such a form, but the inner product structure looks quite unnatural when framed in terms of quaternions \cite{Adler}. In quaternionic form, under the map $M_i$, the last term in (\ref{eq:ketL}) looks instead like $\hbar \, Re(\bm{i}\dot{\bar{\bm{q}}} \bm{q})$, where the special pure quaternion $\bm{i}$ makes an explicit appearance.
The ultimate source of this broken symmetry is the very $\bm{S}^3\to \bm{S}^2$ Hopf fibration procedure that motivated this paper. There are an infinite number of ways to reduce a 3-sphere to a 2-sphere, each corresponding to a particular choice of $\hat{\bm{v}}$. And crucially, this choice must be made before the Lagrangian can even be written down. In other words, the symmetry that relates the possible different Hopf fibrations is not evident at the level of the Lagrangian or the dynamics of $\ket{\chi}$; it is only evident on the higher-level representation of $\bm{S}^3$, where the unit quaternions reside.
Furthermore, this symmetry need not be broken at all. So long as one is willing to extend the Lagrangian (and dynamics) to a second-order form, it becomes easy to write a harmonic-oscillator-like Lagrangian in terms of quaternions, without reference to any particular Hopf fibration:
\begin{equation}
\label{eq:2ndL}
L_2(\bm{q},\dot{\bm{q}})=\frac{1}{2} \left\{ |\dot{\bm{q}}+\bm{qb}|^2-\omega_0^2|\bm{q}|^2 \right\}.
\end{equation}
Remarkably, if $\bm{q}$ is constrained to remain a unit quaternion (which can be imposed via $L_2=0$), there is no obvious new physics implied by this higher-order form. Of course, $\bm{q}(t)$ will have a richer structure, in that it will obey the second-order differential equation (\ref{eq:bkge}) and it will require more initial data to solve. (Instead of merely an initial value of $\bm{q}$, it will require initial values of both $\bm{q}$ and $\dot{\bm{q}}$.) But these additional parameters turn out to be equivalent to the original (arbitrary) choice of $\hat{\bm{v}}$, so \textit{whatever} they happen to be, the resulting dynamics can always be cast back into the first-order form (\ref{eq:vspe}). The choice of map $M_v$ between spinors and quaternions is no longer an arbitrary choice, but is determined by the now-meaningful (and effectively hidden) parameter $\hat{\bm{v}}$.
As discussed in Section 4.3, this expanded dynamics does not encompass any new solutions that might be interpreted as antimatter, and therefore this is not a disguised form of the Dirac equation. Looking at the level of the Lagrangians (\ref{eq:ketL}) and (\ref{eq:2ndL}), it seems this procedure is instead analogous to the extension from the (first-order) Dirac equation to the (second-order) Feynman-Gell-Mann equation, or ``second order fermions'' as they are known in quantum field theory \cite{SOF1,SOF2}. The above proposal has no spatial component in the equation, merely spin; through this analogy, one might call the solutions to (\ref{eq:bkge}) ``second order qubits''.
The most obvious implication of these second order qubits is that the hidden parameter space is much larger than a mere global phase. Now it also includes the two free parameters in $\hat{\bm{v}}$. Indeed, with 3 free hidden parameters now corresponding to the same point on the Bloch sphere, the hidden sector is now \textit{larger} than the measurable sector. (If one considers the phase ``half-measurable'', via relative phase measurements, then this might fall under the umbrella of theories in which one can know exactly half of the ontological parameters, as in \cite{Spekkens}.) A future publication will discuss potential uses of this large hidden variable space in the context of entangled qubits.
A more immediate consequence of this formalism is that it strongly indicates that quantum states evolving like $exp(+i \omega_0 t)$ should not be interpreted as having less energy than standard $\exp(-i \omega_0 t)$ states, but instead exactly the same energy. Classical physics is perfectly clear on this fact (there are no negative-energy-density classical fields), but the single-time-derivative form of (\ref{eq:sse}) has obscured the time-even nature of energy when it comes to quantum systems. Second order (quaternionic) qubits, on the other hand, have a time evolution that goes like $\exp(-\hat{\bm{v}}\omega_0 t)$, demonstrating a smoothly continuous set of solutions that pass from $\hat{\bm{v}}=+\bm{i}$ to $\hat{\bm{v}}=-\bm{i}$. In order for the sign of the energy to flip, one of the intermediate solutions must either have zero energy or some strange discontinuity that would destroy the above symmetries.
Finally, it is worth stressing that when going from first-order to second-order, despite the dramatic change in the Lagrangian and the dynamical equations, there are remarkably few physical consequences. Given the $|\bm{q}|=1$ normalization constraint, there are no new spurious solutions that cannot be interpreted as a standard spin-1/2 state, no unusual dynamics that would lead to a new prediction. In fact, even though this procedure was motivated by viewing the global phase as more than mere gauge, at this point there seems no reason why one could not view the entire hidden parameter sector (the phase plus $\hat{\bm{v}}$, or $\bm{u}$) as a new, larger gauge to be fixed. Such a project is beyond the scope of this paper, but would be interesting to explore.
\section{Conclusions}
Although it is standard practice to remove the global phase of a given spinor, there is no continuous way to do this to the space of \textit{all} spinors, even if one separately keeps track of a phase parameter $\alpha$. This means that one cannot choose phase factors for all qubits that would vary continuously over the entire Bloch sphere. \cite{Urbantke}
One possible reading of this topological fact is that the global phase of a spin-1/2 state should be treated as mere gauge, simply because it cannot be universally defined. But following this logic, there should be no reason to use spinors at all; one would simply represent spin-1/2 states as points on a 2-sphere, and use SO(3) rather than SU(2).
The reason this is not done is that it would throw away valuable phase information; for example, the geometric phase accumulated by a precession around the Bloch sphere. (Again, considering that these Berry phases are in fact measurable, it seems reckless to assume they are a meaningless gauge.)
Instead, we argue that a cleaner approach is to not remove the global phase at \textit{any} stage of the analysis. Given this, the most natural mathematical object to encode the state space of a spin-1/2 particle is a unit vector on a 3-sphere, or a unit quaternion. Symmetries of the state space are then the same as the symmetries on the 3-sphere.
But from this starting point, it appears difficult to map a unit quaternion to a (well-defined) spin-state on the Bloch sphere without breaking these very symmetries. Specifically, choosing one particular Hopf fibration is equivalent to choosing a special pure quaternion, which then makes the original phase look discontinuous over the reduced state space (perhaps encouraging one to again discard it).
Remarkably, there is an alternate path, that does not require breaking any symmetries or discarding any phases. The essential idea is to expand the state space to include \textit{two} quaternions, orthogonal to each other on the 3-sphere. (These two quaternions correspond to $\bm{q}$ and $\dot{\bm{q}}$ for second order qubits, and their orthogonality $Re(\bar{{\bm{q}}}\dot{\bm{q}})\!\!=\!\!0$ ensures the $|\bm{q}(t)|=1$ normalization is preserved.) For a given $\bm{q}$, the allowed values of $\dot{\bm{q}}$ then lie on a $2$-sphere, and $\dot{\bm{q}}$ effectively encodes \textit{which} Hopf fibration one should use to map $\bm{q}$ to the Bloch sphere.
After this map has been performed, the same dynamics on the Bloch sphere is recovered, no matter which particular $\dot{\bm{q}}$ generated the map in the first place. From this perspective the possible values of $\dot{\bm{q}}$ might be seen as an enlarged gauge group. But given the real second-order Lagrangian (\ref{eq:2ndL}) that naturally generates the equations of motion relating $\bm{q}$ and $\dot{\bm{q}}$, it would be a stretch to treat the former as ontological and the latter as a gauge. An alternative viewpoint is that $\dot{\bm{q}}$ is effectively a hidden variable, one that may find uses in novel approaches to quantum foundations.
One minimal example of a new direction that may be inspired by such a viewpoint results from rewriting the traditional Bloch sphere vector $\hat{q}=f[\hat{\bm{q}}]$ in terms of the canonical momentum of the Lagrangian $\bm{p}=\partial L_2/\partial \bm{\dot{q}}$. From (\ref{eq:2ndL}) one finds $\bm{p}=\overline{\bm{\dot{q}}+\bm{qb}}$; then using (\ref{eq:vqhat}) and the effective dynamical equation (\ref{eq:vspe}), it transpires that $\hat{\bm{q}}$ is simply $\bm{pq}/\omega_0$. In other words, in the second-order qubit framework, the traditional quantum state $\hat{q}=f[{\bm{pq}}/\omega_0]$ naturally encodes the very underlying phase-space product crucial to the ``old'' quantum theory, but does not encode which particular phase-space orbit is hidden in $\bm{q}(t)$ and $\bm{p}(t)$.
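(For completeness, the short calculation behind this identity: by (\ref{eq:vspe}), $\dot{\bm{q}}+\bm{q}\bm{b}=-\hat{\bm{v}}\omega_0\bm{q}$, so $\bm{p}=\overline{\dot{\bm{q}}+\bm{q}\bm{b}}=-\omega_0\bar{\bm{q}}\bar{\hat{\bm{v}}}=\omega_0\bar{\bm{q}}\hat{\bm{v}}$, and therefore $\bm{p}\bm{q}=\omega_0\,\bar{\bm{q}}\hat{\bm{v}}\bm{q}=\omega_0\,\hat{\bm{q}}$ by (\ref{eq:vqhat}).)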
Even without enlarging the state space to this extent, viewing spinors in quaternion form has other advantages, most notably a straightforward way to implement time-reversal via left-multiplication. More general non-unitary transformations also become easily available, which may be of interest to the field of quantum information (as well as any foundational proposals in which pure states naturally become mixed, via some new non-unitary process). Finally, note that this recasting of quantum states allows pure states to have the same mathematical structure as generic (but phaseless) unitary operators; these can both correspond to unit quaternions. In quaternion form, then, a spin state is more naturally viewed as a \textit{rotation}; perhaps unsurprising, given that these states encode angular momentum, but an interesting perspective nonetheless.
\section*{Acknowledgements}
The authors would like to specifically thank Carlos Salazar-Lazaro for introducing KW to quaternions in this context, and also Rebecca Linck for tireless preliminary analysis. Further thanks go to Nick Murphy and Jerry Finkelstein.
\\
\\
\section*{Appendix: Quaternions}
A quaternion is a number with one real and three imaginary components $(\bm{i,j,k})$. First described by William Rowan Hamilton in 1843, quaternions obey the following rules:
\begin{subequations}
\begin{align}
\bm{i}^2 = \bm{j}^2 = \bm{k}^2 = \bm{ijk} = -1 \nonumber \\
\bm{ij} = \bm{k} \hspace{8mm} \bm{ji} = -\bm{k} \nonumber \\
\bm{jk} = \bm{i} \hspace{8mm} \bm{kj} = -\bm{i} \nonumber \\
\bm{ki} = \bm{j} \hspace{8mm} \bm{ik} = -\bm{j} \nonumber
\end{align}
\end{subequations}
Note that quaternions do not in general commute.
The conjugate of a quaternion, $\bar{\bm{q}}$, has the same real component as $\bm{q}$, but opposite signs for each of the three imaginary components. When conjugating a product, one can use $\overline{\bm{pq}}=\bar{\bm{q}}\bar{\bm{p}}$, but it is crucial to change the multiplication order, just as in a Hermitian conjugate of a product of matrices. A unit quaternion is defined as $|\bm{q}|^2 \equiv \bm{q}\bar{\bm{q}}=1$. (The norm $|\bm{q}|$ is the square root of the real value $\bm{q}\bar{\bm{q}}$.) Explicitly,
\begin{eqnarray}
\bm{q} = A + \bm{i}B + \bm{j}C + \bm{k}D \nonumber \\
\bar{\bm{q}} = A - \bm{i}B - \bm{j}C - \bm{k}D \nonumber \\
| \bm{q} |^2 = \bm{q} \bar{\bm{q}} = \bar{\bm{q}} \bm{q} = A^2 + B^2 + C^2 + D^2. \nonumber
\end{eqnarray}
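For readers who prefer to check these rules numerically, a minimal Python sketch (with quaternions stored as $(A,B,C,D)$ arrays; the two sample quaternions below are arbitrary) is:
\begin{verbatim}
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (A, B, C, D) arrays."""
    pw, px, py, pz = p; qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qconj(q):
    return q*np.array([1.0, -1.0, -1.0, -1.0])

ONE = np.array([1.0, 0.0, 0.0, 0.0])
I = np.array([0.0, 1.0, 0.0, 0.0])
J = np.array([0.0, 0.0, 1.0, 0.0])
K = np.array([0.0, 0.0, 0.0, 1.0])

# Hamilton's defining relations: i^2 = j^2 = k^2 = ijk = -1
for sq in (qmul(I, I), qmul(J, J), qmul(K, K), qmul(qmul(I, J), K)):
    assert np.allclose(sq, -ONE)
assert np.allclose(qmul(I, J), K) and np.allclose(qmul(J, I), -K)  # non-commutativity

p = np.array([0.5, -1.0, 2.0, 0.25]); q = np.array([1.5, 0.5, -0.5, 3.0])
assert np.allclose(qconj(qmul(p, q)), qmul(qconj(q), qconj(p)))  # conjugate reverses order
assert np.allclose(qmul(q, qconj(q)), np.dot(q, q)*ONE)          # |q|^2 = A^2+B^2+C^2+D^2
print("all checks passed")
\end{verbatim}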
A pure quaternion has no real component. Pure unit quaternions therefore have two free parameters, and can map to a unit vector on a 2-sphere. In this paper, pure unit quaternions are generally notated as $\hat{\bm{v}}$. (This is distinct from unit vectors on the Bloch sphere which are not bold; $\hat{v}$.)
Multiplying two unit quaternions always results in another unit quaternion, because $(\bm{pq})(\bm{\overline{pq}}) = \bm{pq\bar{q}\bar{p}}=\bm{p\bar{p}}=1$. Note that terms of the form $\bm{u} \hat{\bm{v}} \bm{\bar{u}}$ do not have a real component, even if $Re(\bm{u})\ne 0$. For example, if $\hat{\bm{v}}=\bm{i}$,
\begin{eqnarray}
\begin{aligned}
\bm{q} = \bm{ui\bar{u}} =& (A + \bm{i}B + \bm{j}C + \bm{k}D)\bm{i}( A - \bm{i}B - \bm{j}C - \bm{k}D) ,\nonumber \\
Re( \bm{q} ) =& -AB + AB -CD +CD = 0.
\end{aligned}
\end{eqnarray}
Thus, if $\bm{u}$ is a unit quaternion, $\bm{u} \hat{\bm{v}} \bm{\bar{u}}$ is guaranteed to be a \textit{pure} unit quaternion.
\subsection*{Exponential Quaternions}
Euler's formula can be generalized to quaternions as
\begin{equation}
\label{eq:exp}
e^{\hat{\bm{v}}\theta} \equiv cos(\theta) +\hat{\bm{v}}sin(\theta) .
\end{equation}
As long as $\hat{\bm{v}}$ is a pure unit quaternion, $exp({\hat{\bm{v}}\theta})$ will also be a \textit{unit} quaternion, but it will not be pure unless $cos(\theta)=0$. When multiplying two exponentials together, in general one cannot simply add exponents. This is best seen by expanding the exponentials using (\ref{eq:exp}). The exception is when both exponentials use the same $\bm{\hat{v}}$, in which case the angles are indeed additive.
Every unit quaternion can be written in the exponential form (\ref{eq:exp}). Note that given the full 2-sphere of possible pure unit quaternions $\hat{\bm{v}}$, even if the angle $\theta$ is restricted as $-\pi<\theta\le\pi$, there are still two exponential forms that map to the same unit quaternion, as $exp[\hat{\bm{v}}\theta] = exp[-\hat{\bm{v}}(-\theta)]$.
The most general, norm-preserving transformation of a quaternion involves both a left and a right multiplication by unit quaternions, $\bm{q}' = e^{\bm{\hat{u}} \phi}\bm{q}e^{\bm{\hat{v}} \theta}$. (One cannot get to any given $\bm{q}'$ via a left- or a right- multiplication alone.) To invert this transformation, one does not interchange the left- and right- terms, but merely conjugates them; $\bm{q} = e^{\text{-}\bm{\hat{u}} \phi}\bm{q}'e^{\text{-}\bm{\hat{v}} \theta}$.
Pure unit quaternions $\hat{\bm{v}}$ can be easily mapped to a vector $\hat{v}$ on a unit 2-sphere, either using $\hat{v}=f[\hat{\bm{v}}]$ as defined in (\ref{eq:fdef}) or another invertible map. Many of the results in this paper hinge on the fact that a rotation of $\hat{v}$ around some axis $\hat{n}$ by an angle $\theta$ can easily be effected by quaternionic multiplication. The quaternion corresponding to the rotation axis is $\hat{\bm{n}}=f^{-1}[\hat{n}]$, and the rotation is equivalent to the multiplication $exp(\hat{\bm{n}}\theta/2)\, \hat{\bm{v}} \,exp(-\hat{\bm{n}}\theta/2)$. Mapping the resulting pure quaternion back to the 2-sphere will reveal the rotated vector. (If one imagines this rotation in the space of unit pure quaternions, no mapping is required.)
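A minimal Python check of this rotation rule (using the same $(A,B,C,D)$ array convention as above, with an arbitrarily chosen axis and angle) is:
\begin{verbatim}
import numpy as np

def qmul(p, q):
    pw, px, py, pz = p; qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qexp(vhat, theta):
    """exp(vhat*theta) = cos(theta) + vhat*sin(theta) for a pure unit quaternion vhat."""
    return np.cos(theta)*np.array([1.0, 0.0, 0.0, 0.0]) + np.sin(theta)*vhat

I = np.array([0.0, 1.0, 0.0, 0.0])
J = np.array([0.0, 0.0, 1.0, 0.0])
K = np.array([0.0, 0.0, 0.0, 1.0])

# Rotate the pure quaternion j about the axis i by theta = pi/2:
theta = np.pi/2
rotated = qmul(qmul(qexp(I, theta/2), J), qexp(I, -theta/2))
print(np.allclose(rotated, K))   # True: j is carried into k by a quarter turn about i
\end{verbatim}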
\subsection*{Pauli Matrices vs i,j,k}
The Pauli matrices are defined as follows:
\begin{eqnarray}
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \hspace{5mm}
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \hspace{5mm}
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \nonumber
\end{eqnarray}
\begin{equation}
\label{eq:vecs}
\vec{\sigma} = \sigma_x \hat{x} + \sigma_y \hat{y} + \sigma_z \hat{z}
\end{equation}
An important property of the Pauli matrices is their relation to rotations, as demonstrated in Eqn (\ref{eq:Rn}). But more relevant to this paper is the quantity $-i\vec{\sigma}$. These matrices, call them $u_n = -i\sigma_n$, obey the same algebra as the imaginary quaternions $\bm{i}$, $\bm{j}$, and $\bm{k}$. From our convention defined via the map $M_i$, we have $u_x$ $\Leftrightarrow$ $\bm{k}$, $u_y$ $\Leftrightarrow$ -$\bm{j}$, and $u_z$ $\Leftrightarrow$ $\bm{i}$. This gives rise to the equivalent commutation relations:
\begin{eqnarray}
\begin{aligned}\
[u_x,u_y] = 2u_z \hspace{7mm} [u_y,u_z] &= 2u_x \hspace{7mm} [u_z,u_x] = 2u_y\nonumber \\
[k,\text{-}j] = 2i \hspace{10mm} [\text{-}j,i] &=2k \hspace{10mm} [i,k] = \text{-}2j \\
\end{aligned}
\end{eqnarray}
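These relations can be confirmed directly with a few lines of Python (a small numpy check; nothing here depends on any particular spinor):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
ux, uy, uz = -1j*sx, -1j*sy, -1j*sz
one = np.eye(2, dtype=complex)
comm = lambda a, b: a @ b - b @ a

# The u_n obey Hamilton's relations: u_x^2 = u_y^2 = u_z^2 = u_x u_y u_z = -1 ...
for m in (ux @ ux, uy @ uy, uz @ uz, ux @ uy @ uz):
    assert np.allclose(m, -one)

# ... and the commutation relations quoted above.
assert np.allclose(comm(ux, uy), 2*uz)
assert np.allclose(comm(uy, uz), 2*ux)
assert np.allclose(comm(uz, ux), 2*uy)
print("all checks passed")
\end{verbatim}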
As noted in the first line of Table 1, the operation of $u_x$ on a spinor $\chi$ is equivalent to right-multiplication of $\bm{q}=M_i[\chi]$ by $\bm{-k}$. (The map $M_i$ between $\chi$ and $\bm{q}$ is defined by (\ref{eq:qdef}).) The same pattern holds in the table for $u_y$ $\Leftrightarrow$ $\bm{j}$, and $u_z$ $\Leftrightarrow$ $-\bm{i}$.
\section*{References}
\section{Introduction}
The annual Snook Prize awards honor our late Australian colleague Ian Snook ( 1945-2013 )
and his contributions to computational statistical mechanics during his 40-year
tenure at the Royal Melbourne Institute of Technology. The Snook Prize Problem for 2017
was to explore and discuss the uniqueness of the chaotic sea, the spatial variation of
local small-system values of the kinetic temperature at equilibrium,
the time required for Lyapunov-exponent pairing and its dependence on the integrator,
as well as the possibility of the long-time {\it absence} of exponent pairing\cite{b1}.
The Lyapunov exponents, two for each Hamiltonian degree of freedom, characterize the
strength of chaos in classical dynamical systems.
Kenichiro Aoki took the Snook Prize challenge seriously and explored the behavior of
$\phi^4$-model exponent pairs, including the dependence of pairing on energy,
symmetry, and system size. In his work\cite{b2} Aoki asked whether or not chaotic
ergodicity implies a uniform temperature distribution in phase space, and whether or
not disparities in kinetic temperature could persist for long in isoenergetic chaotic
Hamiltonian steady states,
$$
mkT_i = \langle \ p_i^2 \ \rangle \stackrel{?}{=}
\langle \ p_j^2 \ \rangle = mkT_j \ .
$$
Persistent temperature differences might well appear to enable violations of the Second Law
of Thermodynamics through the spontaneous generation of enduring temperature gradients.
Aoki demonstrated the existence
of microcanonical temperature differences along with a lack of long-time exponent
pairing. He found faster pairing for exponents at higher temperatures. He explored
the effects of symmetries on the dynamics of anharmonic chains and on their exponents.
He pointed out the short-term dependence of the exponents on the ordering of the
variables using the Gram-Schmidt orthonormalization algorithm. We have judged his work
worthy of the 2017 Snook Prize, awarded in April 2018.
\section{The $\phi^4$ Model Generates Chaotic Dynamical Systems}
\subsection{Chaos with Just Two One-Dimensional Bodies}
Kenichiro Aoki and Dimitri Kusnezov\cite{b3} pioneered investigations of the $\phi^4$
model as a prototypical ideal material supporting Fourier heat conduction. The model
augments the harmonic chain with the addition of attractive quartic tethering potentials which
bind the particles to individual lattice sites. The
simplest interesting chaotic case involves two masses and two springs. It is shown at the top of
{\bf Figure 1} :
$$
{\cal H} = (1/4)(q_1^4+q_2^4) + (1/2)[ \ p_1^2+p_2^2 + q_1^2 +(q_1-q_2)^2 \ ] \ .
$$
For convenience the masses and spring constants have all been set equal to unity.
The rest positions of both particles define the coordinate origins,
$q_1 = 0 \ ; \ q_2 = 0$. Particle 1 is linked to a fixed boundary to its left with
$\phi_1 = (q_1^2/2)$. Particle 1 is additionally linked to Particle 2 at its right with
$\phi_{12} = (q_1-q_2)^2/2$. {\bf Figure 1} shows the momentum correlation for the two
Particles for four different initial conditions, all of them with total energy ${\cal H} = 6$.
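For readers who wish to reproduce these projections, a minimal Python sketch of the two-body dynamics is given below (classical fourth-order Runge-Kutta with the timestep quoted in the captions; only one of the four initial conditions is shown, and plotting is left to the reader):
\begin{verbatim}
import numpy as np

def deriv(s):
    """s = (q1, q2, p1, p2); forces follow from the Hamiltonian above."""
    q1, q2, p1, p2 = s
    return np.array([p1, p2, -q1**3 - q1 - (q1 - q2), -q2**3 + (q1 - q2)])

def rk4_step(s, dt):
    k1 = deriv(s); k2 = deriv(s + 0.5*dt*k1)
    k3 = deriv(s + 0.5*dt*k2); k4 = deriv(s + dt*k3)
    return s + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def energy(s):
    q1, q2, p1, p2 = s
    return 0.25*(q1**4 + q2**4) + 0.5*(p1**2 + p2**2 + q1**2 + (q1 - q2)**2)

s, dt = np.array([0.0, 0.0, np.sqrt(12.0), 0.0]), 0.001  # {p^2} = (12,0), so H = 6
traj = [s.copy()]
for _ in range(200_000):
    s = rk4_step(s, dt)
    traj.append(s.copy())
traj = np.array(traj)
print(energy(s))                         # remains very close to 6
# traj[:, 2] against traj[:, 3] gives the p2(p1) projection of Figure 1.
\end{verbatim}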
\noindent
\begin{figure}[h]
\includegraphics[width=3.0in,angle=0,bb= 16 71 600 726]{fig1.eps}
\caption{The top row shows $p_2(p_1)$ projections for initial values $\{ \ p^2 \ \} =
(11.6,0.4)$ at the left and $(6,6)$ at right. The bottom row shows $p_2(p_1)$ projections
for initial squared momenta of (12,0) at left and (0,12) at right. Runge-Kutta fourth-order
integration with $dt = 0.001$ and ${\cal H} = 6$ is used in Figures 1-3. The bottom left
projection represents chaos, with a Lyapunov exponent of 0.05. The other three do not. It
appears that the combination (11.4,0.6) {\it is} chaotic with a Lyapunov exponent of order
0.003. The circles in {\bf Figure 1} represent the maximum momenta, $p^2 = 12$.
}
\end{figure}
\noindent
\begin{figure}[h]
\includegraphics[width=3.0in,angle=-90,bb=35 37 580 758]{fig2.eps}
\caption{
The left column shows phase-plane projections $p_1(q_1)$ and the right column shows
Poincar\'e sections $p_2(q_1)$ with $p_1=0$. The initial momenta for the top row are $\{ \ p^2 \ \}
= (12,0)$ and for the bottom row $(11.85,0.15)$. The top row is chaotic with $\lambda_1 = 0.05$
while the bottom row is close to a periodic orbit with $\lambda_1 = 0$. The top-row section suggests a
``fat fractal'' with infinitely-many tori threading through its perforations.
}
\end{figure}
\noindent
\begin{figure}[h]
\includegraphics[width=3.0in,angle=-90,bb=51 65 556 730]{fig3.eps}
\caption{
Here the top row initial momenta are $\{ \ p^2 \ \} = (11.5,0.5)$ and the motion is regular,
with $\lambda_1 = 0$. The bottom row momenta are $(11.4,0.6)$ where chaos or its lack are
still problematic, a region meriting more study. Here the left panels are projections while
the right ones show Poincar\'e penetrations.
}
\end{figure}
Aoki goes well beyond this interesting two-body problem. He considers a variety of $\phi^4$
chain problems in his provocative paper, examining the effects of symmetry and chaos on
equilibrium temperature distributions, Lyapunov instability, and the pairing of the Lyapunov
exponents. At present it is unknown whether or not the spectra become paired for {\it all},
or maybe ``almost all'' initial conditions. The $\phi^4$ model provides a surprisingly rich
testbed for investigating Lyapunov instability.
Poincar\'e sections, two-dimensional cuts through the three-dimensional phase-density in its
four-dimensional embedding space, are shown at the right in {\bf Figure 2}. The upper right
section suggests a distribution
resembling a ball of yarn with holes here and there, with these holes filled with periodic
tori. Although far from ergodic, the one-dimensional trajectory certainly explores most of the
${\cal H}=6$ region for this simplest of initial conditions where a single momentum variable
carries {\it all} of the energy. Despite its two-spring simplicity the model's quartic tethers
are enough to bring out all the general features associated with Hamiltonian chaos.
The Sections of {\bf Figure 3} hint at the ``chains-of-islands'' features typical of the boundaries separating
regular and chaotic regions in sections of Hamiltonian chaos. A series of squared momenta
in the neighborhood of (11.5,0.5) to (11.4,0.6) would make an interesting research
investigation. Aoki also investigated a slightly more complex model with four degrees of freedom
rather than just two. Let us consider it next.
\subsection{Chaos with Four Bodies Subject to Periodic Boundary Conditions}
A four-body periodic chain of $\phi^4$ particles has the Hamiltonian
$$
{\cal H} = (1/4)(q_1^4+q_2^4+q_3^4+q_4^4) + (1/2)(p_1^2+p_2^2+p_3^2+p_4^2) +
$$
$$
(1/2)[ \ (q_1-q_2)^2+(q_2-q_3)^2+(q_3-q_4)^2+(q_4-q_1)^2 \ ] \ .
$$
With the implied periodic boundary conditions the special initial conditions,
$$
q_1=q_2=q_3=q_4 = 0 \ ; \ p_1 = p_2 = -p_3 = p_4 = 2 \ ,
$$
generate an interesting chaotic solution. It is symmetric. Consequently Particles 2
and 4 obey {\it identical} equations of motion. If, as here, the initial conditions
have the proper symmetry, identical dynamics results, with $q_2(t) = q_4(t)$ for all time. The
motion corresponding to these conditions is chaotic. It likely generates a
five-dimensional isoenergetic and chaotic ``fat fractal'', in either six-dimensional
or eight-dimensional phase space, $\Gamma = \{ q,p \}$. Here the $\{q,p\}$ are the
Cartesian coordinate-momentum pairs describing the system dynamics.
{\bf Figure 4} demonstrates the chaotic nature of that symmetric motion. It displays the
convergence process $\langle \ \lambda_1(t) \ \rangle_{t \rightarrow \infty} \longrightarrow
\
|
lambda_1$ . The maximum Lyapunov exponent $\langle \ \lambda_1(t) \ \rangle$ is calculated
by following {\it two} neighboring phase-space trajectories, the ``reference'' and a nearby ``satellite''
in either six or eight-dimensional phase space. The satellite trajectory is kept close to the reference
by a rescaling of its phase-space separation from that reference at
the end of every timestep :
$$
| \ \Gamma_r(t+dt) - \Gamma_s(t+dt) \ | \longrightarrow \delta \ .
$$
This condition is implemented by rescaling the separation to the fixed length
$\delta$ \ :
$$
\Gamma_s(t+dt) \longrightarrow \Gamma_r(t+dt) +
\delta \frac{ \ \Gamma_s(t+dt) - \Gamma_r(t+dt) \ }{| \ \Gamma_s(t+dt) - \Gamma_r(t+dt) \ |} \ .
$$
The ``local'' ``largest'' Lyapunov exponent, $\lambda_1(t)$ for that timestep, is
given in terms of the phase-space offsets before and after the rescaling operation :
$$
| \ \delta \ | \propto e^{\lambda t} \longrightarrow
\lambda_1(t) \equiv (1/dt)\ln[ \ {\delta}_{\rm before}/\delta_{\rm after} \ ] \ .
$$
Note well that what is here called ``largest'' is so only as a time average. The
``largest'' can at times actually be the {\it smallest} !
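A minimal Python sketch of this reference-satellite procedure, applied here to the two-body $\phi^4$ system of {\bf Figure 1} for concreteness (the offset $\delta$, the run length, and the initial condition are illustrative choices), is:
\begin{verbatim}
import numpy as np

def deriv(s):
    """s = (q1, q2, p1, p2) for the two-body phi^4 system of Figure 1."""
    q1, q2, p1, p2 = s
    return np.array([p1, p2, -q1**3 - q1 - (q1 - q2), -q2**3 + (q1 - q2)])

def rk4_step(s, dt):
    k1 = deriv(s); k2 = deriv(s + 0.5*dt*k1)
    k3 = deriv(s + 0.5*dt*k2); k4 = deriv(s + dt*k3)
    return s + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

dt, delta = 0.001, 1.0e-6
ref = np.array([0.0, 0.0, np.sqrt(12.0), 0.0])   # {p^2} = (12,0), so H = 6
sat = ref + np.array([delta, 0.0, 0.0, 0.0])     # satellite, offset by delta
lam_sum, nsteps = 0.0, 1_000_000                 # longer runs converge better
for n in range(nsteps):
    ref = rk4_step(ref, dt)
    sat = rk4_step(sat, dt)
    sep = sat - ref
    dist = np.linalg.norm(sep)                   # |separation| before rescaling
    lam_sum += np.log(dist/delta)/dt             # local lambda_1(t), as defined above
    sat = ref + (delta/dist)*sep                 # rescale back to length delta
print(lam_sum/nsteps)   # cumulative average; should settle near the 0.05 quoted above
\end{verbatim}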
{\bf Figure 4} shows both six-dimensional and eight-dimensional cumulative time
averages of the local largest Lyapunov exponent, as computed with fourth- and
fifth-order Runge-Kutta integrators. The chaos of the $\phi^4$ dynamics guarantees
that different integrators eventually generate different trajectory pairs and
different local Lyapunov exponents. But the ``global'' long-time averages of the
Lyapunov exponents are in statistical agreement with each other for any properly
convergent integrator.
\noindent
\begin{figure}[h]
\includegraphics[width=2.5in,angle=90,bb=142 229 482 697]{fig4.eps}
\caption{
The cumulative ``largest'' Lyapunov exponent describing the four-particle
chaos of a $\phi^4$ system with initial velocities $\{2,2,-2,2\}$ as
computed in six-dimensional (thin lines) and eight-dimensional spaces ( thick lines )
using fourth-order and fifth-order Runge-Kutta integrators. For these simulations,
with 10,000,000,000 timesteps $dt= 0.001$, the reference trajectories agree precisely
while the satellite trajectories show real, but negligibly small, differences.\\
}
\end{figure}
Unlike many simple one-dimensional models augmenting harmonic chains, the $\phi^4$ chains
are tethered to fixed lattice sites by a quartic potential :
$$
\{ \ \ddot q_i = q_{i-1} - 2q_i + q_{i+1} - q_i^3 \ \} \
[ \ {\rm Newtonian} \ \phi^4 \ {\rm Model} \ ] \ .
$$
The tethers kill the ballistic transmission of energy. Over wide ranges of energy and
temperature $\phi^4$ chains exhibit Lyapunov instability ( exponential growth of small
perturbations ) and Fourier conductivity, even for just a few particles. The initial
conditions are not crucial so long as we choose them from a unique chaotic sea. With
uniqueness {\it long-time} averages, either at or away from equilibrium conform to
well-defined long-time averages. Until now investigations of small-system chaos have
typically found a single chaotic sea for a given energy. This apparent simplicity
could be misleading. In Aoki's two-body $\phi^4$ model, pictured in {\bf Figures 1-3},
$\lambda_1$ is 0.05 with the
initial squared momenta $(12.0,0.0)$. For moderate run lengths at the same energy the initial
momenta (11.4,0.6) lead instead to a smaller Lyapunov constant, $\simeq 0.003$. In their
contribution Hofmann and Merker likewise suggest that the chaotic sea is not unique in
a generalized H\'enon-Heiles model with two degrees of freedom.
\subsection{Fourier Heat Flow in Chaotic $\phi^4$ Systems}
Hot and cold boundaries can be introduced into chaotic systems by adding
temperature-dependent ``thermostat'' forces directing the mean kinetic temperatures
of two or more particles to their individual target values
$\{T_i\}$ :
$$
\ddot q_i = F \longrightarrow \ddot q_i = F - \zeta \dot q_i \ {\rm with}
\ \dot \zeta = \dot q_i^2 - T_i \
[ \ {\rm Nos\acute{e}-Hoover} \ \phi^4 \ {\rm Model} \ ] \ .
$$
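A minimal Python sketch of such a thermostatted chain is given below. The chain length, the target temperatures, the choice of thermostatted particles, and the fixed ($q=0$) walls at the chain ends are all illustrative assumptions here:
\begin{verbatim}
import numpy as np

N = 8                        # chain length (illustrative)
T_hot, T_cold = 1.0, 0.5     # target kinetic temperatures (illustrative)
hot, cold = 0, N - 1         # thermostat the first and last particles (an assumption)

def deriv(s):
    """s = (q[0:N], p[0:N], zeta_hot, zeta_cold); fixed q = 0 walls at both ends."""
    q, p = s[:N], s[N:2*N]
    z_hot, z_cold = s[2*N], s[2*N + 1]
    qpad = np.concatenate(([0.0], q, [0.0]))          # fixed boundary sites
    F = qpad[:-2] - 2.0*q + qpad[2:] - q**3            # harmonic neighbours + quartic tether
    dp = F.copy()
    dp[hot]  -= z_hot  * p[hot]                        # Nose-Hoover friction terms
    dp[cold] -= z_cold * p[cold]
    return np.concatenate([p, dp, [p[hot]**2 - T_hot, p[cold]**2 - T_cold]])

def rk4_step(s, dt):
    k1 = deriv(s); k2 = deriv(s + 0.5*dt*k1)
    k3 = deriv(s + 0.5*dt*k2); k4 = deriv(s + dt*k3)
    return s + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

rng = np.random.default_rng(1)
s = np.concatenate([np.zeros(N), rng.normal(size=N), [0.0, 0.0]])
dt = 0.001
for _ in range(100_000):                               # discard a transient
    s = rk4_step(s, dt)
T_acc, nave = np.zeros(N), 400_000
for _ in range(nave):                                  # accumulate <p_i^2> = kT_i (m = k = 1)
    s = rk4_step(s, dt)
    T_acc += s[N:2*N]**2
print(T_acc/nave)    # kinetic-temperature profile between the hot and cold particles
\end{verbatim}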
The heat current that results reduces the dimensionality of the phase-space
distribution from the equilibrium value of $2N$ for $N$ degrees of freedom
by an amount roughly quadratic in the heat current. Reductions of the order
of 50$\%$ have been obtained in steady-state heat flow simulations with a
few dozen particles, one hot, one cold, and the rest purely Newtonian\cite{b4}.
These results follow from measurements of the {\it spectrum} of Lyapunov exponents\cite{b4}.
$\lambda_1(t)$ quantifies the local rate at which nearby pairs of phase-space
trajectories separate. $\lambda_1+\lambda_2$ quantifies the rate at which {\it triangles}
of phase-space trajectories grow or shrink, and $\lambda_1 + \lambda_2 + \lambda_3$
does this for tetrahedra. When such a sum of $n$ Lyapunov exponents has a positive
average, overall growth of $n$-dimensional phase volumes occurs, and at the
rate $\sum_{i \leq n} \lambda_{i}$. A negative sum indicates shrinkage onto a strange
attractor with a dimensionality less than that of the full phase space.
In Hamiltonian systems growth and shrinkage necessarily cancel due to the
time-reversibility of the motion equations. Nonequilibrium simulations with
thermostats are typically time-reversible too. Both $\{ \ \ddot q \ \}$ and the
thermostat forces $\{ \ -\zeta \dot q \ \}$ are even functions of the time.
Nevertheless, in practice the greater abundance of compressive phase-space states
causes symmetry breaking. The invariable shrinkage of the comoving phase volume onto
a nonintegral-dimensional ``fractal'' strange attractor results. In the nonequilibrium
case the spectrum of exponents is {\it always} shifted toward more negative values.
This is the heat-based mechanical explanation of the Second Law of Thermodynamics.
\section{H\'enon-Heiles' Model}
A more mathematically oriented paper by Timo Hofmann and Jochen Merker\cite{b5}
took Honorable Mention in the Snook Prize competition with their study of the
H\'enon-Heiles model in a four-dimensional Hamiltonian phase space :
$$
{\cal H} = (1/2)(p_x^2 + p_y^2 + x^2 + y^2) + x^2y - (y^3/3) \
[ \ {\rm H\acute{e}non-Heiles} \ {\rm Model} \ ] \ .
$$
H\'enon-Heiles models are unsuited to conductivity or to manybody problems
but still have considerable interest. Hofmann and Merker made a plausible argument
for the coexistence of two chaotic seas for a generalized H\'enon-Heiles
model augmented by three quartic, three quintic, and four sextic terms in
the four-variable generalized Hamiltonian.
Hofmann and Merker compared three versions of local Lyapunov exponents for their
model and concluded that the Gram-Schmidt local exponents are not necessarily
paired because their values depend upon the initial conditions. We decided to
award their work an Honorable Mention in view of its interest and their
exploration of a problem area quite different to Aoki's. We congratulate all
three men.
\section{Epilogue and Moral}
For Carol and me it was a challenge to reproduce some of the work of all
the Snook Prize entries. It often takes real effort to dissipate the
uncertainty characteristic of numerical chaos work. But by using reproducible
``random number generators'', straightforward integrators, and careful
descriptions of {\it all} the necessary initial conditions it is still possible to
describe reproducible results, the {\it sine qua non} of physics. We are very
grateful to Ken Aoki, Timo Hofmann, and Jochen Merker for their useful and
interesting contributions, as well as the welcome input from several anonymous referees
and colleagues.
\section{Acknowledgments}
We are particularly grateful to Kris Wojciechowski for stimulating and
overseeing this work and to Giancarlo Benettin, Carl Dettmann, Puneet Patra,
Harald Posch, and Roger Samelson for their patient comments and suggestions
regarding aspects of last year's Snook Prizes. We very much appreciate the
Institute of Bioorganic Chemistry of the Polish Academy of Sciences and the
Pozna\'n Supercomputing and Networking Center for their continued support of
the Ian Snook Additional Prize.
\pagebreak
\section{The problem of peeking}
Consider a scientist trying to test a hypothesis on some huge population of samples $X_1, \dots, X_n$.
The test statistic $f (X_1, \dots, X_n)$ is estimated by drawing a random sample of the data (say $X_1, \dots, X_t$) to compute the conditional expectation $\evp{f (X_1, \dots, X_n) \mid X_{1:t}}{}$. Assuming a null hypothesis with some given $\evp{f (X_1, \dots, X_n)}{}$, a $p$-value $P_t$ is calculated.
A pragmatic practitioner with ample computing resources is primarily limited by the availability of data, gathering more samples with time.
While repeatedly testing all data gathered so far, it is common to ``peek" at the reported $p$-values $P_1, \dots, P_t$ until one is low enough to be significant (say at time $\tau$), and report that $p$-value $P_{\tau} = \min_{s \leq \tau} P_s$, resulting in the reported $p$-value having a downward bias.
Peeking is a form of \textit{$p$-value hacking} that is widespread in empirical science for appealing reasons -- collecting more data after an apparently significant test result can be costly, and of seemingly questionable benefit.
It has long been argued that the statistician's opinion should not influence the degree of evidence against the null -- ``the rules governing when data collection stops are irrelevant to data interpretation" \citep{ELS63} -- and that collecting more data and hence evidence should always help, not invalidate, previous results.
However, standard $p$-value analyses ``depend on the intentions of the investigator" \citep{N00} in their choice of stopping rule.
But it can be proven that for many common tests, repeating the test long enough will eventually lead the scientist to report a significant $p$-value -- classical work recognizes that they are ``sampling to reach a foregone conclusion" \citep{A54}.
The lamentable conclusion is that peeking makes it much more likely to falsely report significance under the null hypothesis.
This problem has been addressed by existing theory on the subject.
A line of work by Vovk and coauthors \citep{V93, SSVV11, VW19} develops the idea of correcting the $p$-values uniformly over time using a ``test martingale," and contains further historical references on this idea.
As viewed within the context of Bayes factors and likelihood ratios, this has also drawn more recent attention for its robustness to stopping \citep{G18, grunwald2019safe}.
Such work rests on a martingale framework for analyzing $p$-values when peeking is performed in such scenarios, described in Section \ref{sec:pval-impl}.
The corrected $p$-value is valid for all times, not just the time it is computed -- seeing \emph{at any time} a value of $\delta$ allows rejection of the null at significance level $\delta$.
This holds irrespective of the details of the peeking procedure.
In a certain sense, this allows us to peer into the future, giving a null model for the future results of peeking.
We build on this to introduce a family of peeking-robust sequential hypothesis tests in Sec. \ref{sec:multreprmart} and \ref{sec:aytest}.
The basic vulnerability of many statistical tests to peeking is that they measure \emph{average} phenomena, which are easily distorted by peeking.
We develop sequential mechanisms for estimating \emph{extremal} functions of a test statistic.
These use quantitative diagnostics that track the risk of future peeking under the null with past information, and lead to a general random walk decomposition of possible independent interest (e.g., Theorem \ref{thm:revbachelier}).
Section \ref{sec:disc} discusses them at length in the context of several previous lines of work.
Most proofs are deferred to the appendix.
\section{Setup: always valid $p$-values}
\label{sec:pval-impl}
Recalling our introductory discussion, a common testing scenario involving a statistic $f$ tests a sample using the conditional mean over the sample: $N_t := \evp{f (X_1, \dots, X_n) \mid X_{1:t}}{}$.
The stochastic process $N$ is a \emph{martingale} because $\forall t: \;\evp{(N_t - N_{t-1}) \mid X_{1:(t-1)}}{} = 0$ \citep{D10}.
Similarly, a \emph{supermartingale} has differences with conditional mean $\leq 0$.
A more general and formal definition conditions on the canonical filtration $\mathcal{F}$ (see \Cref{sec:allproofs}).
A $p$-value is a random variable $P$ produced by a statistical test such that under the null,
$\prp{P \leq s}{} \leq s \;\;\forall s > 0$.
We will discuss this in terms of \emph{stochastic dominance} of random variables.
\begin{definition}
\label{defn:fsd}
A real-valued random variable $X$ (first-order) stochastically dominates another real r.v. $Y$ (written $X \succeq Y$) if either of the following equivalent statements is true \citep{RockafellarRoyset14}:
$(a)$ For all $c \in \RR$, $\prp{X \geq c}{} \geq \prp{Y \geq c}{}$.
$(b)$ For any nondecreasing function $h$, $\evp{h(X)}{} \geq \evp{h(Y)}{}$.
Similarly, define $X \preceq Y$ if $-X \succeq -Y$. If $X \preceq Y$ and $X \succeq Y$, then $X \stackrel{d}{=} Y$.
\end{definition}
In these terms, a $p$-value $P$ satisfies $P \succeq \mathcal{U}$, with $\mathcal{U}$ a $\text{Uniform}([0,1])$ random variable.
This can be described as the quantile function of the test's statistic under the null hypothesis.
The peeker can choose any random time $\tau$ without foreknowledge, to report the value they see as final -- they choose a \emph{stopping time} $\tau$ (see \Cref{sec:allproofs} for formal definitions) instead of pre-specifying a fixed time $t$.
So a peeking-robust $p$-value $H_t$ requires that for all stopping times $\tau$, $H_{\tau} \succeq \mathcal{U}$.
As $\tau$ could be any fixed time, this condition is more strict than the condition on $P$ for a fixed $t$.
$H$ is an inflated process that compensates for the downward bias of peeking.
How is the stochastic process $H$ defined?
There is one common recipe: define $H_t = \frac{1}{M_t}$, using a nonnegative discrete-time (super)martingale $M_t$
with $M_0 = 1$.
This guarantees $H$ is a robust $p$-value process, i.e. $H_{\tau} \succeq \mathcal{U}$ for stopping times $\tau$.
(The reason why is briefly stated here: the expectation $\evp{M_{\tau}}{}$ is controlled at any stopping time $\tau$ by the supermartingale optional stopping theorem (Theorem \ref{thm:optstoppingsupermart}), so $\evp{ M_{\tau} }{} \leq \evp{ M_{0} }{} = 1$.
Therefore, using Markov's inequality on $M_{\tau}$, we have $\prp{ H_{\tau} \leq s}{} = \prp{ M_{\tau} \geq \frac{1}{s}}{} \leq s$. )
Such a ``test [super]martingale" $M_t$ turns out to be ubiquitous in studying sequential inference procedures \citep{SSVV11, VW19}, and is effectively necessary for such inference \citep{RRLK20}.
Therefore, our analysis focuses on a nonnegative discrete-time supermartingale $M_t$
with $M_0 = 1$.
We also use the cumulative maximum $S_{t} := \max_{s \leq t} M_{s}$ and the lookahead maximum $S_{\geq t} := \max_{s \geq t} M_s$.
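To make this recipe concrete, the following simulation sketch uses a Bernoulli likelihood-ratio test martingale (an illustrative choice only, not one of the tests analyzed in this paper) and checks empirically that even a worst-case peeker who reports $\min_t H_t$ keeps the false rejection rate under the null below the nominal level.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def lr_martingale(x, p_alt=0.6, p_null=0.5):
    # Likelihood-ratio test martingale for Bernoulli data under the null p_null:
    # M_0 = 1 and M_t is a running product of per-observation likelihood ratios.
    ratios = np.where(x == 1, p_alt/p_null, (1 - p_alt)/(1 - p_null))
    return np.concatenate([[1.0], np.cumprod(ratios)])

T, reps, level = 1000, 2000, 0.05
false_rejections = 0
for _ in range(reps):
    x = rng.integers(0, 2, size=T)   # data generated under the null
    M = lr_martingale(x)
    H = 1.0/M                        # robust p-value process H_t = 1/M_t
    # Note: min_t H_t = 1/S_T, with S_t = max_{s<=t} M_s the cumulative maximum.
    false_rejections += (H.min() <= level)
print("worst-case false rejection rate:", false_rejections/reps, "<=", level)
\end{verbatim}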
\section{Warm-up: has the ultimate maximum been attained?}
\label{sec:multreprmart}
In the peeking scenario, it suffices to consider times until $\tau_F := \max \lrsetb{ s \geq 0 : M_s = S_s}$, the time of the final attained maximum, because no peeker can report a greater value than they see at this time.
However, $\tau_F$ is not a stopping time because it involves occurrences in the future, so traditional martingale methods do not study it.
Studying $\tau_F$ is a useful introduction to the main results of this paper.
We describe $\tau_F$ by establishing a ``multiplicative representation" of a nonnegative discrete-time (super)martingale $M_t$
(with $M_0 = 1$) in terms of its maxima.
\begin{theorem}[Bounding future extrema with the present]
\label{thm:supermartdecomp}
Define the supermartingale $Z_t := \prp{ \tau_F \geq t \mid \mathcal{F}_t}{}$.
Then with $\mathcal{U}$ a standard $\text{Uniform}([0,1])$ random variable:
\begin{enumerate}[(a)]
\item
\label{item:doobineqpastfuture}
$S_{\geq t} \preceq \frac{M_t}{\mathcal{U}} $. Therefore, $S_{\infty} \preceq \frac{1}{\mathcal{U}}$, and $\forall t$ such that $M_t > 0$, $S_{\infty} \preceq S_t \max \lrp{ 1, \frac{M_t}{S_t} \lrp{ \frac{1}{\mathcal{U}}} } $.
\item
$Z_t \leq \frac{M_t}{S_t}$, with equality if $M$ is a martingale.
\item
Define $ Q_t := \sum_{i=1}^{t} \lrp{ \frac{ M_{i} - M_{i-1} }{S_{i}} } $
and
$ L_t := \sum_{q=1}^{t} M_{q-1} \lrp{ \frac{1}{S_{q-1}} - \frac{1}{S_{q}} }$.
Then the decomposition $Z_t \leq 1 + Q_t - L_t$ holds, with equality for martingale $M$.
Furthermore:
\begin{itemize}
\item
$Q$ is a (super)martingale if $M$ is.
\item
$L$ is a nondecreasing process which only changes when $M$ hits a new maximum.
\end{itemize}
\end{enumerate}
\end{theorem}
$Z_t$ is called the \emph{Az\'{e}ma supermartingale} of $M$ \citep{Azema73}.
Note that $M_{t-1} \leq S_{t-1}$, so that
\begin{align}
\label{eq:logmaxupp}
L_t \leq \sum_{q=1}^{t} S_{q-1} \lrp{ \frac{1}{S_{q-1}} - \frac{1}{S_{q}} } = \sum_{q=1}^{t} \lrp{ 1 - \frac{ S_{q-1} }{S_{q}} } \leq \sum_{q=1}^{t} \log \lrp{ \frac{ S_{q} }{S_{q-1}} } = \log S_{t}
\end{align}
where we use the inequality $1 - \frac{1}{x} \leq \log x$ for positive $x$. This can be quite tight ($L_t \approx \log S_{t}$) when the steps are small relative to $S_{t-1}$, so that $M_{t-1}$ is not much lower than $S_{t-1}$ at the times $L_t$ changes.
This decomposition is intimately connected with $\log S_t$, as we will see that the martingale $Q_t$ is effectively equal to $\evp{ \log S_{\infty} \mid \mathcal{F}_t}{} - 1$ (\Cref{thm:logmaxprocess}).
Notably, $\frac{M_{t}}{S_{t}}$ can be calculated pathwise, so a natural question is if it can be used as a peeking-robust statistic, i.e. if we can reason about its peeked version
$$ R_t := \min_{s \leq t} \frac{M_{s}}{S_{s}} \geq \min_{s \leq t} Z_{s} $$
which is a nonincreasing process.
The following result
shows that $R_t$ can be considered a valid $p$-value at any time horizon.
\begin{theorem}[An alternative $p$-value]
\label{thm:normpval}
With $\mathcal{U}$ denoting a standard $\text{Uniform}([0,1])$ random variable,
\begin{enumerate}[(a)]
\item
For any stopping time $\tau \leq \tau_{F}$, $R_{\tau} \succeq \mathcal{U}$.
\item
Define $\rho_{F} := \max \lrsetb{ t \leq \tau_{F} : Z_{t} = \min_{u \leq \tau_{F} } Z_{u} }$. Then $R_t \geq \prp{ \rho_F > t \mid \mathcal{F}_t}{}$.
\end{enumerate}
\end{theorem}
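Since both $M_t$ and $S_t$ are observable, $R_t$ can be evaluated pathwise; a minimal sketch (array-based, with $M$ given as a discrete path starting at $M_0 = 1$):
\begin{verbatim}
import numpy as np

def ratio_pvalue_process(M):
    # Pathwise computation of R_t = min_{s<=t} M_s/S_s, with S_s = max_{u<=s} M_u.
    S = np.maximum.accumulate(M)
    return np.minimum.accumulate(M/S)
\end{verbatim}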
\section{Estimating extrema of martingales}
\label{sec:aytest}
For fixed sample sizes, any statistic $T$ with null distribution $\mu$ can be computed from its $p$-value by applying the statistic's inverse complementary CDF $\bar{\mu}^{-1}$ to the $p$-value $P$.
In this way, we can think of any distribution $\mu$ in terms of a nondecreasing function $g (x) := \bar{\mu}^{-1} (1/x)$ for $x \geq 1$, so that $g \lrp{1/P}$ corresponds to the statistic $T$.
In this prototypical case, $T \preceq \mu$.
Similarly, given a martingale $M$ associated with a robust $p$-value process $H$, the equivalent statistic $g(M) = g \lrp{\frac{1}{H}}$ is dominated by $\mu$.
Assume $M$ is a martingale and suppose we test a statistic $g^{\mu} (M_t)$ with a process $A_t$.
The obvious choice $A_t = g^{\mu} (M_t)$ is prone to peeking.
We instead inoculate $A$ against future peeking by maximizing over the entire trajectory of $A$, and using that as a test statistic.
We directly estimate the extreme value $\max_{t} g (M_{t}) = g ( S_{\infty} )$
-- a quantity robust to peeking --
with the process (martingale) $\evp{g ( S_{\infty} ) \mid \mathcal{F}_t}{} $.\footnote{If the peeker can be assumed to have a limited waiting period of $T$ samples, $S_{\infty}$ can be replaced by $S_{T}$ in this analysis.}
This quantity has a natural motivation, but it depends on the future through $S_{\infty}$,
and confounds attempts at estimation with fixed-sample techniques.
Nevertheless, we show how to efficiently compute this as a stochastic process (\Cref{thm:AYpotentialdecomp}), and prove that its null distribution is $\preceq \mu$, under a ``good" stopping rule (\Cref{thm:ayprocessproperties}).
This characterization leads to results which are more generally novel (Section \ref{sec:univay}).
We also study the interplay between the statistic $\evp{g ( S_{\infty} ) \mid \mathcal{F}_t}{}$ and its own ``peeked" cumulative maximum $\max_{t} \evp{g ( S_{\infty} ) \mid \mathcal{F}_t}{}$, characterizing both in terms of $\mu$ (\Cref{thm:ayprocessproperties}, \Cref{thm:givenmuoptisay}).
\subsection{Estimating the running extremum}
\label{sec:runningmaxtest}
We can use the distributional characterization of Theorem \ref{thm:supermartdecomp} to provide insight into the statistic $\evp{g ( S_{\infty} ) \mid \mathcal{F}_t}{} $ and ways to compute it.
\begin{theorem}
\label{thm:AYpotentialdecomp}
For any nondecreasing function $g$, denote $\mathcal{G} (s) := \int_0^{1} g \left( \frac{s}{u} \right) du$ and its derivative $\mathcal{G}' (s) := \frac{d \mathcal{G} (s)}{ds} = \int_{s}^{\infty} \frac{dx}{x^2} \lrp{ g (x) - g (s) }$.
Then $\mathcal{G}$ is continuous, concave, and nondecreasing.
Also:
\begin{align*}
\evp{g ( S_{\infty} ) \mid \mathcal{F}_t}{}
\leq Y_t
&\stackrel{\textbf{(a)}}{:=} \left( 1 - \frac{M_t}{S_t} \right) g (S_t) + M_t \int_{S_t}^{\infty} \frac{g (x)}{x^2} dx \\
&\stackrel{\textbf{(b)}}{=} \left( 1 - \frac{M_t}{S_t} \right) g (S_t) + \frac{M_t}{S_t} \mathcal{G} (S_t) \\
&\stackrel{\textbf{(c)}}{=} \mathcal{G} (S_t) - (S_t - M_t) \mathcal{G}' (S_t) \\
&\stackrel{\textbf{(d)}}{=} g (S_t) + M_t \mathcal{G}' (S_t)
\end{align*}
with equality when $M$ is a martingale. Furthermore, $g \geq 0 \implies Y \geq 0$.
\end{theorem}
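The closed forms $(a)$--$(d)$ make $Y_t$ straightforward to evaluate pathwise once $g$ and $\mathcal{G}'$ are available; a sketch is given below (the choice $g(x) = \log x$, for which $\mathcal{G}'(s) = 1/s$, is only an example).
\begin{verbatim}
import numpy as np

def ay_process(M, g, Gprime):
    # Azema-Yor process of M with respect to g, via form (d) above:
    #   Y_t = g(S_t) + M_t * G'(S_t), with S_t the running maximum of M.
    M = np.asarray(M, dtype=float)
    S = np.maximum.accumulate(M)
    return g(S) + M*Gprime(S)

# Example with g(x) = log(x), for which G(x) = log(x) + 1 and G'(x) = 1/x:
# Y = ay_process(M, np.log, lambda s: 1.0/s)
\end{verbatim}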
Theorem \ref{thm:AYpotentialdecomp}$(a)$ shows exactly which choices of $g$ are appropriate, as $\evp{g ( S_{\infty} ) }{} $ can only be finite if $\frac{g (x)}{x^2}$ is integrable on intervals of the form $[a, \infty)$ with $a > 0$.
This paper assumes this hereafter:
\begin{assumption}
\label{ass:tailintegrab}
$\frac{g (x)}{x^2}$ has a finite integral on every interval $[a, \infty)$ with $a > 0$.
\end{assumption}
Theorem \ref{thm:AYpotentialdecomp} characterizes the test statistic $Y$, the \emph{Az\'{e}ma-Yor (AY) process} of $M$ with respect to $g$ \citep{AY79}. Thm. \ref{thm:AYpotentialdecomp}(b) can be interpreted as an expectation over two outcomes, using Theorem \ref{thm:supermartdecomp}(b). With probability $1 - \frac{
M_t}{S_t}$, the cumulative maximum is not exceeded in the future ($\tau_F \leq t$), so $g (S_{\infty}) = g (S_t)$. Alternatively with probability $\frac{M_t}{S_t}$, the cumulative maximum is exceeded in the future ($\tau_F > t$), and the conditional expectation of $g (S_{\infty})$ in this case is $\mathcal{G} (S_t)$, using Theorem \ref{thm:supermartdecomp} to get a precise idea of the lookahead maximum from the present.
The AY process $Y$, constructed by Theorem \ref{thm:AYpotentialdecomp} using any $g$, has some remarkable properties that further motivate its use.
\begin{lemma}[Properties of AY processes]
\label{lem:aydifferences}
Define the Bregman divergence $D_F (a, b) := F(a) - F(b) - (a-b) F'(b) \geq 0$ for any convex function $F$. Any AY process $Y$ defined as in Theorem \ref{thm:AYpotentialdecomp} is a supermartingale. The following relations hold pathwise for all $t$:
\begin{enumerate}[a)]
\item
$\displaystyle Y_t \geq g (S_t)$
\item
$Y_{t} - Y_{t-1}
= (M_{t} - M_{t-1}) \mathcal{G}' (S_{t-1}) - D_{- \mathcal{G}} (S_{t}, S_{t-1}) $
\item
$\displaystyle \max_{s \leq t} Y_s = \mathcal{G} (S_t) \geq Y_{t} \geq \mathcal{G} (M_t)$
\item
For any stochastic process $A$, if $A_u \geq \mathcal{G}(M_u)$ for all $u$, then $\displaystyle \max_{s \leq t} A_s \geq \max_{s \leq t} Y_s$.
\end{enumerate}
\end{lemma}
\subsection{Consequences and examples}
\begin{theorem}
\label{thm:logmaxprocess}
Define $ Q_t := \sum_{i=1}^{t} \lrp{ \frac{ M_{i} - M_{i-1} }{S_{i}} } $ as in \Cref{thm:supermartdecomp}(c).
Then $Q_t \leq \evp{\log (S_\infty) \mid \mathcal{F}_t}{} - 1 $.
Here the inequality is as tight as that in \eqref{eq:logmaxupp}.
\end{theorem}
\begin{proof}
First, note that $1 + Q_t = Z_t + L_t$ from \Cref{thm:supermartdecomp}(c).
Using \Cref{thm:AYpotentialdecomp} with $g(x) = \log (x)$,
we have $\mathcal{G} (x) = \log (x) + 1$, so
\begin{align*}
\evp{\log (S_\infty) \mid \mathcal{F}_t}{}
&= \log (S_t) + 1 - (S_t - M_t) \frac{1}{S_t}
= \log (S_t) + \frac{M_t}{S_t}
\geq L_t + Z_t
= 1 + Q_t
\end{align*}
\end{proof}
\Cref{thm:AYpotentialdecomp}(d) implies a simple formula for the mean of the ultimate maximum $\evp{ g( S_\infty ) }{}$.
\begin{corollary}
With $g$ and $\mathcal{G}$ defined as in \Cref{thm:AYpotentialdecomp}, $\evp{ g( S_\infty ) }{} = g( 1 ) + \mathcal{G}' (1)$.
\end{corollary}
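For instance, taking $g(x) = \log x$ as in \Cref{thm:logmaxprocess} gives $\mathcal{G}(x) = \log x + 1$ and $\mathcal{G}'(1) = 1$, so the corollary yields $\evp{ \log S_\infty }{} = \log 1 + 1 = 1$; this is consistent with \Cref{thm:supermartdecomp}(a), since $S_\infty \preceq 1/\mathcal{U}$ and $\evp{ \log (1/\mathcal{U}) }{} = 1$.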
\subsection{Bounding the null distribution}
\label{sec:designnull}
Next, we characterize the null distribution of the test statistic process $Y$.
Our stated motivation for $g$ in Sec. \ref{sec:runningmaxtest} involves a distribution $\mu$, which plays the role of the null in the fixed-sample case. We proceed to specify a stopping time $\tau^{\mu}$ such that the stopped test statistic satisfies the same null guarantee as the fixed-sample one: $Y_{\tau^{\mu}} \preceq \mu$. Our development depends on some properties of $\mu$.
\begin{definition}
\label{defn:supq}
A real-valued distribution $\mu$ has a \textbf{complementary CDF} $\bar{\mu} (x) := \prp{X \geq x}{X \sim \mu}$, a \textbf{tail quantile} function $\bar{\mu}^{-1} (\xi) := \min \lrsetb{ x: \bar{\mu} (x) < \xi }$, and \textbf{barycenter} function $\psi_{\mu} (x) = \evp{X \mid X \geq x }{\mu}$.
Its \textbf{superquantile} function is $\textsc{SQ}^{\mu} (\xi) := \psi_{\mu} \lrp{ \bar{\mu}^{-1} (\xi) } = \frac{1}{\xi} \int_{0}^{\xi} \bar{\mu}^{-1} (\lambda) d \lambda $, and its \textbf{Hardy-Littlewood transform} is the distribution $\mu^{\textsc{HL}} := \textsc{SQ}^{\mu} (\mathcal{U})$ for a $[0,1]$-uniform random variable $\mathcal{U}$ \citep{CEO12, RockafellarRoyset14}.
$\mu$ is associated with a nondecreasing function $ g^{\mu} (x) = \bar{\mu}^{-1} (1/x) $ with corresponding \textbf{future loss potential} $\mathcal{G}^{\mu} (x) := \int_0^{1} g^{\mu} \left( \frac{x}{u} \right) du = \textsc{SQ}^{\mu} (1/x)$.
\end{definition}
(Hereafter, superscripts of $\mu$ will be omitted when clear from context.) Theorem \ref{thm:AYpotentialdecomp} precisely characterizes the effect of the mediating function $g$ on the distribution of the given null process $Y$, fully specifying its distribution.
\begin{theorem}
\label{thm:ayprocessproperties}
Fix a $\mu$ and define $\displaystyle \tau^{\mu} := \min \lrsetb{ t : g^{\mu} (S_t) \geq Y_{t} } $. Then $\displaystyle \max_{s \leq \tau^{\mu} } Y_s \preceq \mu^{\textsc{HL}}$, and $Y_{\tau^{\mu}} \preceq \mu$.
\end{theorem}
\begin{theorem}[See also \cite{GM88}]
\label{thm:givenmuoptisay}
For any distribution $\mu$, nonnegative martingale $A$, and stopping time $\tau$, if $A_{\tau} \preceq \mu$, then $\displaystyle \max_{s \leq \tau} A_{s} \preceq \mu^{\textsc{HL}} $.
\end{theorem}
\subsection{Universality}
\label{sec:univay}
Having derived the AY process for any nonnegative supermartingale $M$, we have introduced a number of perspectives on its favorable properties and usefulness as a test statistic (Thm. \ref{thm:AYpotentialdecomp}, Lemma \ref{lem:aydifferences}). This section casts those earlier developments more powerfully, with a converse result: any stochastic process can be viewed as an AY-like process. We know this to be only a loose solution because $Y$ is a strict supermartingale even when $M$ is a martingale (by Lemma \ref{lem:aydifferences}). Instead, a recentered version of this process is appropriate, satisfying two important difference equations pathwise.
\begin{lemma}
\label{lem:bachelier}
Given any process $M$ and continuous concave nondecreasing nonnegative $\mathcal{G}$, there is an a.s. unique process $B$ with $B_0 = \mathcal{G} (1)$ such that for all $t$,
\begin{equation}
\label{eq:bacheq}
B_{t} - B_{t-1} = (M_{t} - M_{t-1}) \mathcal{G}' (S_{t-1})
\qquad \text{and} \qquad
\max_{s \leq t} B_s - B_t = (S_t - M_t) \mathcal{G}' (S_t)
\end{equation}
Due to \eqref{eq:bacheq}, if $M$ is a nonnegative (super)martingale respectively, so is $B$.
For $t \geq 0$, $B$ is defined by
\begin{align}
\label{eq:mmdecomp-rev}
B_t := \mathcal{G} (S_t) - (S_t - M_t) \mathcal{G}' (S_t) + \sum_{s=1}^{t} D_{- \mathcal{G}} (S_{s}, S_{s-1})
\end{align}
\end{lemma}
Lemma \ref{lem:bachelier} says that $B$, a bias-corrected version of $Y$ (w.r.t. $M$), is a ``damped" version of $M$ with variation modulated by the positive nonincreasing function $\mathcal{G}' (S_{t-1})$. This result couples the entire evolutions of $M$ and $B$, so after fixing initial conditions we can derive a unique decomposition of any process $B$ in terms of a martingale $M$ and its cumulative maximum $S$.
\begin{theorem}[Martingale-max (MM) Decomposition]
\label{thm:revbachelier}
Fix any continuous, concave, strictly increasing, nonnegative $\mathcal{G}$. Any process $B$ with $B_0 = \mathcal{G} (1)$ can be uniquely (a.s.) decomposed in terms of a ``variation process" $M$ and its running maximum $S$, such that $M_0 = S_0 = 1$ and \eqref{eq:bacheq} holds.
The processes $M_t$ and $S_t$ are defined for any $t \geq 1$ inductively by
\begin{equation}
\label{eq:revbachelier}
M_t = 1 + \sum_{s=1}^{t} \frac{B_s - B_{s-1}}{\mathcal{G}' (S_{s-1})} \qquad , \qquad S_t = \max_{s \leq t} M_s
\end{equation}
If $B$ is a (super)martingale respectively, so is $M$.
\end{theorem}
This depends on an attenuation function $\mathcal{G}$, decomposing the input $B$ into a variation process $M$ and its cumulative maximum $S$, which (as a nondecreasing process) functions as an ``intrinsic time" quantity. Thm. \ref{thm:revbachelier} vastly expands the scope of these analytical tools for AY processes to be applicable to stochastic processes more generally, readily allowing manipulation of cumulative maxima.
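A sketch of the recursion \eqref{eq:revbachelier} is given below; the array-based indexing and the assumption that $B$ is supplied as a discrete scalar path with $B_0 = \mathcal{G}(1)$ are implementation conveniences, and \texttt{Gprime} stands for the derivative of the chosen attenuation function $\mathcal{G}$.
\begin{verbatim}
import numpy as np

def mm_decomposition(B, Gprime):
    # Martingale-max decomposition: recover the variation process M and its running
    # maximum S from B via M_t = 1 + sum_{s<=t} (B_s - B_{s-1}) / G'(S_{s-1}).
    B = np.asarray(B, dtype=float)
    M = np.empty_like(B)
    S = np.empty_like(B)
    M[0] = S[0] = 1.0
    for t in range(1, len(B)):
        M[t] = M[t-1] + (B[t] - B[t-1])/Gprime(S[t-1])
        S[t] = max(S[t-1], M[t])
    return M, S
\end{verbatim}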
\subsection{Max-plus decompositions}
We can also cast the scenario of Section \ref{sec:aytest} in terms of the quantity $\mathcal{G} (M_t)$. This is a supermartingale if $M$ is \citep{D10}, and many supermartingales can be written in such a form. By Theorem \ref{thm:AYpotentialdecomp}, $\mathcal{G} (M_t) = \evp{ g \lrp{ \frac{M_t}{u} } }{u \sim \mathcal{U}} \geq \evp{ g (S_{\geq t}) \mid \mathcal{F}_t}{} = \evp{ \max_{s \geq t} g (M_s) \mid \mathcal{F}_t}{}$, where the inequality is by the stochastic dominance relation in Theorem \ref{thm:supermartdecomp}. In our scenario, this can be viewed without further restrictions as a unique decomposition of $\mathcal{G} (M_t)$, following the continuous-time development (\cite{EM08}, Prop. 5.8).
\begin{theorem}[Max-plus (MP) Decomposition]
\label{thm:maxplus}
Fix any continuous, concave, strictly increasing, nonnegative $\mathcal{G}$. For any nonnegative martingale $M_t$ with $M_0 = 1$, there is an a.s. unique process $L_t$ such that $\mathcal{G} (M_t) = \evp{ \max_{s \geq t} L_s \mid \mathcal{F}_t}{}$. This can be written as $L_t := g (M_t)$ for the nondecreasing function $g(x) := \mathcal{G} (x) - x \mathcal{G}' (x)$. Also, there is an a.s. unique supermartingale $Y$ with $Y_t \geq \mathcal{G} (M_t)$ for all $t$ pathwise.
\end{theorem}
\section{Discussion}
\label{sec:disc}
\begin{comment}
\subsection{Assumptions}
\label{sec:lessbounded}
\cite{HRMS18} discusses in detail other relaxations of the Gaussianity assumptions we used for convenience in our exposition of the $H$-value.
For statistics that are well-concentrated (e.g. bounded and sub-Gaussian), there is not much change to the framework.
Generally, it suffices to keep a running estimate of the cumulative variance $V_t$ and substitute $\frac{\lambda^2}{2} t$ with $\frac{\lambda^2}{2} V_t$ in the definition of $H$, with no modification of $\Gamma$ required (see \Cref{sec:generalhvals}).
When concentration assumptions hold, this is a valid $H$-value by the arguments we provide in earlier sections.
The other side is anti-concentration, which determines whether the $H$-value is overly large, instead of just large enough.
When martingale increments exhibit Gaussian anti-concentration, \cite{B14} proves that LIL anti-concentration also holds for the martingale; applied to our situation, this shows that the generalized $H$-value of \Cref{sec:generalhvals} is (within a constant factor of) the lowest possible in such situations.
\end{comment}
\subsection{Sequential testing}
\label{sec:relwork}
Treating the sample size as a random stopping time is central to the area of sequential testing.
Much work in this area has focused around the likelihood-ratio martingale of a distribution $f$ for data under a null distribution $g$: $M_t := \displaystyle \prod_{i=1}^{t} \frac{f (X_i)}{g (X_i)}$.
A prototypical example is the Sequential Probability Ratio Test (SPRT, from \cite{wald48}), which is known to minimize the expected stopping time under given type I and type II error constraints.
The likelihood-ratio martingale has been explored for stopping in other contexts as well \citep{robbinsNonparam, RS70, BBW97}, including for composite hypotheses \citep{wasserman2020universal, grunwald2019safe}.
These all deal with specific situations in which the martingale formulation allows for tests with anytime guarantees.
Frequentist or nonparametric perspectives on sequential testing typically contend with LIL behavior.
For example, the work of \cite{BR16} presents sequential nonparametric two-sample tests in a framework related to ours.
Such work requires changing the algorithm itself to be a sequential test in an appropriate setting, with a specified level of $\alpha$.
The setting of $p$-values is in some sense dual to this, as explored in recent work \citep{HRMS18, shin2020nonparametric}.
Sequential testing involves specifying a type I error a priori (and sometimes also type II, e.g. for the SPRT),
while what we are reporting is a minimum significance level at which the data show a deviation from the null.
This is exactly analogous to the relationship between Neyman-Pearson hypothesis testing and Fisher-style significance testing -- the method of this paper can be considered a robust Fisher-style significance test under martingale nulls, just as sequential testing builds on the Neyman-Pearson framework.
Similarly, we do not analyze any alternative hypothesis, which would affect the power of the test
(though the choice of test statistic governs the power).
\subsection{Technical tools}
The particulars of computing $H$-values are direct algorithmic realizations of the proof of \cite{B14}, which also shows that these $H$-values are as tight as possible within a constant factor on the probability. The broader martingale mixture argument has been studied in detail in an inverted form, as a uniform envelope on the base martingale $M$ \citep{R52, RS70}.
In testing maxima, we are guided by the framework fundamentally linking the $\textsc{SQ} (\cdot)$ function and the maxima of stochastic processes.
$\textsc{SQ}$ has been used in much the same time-uniform context (\cite{BD63}, Thm. 3a), and seminal continuous-time contributions showed that this can control the maximum of a continuous martingale in general settings \citep{DG78, AY79}.
Related work also includes the continuous (super)martingale ``multiplicative representations" of \cite{NY06}, whose techniques we repurpose.
The modern usage crucially involves a variational characterization of $\textsc{SQ}$ \citep{RockUCvar00} that would be an interesting avenue to future methods \citep{RockafellarRoyset14}.
Many stopping-time issues in this paper have been studied for Brownian motion, and some for martingales in continuous time under regularity conditions.
Stopping Brownian motion to induce a given stopped distribution has been well studied in probability, as the Skorokhod embedding problem \citep{obloj2004skorokhod}.
AY processes were originally proposed as a continuous solution of the Skorokhod problem \citep{AY79}, analogous to our discrete-time results on the null distribution of our AY test statistic, for which we adapted techniques from previous work \citep{GM88, CEO12}.
The difference equation of Lemma \ref{lem:bachelier} has been studied in the context of future maxima since \cite{Bach1906}.
To our knowledge the \textsc{MM} decomposition is novel, though in continuous time the AY process can be inverted directly \citep{EM08}.
\subsection{Future work}
The importance of peeking has long been recognized in the practice of statistical testing \citep{R52, AMR69, N00, W07p, SNS11}, mostly in a negative light.
The statistician typically does not know their sampling plan in advance, which standard hypothesis tests require.
The stopping rule is subject to many sources of variation: for example, it could be unethical to continue sampling when a significant effect is detected in a clinical trial \citep{I08}, or the experimenter could run out of resources to gather more data.
Solutions to this problem are often semi-heuristic and generally involve ``spending a budget of $\alpha$," the willingness to wrongly reject the null, over time.
Such methods are widely used \citep{Peto77, P77, SAL14} but are not uniformly robust to sampling strategies, and their execution suffers from many application-specific complexities arising from assumptions about the possible stopping times employed by the peeker \citep{pocock05}.
We hope to have presented general and useful theory to address this state of affairs.
A main open problem of interest here is applying these results to design and deploy new hypothesis tests.
\bibliographystyle{plainnat}
\section{Introduction}\label{sec:intro}
\IEEEPARstart{D}{ata} association across camera views is an intermediate key step in many complex computer vision tasks, such as multi-camera 3D pedestrian detection \cite{lima2021generalizable}, 3D pose estimation for multiple views \cite{dong2019fast,chen2020cross,kadkhodamohammadi2021generalizable}, multi-view multi-target tracking \cite{leal2012branch,zhang2008global,he2020multi}, and even robotic perception \cite{aragues2011consistent}, among others. As the end result of all these tasks depends, to a large extent, on how good the association is, it is worth investing in finding an optimal solution.
This association is typically stated as a bipartite graph matching problem and solved by applying minimum-cost flow techniques, e.g., resolving an association matrix using the Hungarian algorithm \cite{chu2019online, tan2019multi,xu2019spatial}. Re-Identification appearance cues \cite{he2020city} that may be combined with 3D spatial location \cite{chen2019multi,qian2020electricity} are widely used to build the association matrix. However, due to their computational cost, the use of minimum-cost flow-based solvers in practical implementations is limited to datasets that do not contain a large amount of data \cite{wang2019efficient}.
A fundamental issue in data association is the affinity measurement, i.e., the likelihood that two objects from different views belong to the same instance. In order to compare location coordinates (on image or ground-plane) and appearance features of the objects, the Euclidean and Cosine distances are widely used \cite{Narayan2017,ristani2018features,Maksai_2019_CVPR,he2020multi,zhou2021learning}.
However, this is an unsupervised and sub-optimal approach \cite{gou2018systematic}, since these fixed metrics are not adapted to the target domain. Moreover, a manual threshold is often required to make an association decision.
Metric learning \cite{kulis2012metric} incorporates training data in a supervised manner to achieve a better performance. It aims to learn a task-specific distance function, thus, a new feature space, such that features of the same instance are close.
Another challenge is handling the process of $n$-view matching (i.e., $n>2$). Previous works have been limited to performing cross-camera data association by pairs of views \cite{dong2019fast, leal2012branch,kadkhodamohammadi2021generalizable}. However, matching each pair of views independently may produce undesired cycle-inconsistent correspondences \cite{phillips2019all}, i.e., a pair of corresponding data in two different views not corresponding to the same data in another view.
Graphs are a type of structure that model data as a set of objects or concepts (represented by nodes) and the relationships between them (edges). Recently, traditional graph analysis using machine learning techniques has been applied to many fields such as social sciences \cite{hamilton2017inductive,kipf2016semi}, natural sciences \cite{sanchez2018graph,NIPS2017_f5077839}, knowledge graphs \cite{ijcai2017-250}, and computer vision \cite{liang2016semantic,hu2018relation,sheng2018heterogeneous}, among other research fields. Graph Neural Networks (GNNs) were initially introduced by \cite{scarselli2008graph} as an adaptation of Neural Networks (NNs) to work on graph structures, including cyclic, directed, and undirected graphs. GNNs are based on an information diffusion mechanism. The main idea is that a graph is built as a set of nodes linked according to the graph connectivity. The nodes and edges may update their hidden states by exchanging (propagating) information along their neighbourhood (adjacent nodes). Finally, the output is computed based on the states, locally at node or edge-level, or globally at graph-level, depending on the target task. The information (message) passing mechanisms along the graph structure were reformulated as a common framework called Message Passing Networks (MPNs) by \cite{gilmer2017neural} and later extended in \cite{battaglia2018relational}.
In this paper, we propose to treat the cross-camera data association task as a learning problem. We learn, simultaneously, the feature representations and the similarity function to merge bounding box detections of the same object from different camera views, aiming to deal with the view-point variation challenge. The idea of learning both the feature representation as well as the similarity at the same time has already been proposed and its effectiveness proven in other scopes such as single-view multi object tracking \cite{braso2020learning}, human pose recovery \cite{kanazawa2018end}, and vehicle re-identification \cite{zhu2018joint}, among others; however, to the best of our knowledge, it has not yet been considered for the cross-camera data association task. To this aim, we propose to learn a solution by training a GNN, since GNNs have been proven effective in similar tasks; as far as we know, this is the first paper tackling the cross-camera data association task using GNNs. We leverage the appearance and spatial information without making any prior assumption about the true number of objects in the scene. We perform learning directly in the graph domain with a Message Passing Network, thus providing a global association solution for all the cameras at once, instead of performing association by pairs.
In summary, the main contributions of this paper include:
\begin{itemize}
\item A new approach fully devoted to associating data from different views frame by frame, giving a global solution at once instead of finding local sub-optimal solutions.
\item The use of GNNs to solve the cross-camera data association task, previously unused in this scope.
\item A novel approach based on similarity learning, not yet considered for this task, which avoids the use of fixed thresholded distances.
\item An extensive ablation study and comparison with state of the art techniques showing the favorable performance of the proposed approach (we reach an improvement of 18.55\% over the best state-of-the-art approach).
\end{itemize}
\begin{figure*} \label{fig:block-diagram}
\centering
\includegraphics[width=0.99\textwidth,keepaspectratio]{img/blockdiagramgold.pdf}
\caption{Block diagram of the proposed approach. The Graph Construction module creates the graphs and defines the connections between nodes. In Graph Initialization, an initial feature embedding is associated to each node and edge. The Message Passing Network propagates and combines the embeddings of the nodes and edges along the graph. Finally, the Classifier defines which edges in the graph are active. At training time, the loss function is computed and backpropagated to perform edge classification. At inference time, the graph is pruned according to each edge's likelihood in the Post-Processing stage. Finally, the Connected Components of the graph are computed to define the association between detections of different views. }
\end{figure*}
\section{Related Work}
Within the computer vision field, cross-camera data association is the process of finding the correspondences between the data, typically points, regions or bounding boxes, of different camera views. This work focuses on associating multi-view detections frame by frame, processing all the camera views globally and simultaneously.
\subsection{Multi-view Association}
The traditional multi-view data association approaches address the issue by finding local optimum solutions, as opposed to a global one, i.e., by solving a sequence of decoupled pairwise matching sub-tasks over pairs of cameras independently. Moreover, these matchings tend to be solved as linear assignment problems. This greedy approximation does not leverage the redundancy in the data and often leads to cycle-inconsistent associations \cite{fathian2020clear}. For instance, a detection in view 1 is matched to another in view 2, and to another in view 3, but the detections in views 2 and 3 are not matched together. A multi-view multi-target tracking approach performing cross-camera data association by solving a min-cost problem over conventional graphs exploiting spatial and temporal information was presented in \cite{leal2012branch}. However, they proposed association only between pairs of cameras.
The task of multi-person 3D pose estimation from multiple views was tackled in \cite{dong2019fast} by carrying out data association across views limited to a pair of cameras at the same time.
\subsection{Affinity Measure}
Since deep learning spread to computer vision (e.g., in re-identification field \cite{yi2014deep, li2014deepreid}), deep features (i.e., embeddings) are chosen to describe bounding boxes' appearance for multi-view matching. In order to measure the affinity between these appearance features, the use of non-learnable pre-defined distances, such as Euclidean and Cosine distances, was widely spread \cite{Narayan2017,ristani2018features,Maksai_2019_CVPR,he2020multi,zhou2021learning}.
The Euclidean distance between spatial features is also used by \cite{lopez2018semantic} to associate multi-camera people detections. An extension of this work was proposed by \cite{lima2021generalizable}, in which they also use the Euclidean distance between location and appearance features to associate detections. In \cite{dong2019fast}, they combine re-identification appearance and geometrical information performing a linear assignment based on the Euclidean distance for the similarity computation, for the task of multi-camera 3D pose estimation. Although the usage of these fixed distances is widespread, it has been proven to be less efficient than similarity learning \cite{koestinger2012large,liao2015person,guillaumin2009you,davis2007information}. Similarity learning involves three main processes: 1) transformation of the data into feature vectors using a deep learning architecture (typically an encoder); 2) pairwise comparison of the vectors by using a distance metric (typically Euclidean); and 3) classification of this distance as similar or dissimilar (using any classifier). Since we adopt this methodology, these three concepts are present in our approach.
\subsection{Graph Neural Networks}
Graph Neural Networks are used in several domains; however, they have not been exploited until recently in computer vision. GNNs have been applied in point feature matching \cite{phillips2019all, fathian2020clear}, gesture learning \cite{xie2021sequential}, video moment retrieval \cite{gao2021learning}, visual question answering \cite{narasimhan2018out} or single-camera single-object tracking \cite{gao2019graph}. Regarding single-camera multi-object tracking, \cite{Papakis2020} proposes the use of a GNN to extract node and edge embeddings, but computes similarity using the cosine distance and performs data association via linear assignment, i.e., the Hungarian Algorithm. The first approach of performing feature and similarity learning jointly for associating detections was introduced in \cite{braso2020learning} by proposing a time-aware MPN variation: detecting associations across time to perform batch-based/offline single-camera multi-object tracking. Although they propose a very novel and well-developed idea, the appearance variation of a detected object (bounding box) from frame to frame is not very significant, in contrast to multi-view association, where the view-point variation is the main challenge.
\section{Method}
Although the proposed approach can be applied to associate any type of cross-camera object detections, we focus our evaluation on pedestrians. We exploit the connectivity of a Graph Neural Network (GNN) using a Message Passing Network (MPN) \cite{gilmer2017neural, battaglia2018relational} to perform feature learning as well as to provide a solution for cross-camera association by computing edge classification. It is composed of the following main stages: Graph Construction (Section \ref{sec:graph-const}), Graph Initialization (Section \ref{sec:feat-enc}), Message Passing Network (Section \ref{sec:mpn}) and the Classifier (Section \ref{classifier}). The Post-Processing stage (Section \ref{seq:post}) and the computation of the Connected Components (Section \ref{sec:CC}) are only performed at inference time.
\subsection{Problem Formulation \label{sec:prob-form}}
Let us define a graph $G=(V,E)$, where $V$ is a set of nodes and $E$ is the set of edges between them. On the basis of GNNs, we can give each unit in a graph, i.e. nodes and edges, a state representation, i.e., embedding, to represent its concept. In our case, nodes stand for all pedestrian detections in the available views, and edges for the relationship between them. The initialization, update and classification of nodes, and edges embeddings are described in the following subsections.
Let us assume $\mathcal{C} = \left\{c_{i}, \; i \in [1,M] \right\}$ as a set of $M$ simultaneously available cameras with overlapping fields of view (FoVs).
Let $\mathcal{P}_t=\left\{ p_i, \; i\in [1,|\mathcal{P}_t|\;]\right\}$, be the set of detected pedestrians in a scene at frame $t$. Each person identity $p_i$ is formed by a set of detections coming from a single or multiple camera views. Equivalently, an identity can be defined as a set of nodes in the graph such that $p_i = \left\{v_{j}, \; j \in [1,M_{p_{i}}] \right\}$, being $M_{p_{i}}$ the number of cameras detecting the person $p_i$.
The goal is, at each frame $t$, to group the nodes in the graph $G_{t}=(V,E)$ corresponding to the same people identity, obtaining as many clusters of nodes as pedestrians appear. To this aim, we perform edge classification based on the final edge embedding. Let us consider a binary variable for each edge in the graph in order to represent the graph partitions. The binary variable $y_{(v_i, v_j)}$ for the edge between the nodes $v_i$ and $v_j$ is defined as:
\begin{equation}
y_{(v_i, v_j)}=\left\{\begin{array}{ll}
1 & \exists \; p_{k} \in \mathcal{P}_{t} \; \textrm{ s.t. } \; v_i \in p_{k} \; \textrm{ and } \; v_j \in p_{k} \\
0 & \textrm{ otherwise.}
\end{array}\right.
\end{equation}
In other words, it is defined to be 1 (active) between nodes that belong to the same pedestrian, and 0 (non-active) otherwise. Thus, each person in the scene can be mapped into a group of nodes in the graph, that is, a Connected Component (CC) in the graph.
\subsection{Graph Construction} \label{sec:graph-const}
As input we consider, at each frame $t$, a set of pedestrian detections $\mathcal{D}_t = \left\{d_{i}, \; i \in [1,N_t]\right\}$, being $N_t$ the number of total detections at $t$. Each detection is defined by $d_i = (\textbf{b}_i, c_i)$, where $\textbf{b}_i$ and $c_i$ stand for its bounding box image and camera number, respectively.
We model the problem with an undirected graph $G_{t}=(V,E)$, where all the edges are bidirectional. Thus, at each frame $t$ a graph $G_t$ is constructed so that all detections from different camera views are connected. We apply this constraint since we work under the assumption that two detections coming from the same camera cannot belong to the same identity (person $p_i$).
Let $V=\left\{v_{i}, \; i \in [1,N_t]\right\}$ be the set of nodes, and each node $v_i \in V$ denotes a single detection $d_i \in \mathcal{D}_t$. Let $E$ be the set of edges connecting pair of nodes as follows:
\begin{equation} \label{Eq-1}
E=\left\{\left(v_{i}, v_{j}\right), c_i \neq c_j \right\}.
\end{equation}
Note that nodes under the same camera are not connected to each other.
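A minimal sketch of this construction rule is given below (each undirected edge is stored once; the helper name is ours, for illustration only, and is not part of any released code).
\begin{verbatim}
import itertools

def build_cross_camera_edges(cameras):
    # cameras[i] is the camera index c_i of detection/node i.
    # Edges connect every pair of nodes whose detections come from different cameras.
    return [(i, j) for i, j in itertools.combinations(range(len(cameras)), 2)
            if cameras[i] != cameras[j]]

# Example: 5 detections seen by cameras 1, 1, 2, 3, 3
# build_cross_camera_edges([1, 1, 2, 3, 3]) -> [(0, 2), (0, 3), (0, 4), (1, 2), ...]
\end{verbatim}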
\subsection{Graph Initialization: Feature Encoding} \label{sec:feat-enc}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,keepaspectratio]{img/GraphInitialization.pdf}
\caption{Graph Initialization. \label{fig:graph-ini}}
\end{figure}
As stated in Section \ref{sec:intro}, the target of GNNs is to learn final state embeddings, for each node and edge in the graph, containing information about their neighbourhood, which can be used to generate an output, such as the node or edge prediction label. This section details the states initialization; update and classification are explained in the following subsections. All the nodes and edges in $G_t$ are initialized with their respective initial state embeddings (see Figure \ref{fig:graph-ini}).
The initial node embeddings $h_{v_{i}}$ rely only on appearance information. For each detection $d_i \in \mathcal{D}_t$, its bounding box image $\textbf{b}_i$ is fed to a Convolutional Neural Network (CNN) to extract its appearance descriptor. The latter goes through the learnable node encoder $\mathcal{E}_v$, which outputs the node embedding $h_{v_i}$ used to initialize node $v_i$. It can be defined as follows:
\begin{equation}
h_{v_{i}} = \mathcal{E}_{v}( \textrm{\small CNN} (\textbf{b}_i)).
\end{equation}
On the other hand, the initial edge embeddings rely on a concatenation of the appearance and spatial information related to the pair of nodes that the edge is connecting. Let us define the related appearance feature between two nodes as follows:
\begin{equation}
\begin{split}
\Delta f_{i,j} = [\; &||\; \textrm{\small CNN}(\textbf{b}_i) , \textrm{\small CNN}(\textbf{b}_j)\;||_2, \\
& \textit{\small cos\_similarity}(\textrm{\small CNN}(\textbf{b}_i),\textrm{\small CNN}(\textbf{b}_j))\;]
\end{split}
\end{equation}
where $[\cdot,\cdot]$ denotes the concatenation of the Euclidean distance and Cosine similarity between the appearance descriptors of a pair of bounding boxes $\textbf{b}_i$ and $\textbf{b}_j$. Both distances are considered, as they are the most commonly used for computing feature distances, in order to obtain a higher-dimensional and more distinctive feature.
Since we only consider cross-camera associations (Equation \ref{Eq-1}), in order to obtain a relative spatial distance between detections from different cameras, a transformation to a common ground plane is required. More specifically, we need to obtain an approximation, as accurate as possible, of the foot position of each detection in a common framework.
Given a bounding box $\textbf{b}_i$ associated to the node $v_i$, the middle point of its base is defined by $(x_i + \frac{w_i}{2}, y_i + h_i)$, where $(x_i, y_i)$ are the upper-left corner pixel coordinates and $(w_i, h_i)$ the width and height. This point is projected to the common ground plane as
\begin{equation}
(X_{i}, Y_{i}) = \textbf{H}_{c_i}(x_i + \frac{w_i}{2}, \; y_i + h_i),
\end{equation}
being $\textbf{H}_{c_i}$ the homography matrix that transforms coordinates from the image plane of the camera $c_i$ to coordinates in the common ground plane.
Hence, we compute the relative spatial distance information between two nodes as
\begin{equation}
\begin{split}
\Delta s_{i,j} = [\; &||\; (X_i,Y_i), (X_j, Y_j)\; ||_1, \\
&||\; (X_i,Y_i), (X_j,Y_j)\; ||_2\;],
\end{split}
\end{equation}
the concatenation of the Manhattan and Euclidean distances, as they are the most commonly used for computing spatial distances, in order to obtain a more distinctive feature.
Finally, the concatenation of the distances, $ \Delta f_{i,j}$ and $ \Delta s_{i,j}$, is fed to
a learnable edge encoder $\mathcal{E}_e$ to obtain the initial edge embedding:
\begin{equation}
h_{(v_{i},v_{j})} = \mathcal{E}_{e}( [\Delta f_{i,j},\; \Delta s_{i,j}]).
\end{equation}
Both Multi Layer Perceptron (MLP) encoders (FCs + ReLU), $\mathcal{E}_{v}$ and $\mathcal{E}_{e}$, learn the optimal adjustment of the features to the target association task.
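A sketch of the feature computations that feed these encoders is shown below; the appearance descriptors, the homography matrix, and the helper names are placeholders for illustration, not the actual implementation.
\begin{verbatim}
import numpy as np

def edge_features(feat_i, feat_j, ground_i, ground_j):
    # feat_*: appearance descriptors (e.g. CNN embeddings); ground_*: (X, Y) ground-plane points.
    # Returns the concatenation [Delta f_ij, Delta s_ij] fed to the edge encoder E_e.
    cos_sim = np.dot(feat_i, feat_j)/(np.linalg.norm(feat_i)*np.linalg.norm(feat_j))
    delta_f = [np.linalg.norm(feat_i - feat_j), cos_sim]       # Euclidean + cosine
    diff = np.asarray(ground_i) - np.asarray(ground_j)
    delta_s = [np.abs(diff).sum(), np.linalg.norm(diff)]       # Manhattan + Euclidean
    return np.array(delta_f + delta_s)

def project_foot_point(x, y, w, h, H):
    # Projects the bounding-box foot point (x + w/2, y + h) to the ground plane
    # with the 3x3 homography matrix H of the corresponding camera.
    pt = H @ np.array([x + w/2.0, y + h, 1.0])
    return pt[:2]/pt[2]
\end{verbatim}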
\subsection{Message Passing Network} \label{sec:mpn}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth,keepaspectratio]{img/MPN.pdf}
\caption{Block diagram describing the edge and node update steps of the MPN at a given step $l$. \label{fig:mpn}}
\end{figure}
Figure \ref{fig:block-diagram} depicts the main block diagram of the method.
The goal of MPNs is to propagate neural messages between neighbouring nodes and edges along the graph $G_t$. This diffusion is accomplished by embedding updates, known as message passing steps, introduced by \cite{gilmer2017neural}. At each propagation step, each node's and edge's sent messages are computed; then, every node and edge aggregates their received messages; and, finally, their representation is updated by merging the incoming information with their own previous representation. Formally, each node and edge are initialized with a starting representation, usually called initial state (see Section \ref{sec:feat-enc}). The last state is commonly used for prediction (see Section \ref{classifier}).
Our formulation is based on \cite{battaglia2018relational, kipf2016semi} and divides each message passing step into two stages: edge update and node update. Both updates are iteratively performed over $L$ iterations, i.e., $L$ message passing steps. Note that the higher $L$ is, the farther the information is propagated along the graph. It plays a similar role to the receptive field of Convolutional Neural Networks (CNNs). For each iteration $l \in [1,L]$, the embedding of the edge connecting nodes $v_i$ and $v_j$ is updated as follows:
\begin{equation}
h_{(v_{i},v_{j})} ^{l} = \mathcal{U}_e ([ h_{v_i}^{l-1}, h_{v_j}^{l-1}, h_{(v_i,v_j)}^{l-1}]),
\end{equation}
where $\mathcal{U}_e$ is a learnable MLP encoder (FCs + ReLU).
On the other hand, the node embedding is updated by aggregating all the messages coming to the node $v_i$ from its adjacent nodes, i.e., its neighbors (see Figure \ref{fig:mpn}):
\begin{equation}
h_{v_i}^l = \sum_{j\in \mathcal{N}(v_i)} m_{(v_i,v_j)}^l,
\end{equation}
where $\mathcal{N}(\cdot)$ denotes the neighbouring nodes, and
\begin{equation}
m_{(v_i,v_j)}^l = \mathcal{U}_v([ h_{v_i}^{l-1}, h_{(v_i,v_j)}^{l}]),
\end{equation}
where $\mathcal{U}_v$ is another learnable MLP encoder (FCs + ReLU). The update functions $\mathcal{U}_e$ and $\mathcal{U}_v$ are learnt and shared across $G_t$. It is worth mentioning that during all the iterations of the message passing procedure, the propagation of information happens simultaneously for all nodes and edges in $G_t$. Also note that during the first iteration $l=1$, $l-1 = 0$ denotes the initial state of the graph, just after graph construction (Section \ref{sec:graph-const}) and features initialization (Section \ref{sec:feat-enc}).
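A PyTorch sketch of one message passing step following the update equations above is given below; the embedding dimensions, module structure, and the handling of both endpoints of each undirected edge are illustrative assumptions rather than the released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    # One step: edge update U_e followed by node update U_v with sum aggregation.
    def __init__(self, node_dim=32, edge_dim=16):
        super().__init__()
        self.U_e = nn.Sequential(nn.Linear(2*node_dim + edge_dim, edge_dim), nn.ReLU())
        self.U_v = nn.Sequential(nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU())

    def forward(self, h_v, h_e, edges):
        # h_v: (N, node_dim) node embeddings, h_e: (E, edge_dim) edge embeddings,
        # edges: list of (i, j) index pairs of the undirected graph (stored once).
        src = torch.tensor([i for i, _ in edges])
        dst = torch.tensor([j for _, j in edges])
        h_e_new = self.U_e(torch.cat([h_v[src], h_v[dst], h_e], dim=1))
        # Message aggregated at a node uses that node's previous embedding and the
        # updated edge embedding, as in the node update equation above.
        m_to_src = self.U_v(torch.cat([h_v[src], h_e_new], dim=1))
        m_to_dst = self.U_v(torch.cat([h_v[dst], h_e_new], dim=1))
        h_v_new = torch.zeros_like(h_v)
        h_v_new.index_add_(0, src, m_to_src)
        h_v_new.index_add_(0, dst, m_to_dst)
        return h_v_new, h_e_new
\end{verbatim}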
\subsection{Classifier} \label{classifier}
Our proposal is to learn to predict which edges in the graph are active (i.e., connect detections of the same person identity). We train our model to predict $\hat{y}_{(v_i,v_j)}$, the value of the binary variable $y_{(v_i, v_j)}$ for each edge in $G_t$. It can be seen as an edge classification problem, using $y_{(v_i, v_j)}$ as labels. During inference, the graph is pruned according to each edge's prediction and a post-processing strategy is followed to fulfill some constraints (see Section \ref{seq:post}).
The classification of a given edge at a given iteration is computed by:
\begin{equation}
\hat{y}_{(v_i,v_j)}^l = \mathcal{C}(h_{(v_i,v_j)}^l),
\end{equation}
where $\mathcal{C}$ is the learnable classifier, i.e., another MLP (FC + ReLU) followed by a sigmoid function, which outputs a single prediction value.
\subsubsection{Training}
To compute the training loss of the graph $G_t$, we use the classical binary Cross-Entropy loss (CE) \cite{goodfellow2016deep}, aggregated over all edges in $E$ (equation \ref{Eq-1}) and all iterations, similarly to \cite{braso2020learning}:
\begin{equation}
\mathcal{L}_{G_t} = \sum_{l=1}^L \sum_{(v_i,v_j)\in E} \textrm{CE}(\hat{y}^{l}_{(v_i,v_j)},y_{(v_i, v_j)} ).
\end{equation}
In the end, we learn a method capable of directly predicting partitions of the graph by performing edge classification.
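As a minimal sketch of the classifier and of the multi-step loss, assuming the edge embeddings of every iteration are kept in a list (the hidden layer size is an assumption):
\begin{verbatim}
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """C: a small MLP followed by a sigmoid, one probability per edge."""
    def __init__(self, edge_dim=16, hidden=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(edge_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, h_edges):
        # h_edges: (E, edge_dim) -> predictions y_hat in [0, 1], shape (E,)
        return self.mlp(h_edges).squeeze(-1)

def graph_loss(edge_states, labels, classifier):
    """Binary cross-entropy summed over all edges and all L iterations.

    edge_states: list of (E, edge_dim) tensors, one per iteration l = 1..L.
    labels:      (E,) tensor with ground-truth y_(vi,vj) in {0, 1}.
    """
    bce = nn.BCELoss(reduction="sum")
    return sum(bce(classifier(h), labels.float()) for h in edge_states)
\end{verbatim}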
\subsubsection{Inference}
As previously mentioned, the goal is, at each frame $t$, to group the nodes in the graph $G_t$ corresponding to the same person identity, obtaining as many CCs as pedestrians appear.
To infer these components in the graph, we consider the output of the trained MPN model at the final iteration $l=L$. Thus, we obtain, for each edge in $G_t$, $\hat{y}_{(v_i,v_j)}^L \in [0,1]$, denoting the probability that the edge is active. This prediction is then binarized, as is usual in the literature, as follows:
\begin{equation}
\hat{y}_{(v_i,v_j)}^B=
\begin{cases}
0, & \text{if}\ \hat{y}_{(v_i,v_j)}^L \; < 0.5 \\
1, & \text{otherwise.}
\end{cases}
\end{equation}
$\hat{y}_{(v_i,v_j)}^B$ is used to classify edges as active or non-active. Non-active edges are pruned, while the active ones are kept. The last stage of the inference is the post-processing strategy, which is described below.
\subsection{Post-processing \label{seq:post}}
Let us define the flow of a node as the number of edges connected to it. As each node can only belong to a single identity, we work under the assumption that each node can be connected to at most $M-1$ other nodes; thus, each pedestrian identity can be composed of detections from at most $M$ views.
Therefore, for each node $v_i$, the following flow constraint must be satisfied:
\begin{equation} \label{eq:3}
\sum_{\substack{j=1 \\ j \neq i}}^{|V|} \hat{y}_{(v_i,v_j)}^B \leq M-1,
\end{equation}
meaning that each node can be connected to at most $M-1$ other nodes, where $M$ is the total number of cameras (see Figure \ref{fig:FlowConstr1}).
Graph theory \cite{biggs1986graph} defines \textit{cycles} in graphs as non-empty closed paths starting and ending at the same node. On the other hand, \textit{bridges} of a graph are edges not contained in any cycle, i.e., edges whose suppression increases the number of Connected Components of the graph. Bridges represent vulnerabilities in a connected network and are useful for network design and reasoning. An example of a bridge is shown in Figure \ref{fig:FlowConstr1}a: the edge connecting nodes 4 and 5. Note that this is the only edge in the graph whose suppression affects the number of connected components.
\begin{figure}
\centering
\includegraphics[height=9cm,keepaspectratio]{img/FlowConstraint1.pdf}
\caption{\textit{Pruning} post-processing example of a graph with $M=4$ cameras. (a) shows the initial state of the graph after inference and before the post-processing stage. Nodes 4 and 5 do not fulfill the condition in Equation \ref{eq:3}, since they are connected to more than $M-1=3$ other nodes. As the edge (4,5) is the only bridge among the candidate edges to remove, it is removed and the final correct sub-graphs are obtained, see (b). \label{fig:FlowConstr1}}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=9cm,keepaspectratio]{img/FlowConstraint2.pdf}
\caption{\textit{Splitting} post-processing example of a graph with $M=4$. (a) shows the initial state of the graph after inference and before the post-processing stage. All nodes fulfill the condition in Equation \ref{eq:3}. However, the graph does not fulfill the condition in Equation \ref{eq:4}, since it forms a single connected component of size 8. The first candidate edges to remove are the bridges (2,4), (4,5) and (5,7). (b) shows the final sub-graphs after removing the minimum-probability bridge. \label{fig:FlowConstr2}}
\end{figure}
To ensure the flow constraint in Equation \ref{eq:3}, we perform the following \textit{pruning} strategy:
\begin{algorithm}[H]
\caption{\textit{Pruning}}\label{alg:alg1}
\begin{algorithmic}[1]
\STATE Compute the flow of each node in $V$
\STATE \textbf{while} there are nodes violating equation \ref{eq:3} \textbf{do}:
\STATE \hspace{0.5cm} \textbf{if} there is one bridge \textbf{then}
\STATE \hspace{1cm} remove it
\STATE \hspace{0.5cm} \textbf{if} there is more than one bridge \textbf{then}
\STATE \hspace{1cm} remove bridge edge with $min(\hat{y}_{(v_i,v_j)}^L) $
\STATE \hspace{0.5cm} \textbf{if} there is no bridge \textbf{then}
\STATE \hspace{1cm} remove edge with $min(\hat{y}_{(v_i,v_j)}^L) $
\STATE \textbf{end}
\end{algorithmic}
\label{alg1}
\end{algorithm}
The second constraint that must be fulfilled refers to the size of each cluster. Since each identity should be represented by a unique connected component of nodes in the graph, it is logical to assume that
\begin{equation} \label{eq:4}
|p_i| \leq M,
\end{equation}
namely, the cardinality of each identity set should be at most the total number of cameras (see Figure \ref{fig:FlowConstr2}). To ensure the cardinality constraint, we perform the following \textit{splitting} strategy:
\begin{algorithm}[H]
\caption{\textit{Splitting}}\label{alg:alg2}
\begin{algorithmic}[1]
\STATE Compute the size of each connected component in $G_t$
\STATE \textbf{while} there are components violating equation \ref{eq:4} \textbf{do}:
\STATE \hspace{0.5cm} \textbf{if} there is one bridge \textbf{then}
\STATE \hspace{1cm} remove it
\STATE \hspace{0.5cm} \textbf{if} there is more than one bridge \textbf{then}
\STATE \hspace{1cm} remove bridge edge with $min(\hat{y}_{(v_i,v_j)}^L) $
\STATE \hspace{0.5cm} \textbf{if} there is no bridge \textbf{then}
\STATE \hspace{1cm} remove edge with $min(\hat{y}_{(v_i,v_j)}^L) $
\STATE \textbf{end}
\end{algorithmic}
\label{alg2}
\end{algorithm}
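Both strategies rely on the same bridge-aware edge-removal rule, and the identities are finally read off as connected components (Section \ref{sec:CC}). The following sketch, based on the networkx library, is one possible implementation; it assumes the inferred graph stores the edge probabilities $\hat{y}^{L}_{(v_i,v_j)}$ in an edge attribute named \texttt{prob}, which is an illustrative convention rather than part of the method.
\begin{verbatim}
import networkx as nx

def remove_one_edge(G, candidates):
    """Remove one candidate edge, preferring bridges with lowest probability."""
    bridges = [e for e in nx.bridges(G)
               if e in candidates or tuple(reversed(e)) in candidates]
    pool = bridges if bridges else list(candidates)
    worst = min(pool, key=lambda e: G.edges[e]["prob"])
    G.remove_edge(*worst)

def prune(G, M):
    """Flow constraint: every node keeps at most M - 1 incident edges."""
    while True:
        bad = [v for v in G.nodes if G.degree(v) > M - 1]
        if not bad:
            return
        remove_one_edge(G, list(G.edges(bad[0])))

def split(G, M):
    """Cardinality constraint: every connected component has at most M nodes."""
    while True:
        big = [c for c in nx.connected_components(G) if len(c) > M]
        if not big:
            return
        remove_one_edge(G, list(G.subgraph(big[0]).edges))

def identities(G):
    """Each connected component corresponds to one pedestrian identity."""
    return list(nx.connected_components(G))
\end{verbatim}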
\subsection{Connected Components} \label{sec:CC}
Once the final graph is computed, to obtain the clusters of nodes containing the different-view detections of the same person identity, we compute the Connected Components (CCs) of the graph (see Figure \ref{fig:block-diagram}). A component, i.e., a partition, of a graph is a CC if there is a path between every pair of its nodes. Thus, each component denotes an identity, and each node in the component corresponds to that identity in a certain camera view.
\section{Experiments}
\subsection{Evaluation framework}
\subsubsection{Datasets\label{sec:datasets}}
\begin{table}
\renewcommand{\arraystretch}{1.2}
\caption{ \label{Tab:data-sets}Definition of the sets of data used for training and inference. Note that S1, S2 and S3 cover all possible combinations of EPFL sequences.}
\begin{center}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{cccc|ccc}
\hline
& \multicolumn{3}{c}{TRAINING} & \multicolumn{3}{c}{INFERENCE}\tabularnewline
\hline
& Laboratory & Terrace & Basketball & Laboratory & Terrace & Basketball \tabularnewline
\hline \hline
\textbf{S1} & \checkmark & \checkmark & & & & \checkmark \tabularnewline
\hline
\textbf{S2} & \checkmark & & \checkmark & & \checkmark & \tabularnewline
\hline
\textbf{S3} & & \checkmark & \checkmark & \checkmark & & \tabularnewline
\hline
\end{tabular}}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,keepaspectratio]{img/sample_views.PNG}
\caption{Sample frames from EPFL sequences: Basketball, Laboratory and Terrace. \label{fig:sample-views}}
\end{figure}
To evaluate the proposed approach, we require datasets that simultaneously provide multiple views of the same pedestrians and calibration information for camera-to-world mapping. We consider the EPFL\footnote{https://www.epfl.ch/labs/cvlab/data/data-pom-index-php/} multi-camera pedestrian videos dataset.
The EPFL dataset is divided into different sequences; we use the Terrace, Laboratory, and Basketball sequences, which correspond to outdoor, indoor, and sport scenarios, respectively. All sequences are captured by 4 cameras with overlapping fields of view (FOVs), and up to 9 pedestrians appear in them. Figure \ref{fig:sample-views} depicts sample frames from the 4 cameras of each sequence. Note that each sequence is totally independent of the others in terms of scene location, lighting conditions, and the people who appear, so the sequences correspond to entirely different domains.
Table \ref{Tab:data-sets} details the subsets of data we have defined for the experiments. Note that sets S1-S3 cover all possible training-inference combinations of the EPFL sequences. We consider these combinations to show that the method's operation does not depend on the visual appearance of the scene.
\subsubsection{Evaluation Metrics\label{sec:eval-metrics}}
Since our objective is to group nodes in the graph corresponding to detections of the same person, we evaluate our data association by measuring the clustering performance.
We can evaluate the clustering performance using supervised metrics, as detections' identities are known.
In \cite{rosenberg2007v}, the following desirable objectives of cluster assignments were defined: Homogeneity: each cluster contains only samples of the same class; and Completeness: all samples of a given class are assigned to the same cluster. V-measure is also defined in \cite{rosenberg2007v} as the harmonic mean between Homogeneity and Completeness scores (as Precision and Recall are commonly combined into F-measure \cite{van1979information}). Homogeneity (H), Completeness (C) and V-measure (V-m) scores range from 0 to 1, where 1 represents perfect clustering.
These metrics are limited in the case of random labeling with a large number of clusters (i.e., their scores may not be low). Therefore, we also consider chance-corrected metrics such as the Adjusted Rand Index (ARI) \cite{hubert1985comparing}, which focuses on the assignment similarity, and the Adjusted Mutual Information (AMI) \cite{vinh2010information}, which relies on the mutual information of two assignments \cite{banerjee2005clustering}. Both ARI and AMI range from 1 (perfect match) to 0 (random labeling).
All the considered metrics (H, C, V-m, ARI and AMI) score in $[0,1]$, but we provide them in the interval $[0,100]$.
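As a usage sketch, all five scores are available in scikit-learn and can be rescaled to $[0,100]$ as reported in our tables (the label vectors below are illustrative):
\begin{verbatim}
from sklearn.metrics import (homogeneity_score, completeness_score,
                             v_measure_score, adjusted_rand_score,
                             adjusted_mutual_info_score)

# true_ids: ground-truth identity of each detection;
# pred_ids: connected component assigned to each detection (example values).
true_ids = [0, 0, 1, 1, 2, 2]
pred_ids = [0, 0, 1, 2, 2, 2]

H   = 100 * homogeneity_score(true_ids, pred_ids)
C   = 100 * completeness_score(true_ids, pred_ids)
Vm  = 100 * v_measure_score(true_ids, pred_ids)
ARI = 100 * adjusted_rand_score(true_ids, pred_ids)
AMI = 100 * adjusted_mutual_info_score(true_ids, pred_ids)
print(f"H={H:.1f} C={C:.1f} V-m={Vm:.1f} ARI={ARI:.1f} AMI={AMI:.1f}")
\end{verbatim}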
\subsubsection{Implementation Details}
As input to the proposed and compared approaches, we consider the ground-truth bounding boxes to avoid biases related to the detector's performance. Furthermore, to remove the dependency on a particular CNN feature extractor, we consider several publicly available state-of-the-art extractors. An ablation study on their influence is performed in Section \ref{sec:ab-feat}. Each bounding-box input image is resized according to the feature extractor employed and normalized by the mean and standard deviation of the ImageNet dataset \cite{russakovsky2015imagenet}. To reduce overfitting and improve generalization, we apply several random data augmentation techniques, namely horizontal flip, color jitter and random erasing.
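A possible torchvision implementation of this input pipeline is sketched below; the target resolution and the jitter strengths are assumptions, since they depend on the feature extractor employed:
\begin{verbatim}
import torchvision.transforms as T

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD  = [0.229, 0.224, 0.225]

# The input size depends on the chosen CNN feature extractor; 256x128 is an assumption.
train_transform = T.Compose([
    T.Resize((256, 128)),
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.ToTensor(),
    T.Normalize(IMAGENET_MEAN, IMAGENET_STD),
    T.RandomErasing(p=0.5),   # operates on the tensor, hence placed after ToTensor
])
\end{verbatim}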
The proposed approach has been implemented using the PyTorch and PyTorch Geometric frameworks, running on an NVIDIA TITAN RTX 24GB Graphics Processing Unit. To minimize the loss function and optimize the network parameters, we adopt the Stochastic Gradient Descent (SGD) solver. Since our model is trained from scratch, and in order to avoid training instability, we perform, as proposed in \cite{goyal2017accurate}, a gradual warmup strategy that increases the learning rate linearly from 0 to the initial learning rate during 5 epochs. The initial learning rate is heuristically set to $5\cdot10^{-3}$ and it decays over 20 epochs following the cosine annealing strategy introduced in \cite{sgdr}, whose effectiveness with respect to step decay has already been shown \cite{he2019bag,wightman2021resnet}. The batch size is set to 64, i.e., 64 graphs, since one graph per frame is computed (see Section \ref{sec:prob-form}).
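The optimization schedule can be reproduced, for instance, with the built-in PyTorch schedulers; the momentum value and the exact warmup start factor below are assumptions not specified above:
\begin{verbatim}
import torch
import torch.nn as nn

model = nn.Linear(8, 1)   # placeholder for the full MPN model
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)

# 5-epoch linear warmup towards the initial learning rate, then cosine decay over 20 epochs.
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=1e-3, total_iters=5)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[5])

for epoch in range(25):
    # ... forward/backward passes over batches of 64 graphs would go here ...
    optimizer.step()      # shown once per epoch only to keep the sketch short
    scheduler.step()
\end{verbatim}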
In Table \ref{tab:sizes-fc} we report the specifications of each learnable encoder of the network, and the classifier.
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\caption{Specification of each learnable encoder and the classifier of the network. Each type, input and output sizes of each layer are detailed. Note that the input size of the layer 0 of $\mathcal{E}_v$ depends on the output dimension of the $CNN$ feature extractor (we consider
|
0.01 $ & $300$ \\
RP6 & $ 1 $ & $ 1 $ & $ 2 $ & $ -2 $ &$ 0.1 $ & $ 0.1 $ & $ 0 $ & $ 0.8 $ & $200$ \\
\end{tabular}
\end{center}
\caption{Riemann problems. Initial condition, initial position of the discontinuity, $x_{c}$, final time, $t_{\mathrm{end}}$, and number of mesh cells on $x$-direction, $N_{x}$, for each Riemann problem.}
\label{tab:RP_IC}
\end{table}
The first test analysed, RP1, is the classical Sod problem presented for the first time in \cite{Sod78}. Figure \ref{fig:RP1_o1} shows a good agreement between the numerical and the exact solution for the shock, the contact, and the rarefaction waves.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Sod_o1_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Sod_o1_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Sod_o1_pressure}
\caption{Riemann problem 1 (Sod). 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.2$ using the first order method ($\mathrm{CFL}_{c}=3.35$, $c_{\alpha}=1$, $M\approx 0.93$).}
\label{fig:RP1_o1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Sod_lader_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Sod_lader_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Sod_lader_pressure}
\caption{Riemann problem 1 (Sod). 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.2$ using the LADER-ENO method ($\mathrm{CFL}_{c}=3.35$, $c_{\alpha}=1$, $M\approx 0.93$).}
\label{fig:RP1_lader}
\end{figure}
RP2 corresponds to a double rarefaction problem. Overall, the shape of the exact solution is captured, even if a finer mesh would be needed to better approximate the contact discontinuity between the two rarefactions, see Figures \ref{fig:RP2_o1}-\ref{fig:RP2_lader}.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_DR_cfl1_o1_e2_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_DR_cfl1_o1_e2_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_DR_cfl1_o1_e2_pressure}
\caption{Riemann problem 2 (Double rarefaction). 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.15$ using the first order method ($\mathrm{CFL}_{c}=0.4$, $c_{\alpha}=2$, $M\approx 1.37$).}
\label{fig:RP2_o1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_DR_cfl1_LE_e2_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_DR_cfl1_LE_e2_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_DR_cfl1_LE_e2_pressure}
\caption{Riemann problem 2 (Double rarefaction). 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.15$ using the LADER-ENO method ($\mathrm{CFL}_{c}=0.4$, $c_{\alpha}=2$, $M\approx 1.37$).}
\label{fig:RP2_lader}
\end{figure}
The third test, RP3, corresponds to the Lax shock tube and is used to assess the ability of the method to deal with simple waves. The obtained results, presented in Figures \ref{fig:RP3_o1}-\ref{fig:RP3_lader}, match the exact reference solution well.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Lax_cfl1_o1_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Lax_cfl1_o1_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Lax_cfl1_o1_pressure}
\caption{Riemann problem 3 (Lax). 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.14$ using the first order method on mesh M1 ($\mathrm{CFL}_{c}=2.78$, $M\approx 0.94$).}
\label{fig:RP3_o1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Lax_cfl1_lader_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Lax_cfl1_lader_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_Lax_cfl1_lader_pressure}
\caption{Riemann problem 3 (Lax). 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.14$ using the LADER-ENO method on mesh M1 ($\mathrm{CFL}_{c}=2.78$, $M\approx 0.94$).}
\label{fig:RP3_lader}
\end{figure}
The fourth Riemann problem, RP4, presents three strong discontinuities travelling to the right, originating from two colliding shock waves. Figures \ref{fig:RP4_o1}-\ref{fig:RP4_LBJ} show the solution obtained with the first and second order schemes. Note that the highly restrictive Barth and Jespersen limiter has been employed jointly with an artificial viscosity coefficient, $c_{\alpha}=5$, to maintain the stability of the scheme.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_4_cfl1_o1_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_4_cfl1_o1_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_4_cfl1_o1_pressure}
\caption{Riemann problem 4. 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.035$ using the first order method ($\mathrm{CFL}_{c}=0.42$, $c_{\alpha}=5$, $M\approx 1.97$).}
\label{fig:RP4_o1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_4_cfl1_lader_BJ_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_4_cfl1_lader_BJ_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_4_cfl1_lader_BJ_pressure}
\caption{Riemann problem 4. 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.035$ using the LADER-BJ method with reconstruction through the primitive variables instead of the conservative ones ($\mathrm{CFL}_{c}=0.42$, $c_{\alpha}=5$, $M\approx 1.97$).}
\label{fig:RP4_LBJ}
\end{figure}
RP5 is a severe test defined as a modification of the left half of the blast problem introduced in \cite{WC84}. It accounts for a left rarefaction wave, a right-travelling shock wave and a stationary contact discontinuity generated by an initial large pressure jump of order $10^{5}$ and a small velocity variation. The second order scheme has been run using two different limiter strategies. We observe that the minmod limiter, Figure \ref{fig:RP5_laderMM}, damps the oscillation appearing after the rarefaction wave in the velocity field more strongly than the ENO-based reconstruction, Figure \ref{fig:RP5_lader1}, which better captures the right shock. The results obtained with the first order scheme are reported in Figure \ref{fig:RP5_o1}. The robustness of the developed methodology and its capability to deal with slowly moving contact discontinuities at very high Mach numbers are clearly demonstrated.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_o1_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_o1_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_o1_pressure}
\caption{Riemann problem 5. 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.01$ using the first order scheme ($\mathrm{CFL}_{c}=2.7\cdot 10^{-3}$, $c_{\alpha}=2$, $M\approx 956.42$).}
\label{fig:RP5_o1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_lader_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_lader_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_lader_pressure}
\caption{Riemann problem 5. 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.01$ using the LADER-ENO method ($\mathrm{CFL}_{c}=2.7\cdot 10^{-3}$, $c_{\alpha}=2$, $M\approx 513.68$).}
\label{fig:RP5_lader1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_lader_mimnod_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_lader_mimnod_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_5_cfl1_lader_mimnod_pressure}
\caption{Riemann problem 5. 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.01$ using the LADER-minmod method ($\mathrm{CFL}_{c}=2.7\cdot 10^{-3}$, $c_{\alpha}=2$, $M\approx 691.58$).}
\label{fig:RP5_laderMM}
\end{figure}
The last Riemann problem considered, RP6, is characterised by two shock waves travelling in opposite directions. An excellent agreement with the exact solution is observed in Figures \ref{fig:RP6_o1}-\ref{fig:RP6_lader1}.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_6_cfl1_o1_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_6_cfl1_o1_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_6_cfl1_o1_pressure}
\caption{Riemann problem 6. 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.8$ using the first order method ($\mathrm{CFL}_{c}=9.35$, $c_{\alpha}=2$, $M\approx 7.75$).}
\label{fig:RP6_o1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_6_cfl1_lader_density}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_6_cfl1_lader_velocity}
\includegraphics[trim= 5 5 5 5,clip,width=0.325\linewidth]{RP_6_cfl1_lader_pressure}
\caption{Riemann problem 6. 1D cut through the numerical results along the line $y=0$ for $\rho$, $u$ and $p$ at $t_{\mathrm{end}}=0.8$ using the first order method ($\mathrm{CFL}_{c}=9.35$, $c_{\alpha}=2$, $M\approx 7.29$).}
\label{fig:RP6_lader1}
\end{figure}
\subsection{2D circular explosion} \label{sec:2DCE}
The circular explosion problem presented here is based on an initial radial solution given by the Sod shock tube
\begin{equation}
\rho^{0}\left(\mathbf{x}\right) = \left\lbrace \begin{array}{lr}
1 & \mathrm{ if } \; r \le 0.5,\\
0.125 & \mathrm{ if } \; r > 0.5,
\end{array}\right. \qquad
\mathbf{u}^{0} \left(\mathbf{x}\right) = 0, \qquad
p^{0} \left(\mathbf{x}\right) = \left\lbrace \begin{array}{lr}
1 & \mathrm{ if } \; r \le 0.5,\\
0.1 & \mathrm{ if } \; r > 0.5,
\end{array}\right.
\end{equation}
see \cite{Toro,TT05,DPRZ16}. We consider the computational domain $\Omega=[-1,1]\times[-1,1]$ and periodic boundary conditions everywhere. The simulation is run until time $t_{\mathrm{end}}=0.25$ on a primal triangular mesh of $85344$ elements.
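As a minimal sketch, the radial initial condition can be evaluated at the cell barycentres as follows (the sample points are arbitrary):
\begin{verbatim}
import numpy as np

def sod_radial_ic(x, y, r0=0.5):
    """Piecewise-constant radial initial condition of the circular explosion."""
    r = np.sqrt(x**2 + y**2)
    inside = r <= r0
    rho = np.where(inside, 1.0, 0.125)
    p   = np.where(inside, 1.0, 0.1)
    u = v = np.zeros_like(r)
    return rho, u, v, p

xc = np.array([-0.7, 0.0, 0.3])
yc = np.array([ 0.1, 0.2, -0.6])
print(sod_radial_ic(xc, yc))
\end{verbatim}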
To obtain a reference solution, a one-dimensional PDE in the radial direction, derived from the compressible Euler equations with appropriate geometric source terms, \cite{Toro}, is solved using a second order TVD scheme on a very fine mesh made of $10000$ elements. The results obtained with the first order scheme and the LADER-ENO methodology, Figures \ref{CE85_o1_t025}-\ref{CE85_ader_t025}, present a good agreement with the reference solution. \textcolor{black}{Figure \ref{CE85_comparative_t025} allows for a direct comparison of the solution obtained with both schemes along a 1D cut. The second order LADER method with ENO reconstruction provides a better approximation of the solution compared to the first order scheme, as expected.}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_o1_3Ddensity_HD}\hspace{0.05\linewidth}
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_density_o1}
\vspace{0.05\linewidth}
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_pressure_o1}\hspace{0.05\linewidth}
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_velocity_o1}
\caption{Circular explosion. The left top image corresponds to the 3D plot of the obtained $\rho$ at the final time whereas the 1D plots containing the reference solution (black continuous line), a 1D cut (blue squared line) and the scatter plot (red dots), correspond to the $\rho$, $p$, and $\left|\mathbf{u}\right|$ fields obtained using the first order scheme (\textcolor{black}{$c_{\alpha }=1$}).}
\label{CE85_o1_t025}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_lader_3Ddensity_HD}\hspace{0.05\linewidth}
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_density_lader}
\vspace{0.05\linewidth}
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_pressure_lader}\hspace{0.05\linewidth}
\includegraphics[trim= 10 5 5 5,clip,width=0.45\linewidth]{CE85_velocity_lader}
\caption{Circular explosion. The left top image corresponds to the 3D plot of the obtained $\rho$ at the final time whereas the 1D plots containing the reference solution (black continuous line), a 1D cut (blue squared line) and the scatter plot (red dots), correspond to the $\rho$, $p$, and $\left|\mathbf{u}\right|$ fields obtained using the LADER-ENO scheme \textcolor{black}{($c_{\alpha }=1$)}.}
\label{CE85_ader_t025}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 10 5 5 5,clip,width=0.33\linewidth]{CE85_density_comparative}\hfill
\includegraphics[trim= 10 5 5 5,clip,width=0.33\linewidth]{CE85_pressure_comparative}\hfill
\includegraphics[trim= 10 5 5 5,clip,width=0.33\linewidth]{CE85_velocity_comparative}
\caption{\textcolor{black}{Circular explosion. Comparison between the numerical solution obtained with the first order scheme (blue squares) and the LADER-ENO approximation (green circles).}}
\label{CE85_comparative_t025}
\end{figure}
\subsection{3D spherical explosion}
In this section, we study the behaviour of the method for a 3D spherical explosion benchmark based on the Sod problem.
The computational domain is defined to be the sphere of unit radius centered at the origin. Initial conditions read
\begin{equation}
\rho^{0}\left(\mathbf{x}\right) = \left\lbrace \begin{array}{lr}
1 & \mathrm{ if } \; r \le \frac{1}{2},\\[6pt]
0.125 & \mathrm{ if } \; r > \frac{1}{2},
\end{array}\right. \qquad
p^{0} \left(\mathbf{x}\right) = \left\lbrace \begin{array}{lr}
1 & \mathrm{ if } \; r \le \frac{1}{2},\\[6pt]
0.1 & \mathrm{ if } \; r > \frac{1}{2},
\end{array}\right. \qquad
\mathbf{u}^{0} \left(\mathbf{x}\right) = 0, \label{eq:IC_CE3D}
\end{equation}
with $r=\sqrt{x^{2}+y^{2}+z^{2}}$. Dirichlet boundary conditions are imposed and the domain is covered by $2280182$ tetrahedra.
The solution obtained using the LADER-ENO scheme with $\mathrm{CFL}=1$, \textcolor{black}{$c_{\alpha}=3$}, up to $t_{\mathrm{end}}=0.25$ is depicted in Figure \ref{fig:CE3D}. As a reference solution, we again run the 1D Euler code introduced in Section \ref{sec:2DCE}, updated with appropriate source terms to account for three-dimensional effects.
The agreement observed in the 1D cuts of density, velocity magnitude and pressure proves the capability of the method to handle three-dimensional problems.
|
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 5 0 5 0,clip,width=0.45\linewidth]{CE3D392_Sod_density3D}\hspace{0.05\linewidth}
\includegraphics[trim= 5 0 5 0,clip,width=0.45\linewidth]{CE3D392_Sod_density}
\vspace{0.05\linewidth}
\includegraphics[trim= 5 0 5 0,clip,width=0.45\linewidth]{CE3D392_Sod_pressure}\hspace{0.05\linewidth}
\includegraphics[trim= 5 0 5 0,clip,width=0.45\linewidth]{CE3D392_Sod_velocity}
\caption{LADER-ENO solution for the 3D spherical explosion test at final time. Surface mesh on the boundary and density contours on the interior surfaces obtained after taking away the first quadrant and
1D plots for density, velocity magnitude and pressure fields: 1D cut on $x\in\left[0,1\right], \, y=z=0$ (blue squares), scatter plot (red dots), reference solution (black line).}
\label{fig:CE3D}
\end{figure}
\subsection{First problem of Stokes}
To further analyse the behaviour of the developed method in the incompressible limit, we now consider the first problem of Stokes, \cite{SG16}.
The initial condition, defined in $\Omega=[-0.5,0.5]\times[-0.5,0.5]$, reads
\begin{equation}
\rho^{0}\left(\mathbf{x}\right) = 1,\qquad
p^{0} \left(\mathbf{x}\right) = \frac{1}{\gamma}, \qquad
{u}_{1}^{0} \left(\mathbf{x}\right) = 0, \qquad
{u}_{2}^{0} \left(\mathbf{x}\right) = \left\lbrace \begin{array}{lc}
-0.1 & \mathrm{ if } \; x \le 0,\\
0.1 & \mathrm{ if } \; x > 0
\end{array}\right.
\end{equation}
In the incompressible limit, this test case has an exact analytical solution for $u_{2}$ given by
\begin{equation}
{u}_{2} \left(\mathbf{x},t\right) = \frac{1}{10} \mathrm{erf}\left( \frac{x}{2\sqrt{\mu t}}\right).
\end{equation}
To complete the physical set up, we define $\gamma= c_{p} = 1.4$, $\lambda=0$, leading to \mbox{$M\approx 10^{-1}$}.
\textcolor{black}{Regarding boundary conditions, we set periodic boundary conditions in the $y$-direction, i.e., on the top and bottom boundaries, while the exact values of density and velocity are imposed on the left and right boundaries.}
Finally, three different simulations are run, differing in the value of the viscosity coefficient: $\mu= 10^{-2}$, $\mu=10^{-3}$, and $\mu=10^{-4}$. The simulations are run on a triangular primal mesh made of $1000$ elements up to time $t_{\mathrm{end}}=1$. The vertical velocity along $y=0$ is plotted in Figure~\ref{FSP_vvelocity} against the exact solution. We observe a good agreement between both curves for all three viscosities. Let us note that $\mu=10^{-4}$ is the only simulation run using the ENO reconstruction, employed to completely avoid the small bump that would otherwise arise after the discontinuity. In the other cases, such a reconstruction can be omitted due to the higher physical viscosity considered.
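For reference, the exact profile used in the comparison can be evaluated directly (a minimal sketch using SciPy; the sampling of the cut is arbitrary):
\begin{verbatim}
import numpy as np
from scipy.special import erf

def u2_exact(x, t, mu):
    """Exact vertical velocity of the first problem of Stokes (incompressible limit)."""
    return 0.1 * erf(x / (2.0 * np.sqrt(mu * t)))

x = np.linspace(-0.5, 0.5, 201)            # 1D cut along y = 0
for mu in (1e-2, 1e-3, 1e-4):
    profile = u2_exact(x, t=1.0, mu=mu)    # curve compared with the numerical solution
    print(mu, profile.min(), profile.max())
\end{verbatim}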
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.325\linewidth]{./FS100_LE_e4}
\includegraphics[width=0.325\linewidth]{./FS100_Lsl_e3}
\includegraphics[width=0.325\linewidth]{./FS100_Lsl_e2}
\caption{Comparison of the vertical velocity $u_{2}$ along the 1D cut $y=0$ obtained for the first Stokes problem using the LADER scheme against the exact solution at $t_{\mathrm{end}}=1$. $\mu=10^{-4}$ LADER-ENO (left), $\mu=10^{-3}$ LADER without limiters (center), $\mu=10^{-2}$ LADER without limiters (right).}
\label{FSP_vvelocity}
\end{figure}
\subsection{Viscous shock}
Here we analyse a steady viscous shock with shock Mach number $M_s=2$.
Considering the particular case Pr$=0.75$, with Pr the Prandtl number, it is possible to find an exact solution of the compressible Navier-Stokes equations, derived by Becker in 1923; see \cite{Becker1923,BonnetLuneau,GPRmodel} for all the details necessary to set up this test case.
The computational domain $\Omega=[-0.5,0.5] \times [0,0.1]$ is discretized with 12500 triangular elements
of characteristic mesh spacing $h=1/250$. The shock wave is centered at $x=0$.
The values of the fluid in front of the shock wave are given by $\rho_0 =1$, $u_0=-2$, $v_0=w_0=0$, and $p_0=1/\gamma$ so that the corresponding sound speed is $c_0 = 1$ and the fluid is moving into the shock from the right to the left at shock Mach number $M_s=2$.
The Reynolds number based on a unitary reference
length ($L=1$) and on the flow speed $u_0$ is given by $Re_s=\frac{\rho_0 \, c_0 \, M_s \, L }{\mu}$.
The fluid parameters are chosen as $\gamma = 1.4$, $c_v = 2.5$, $\mu=2 \cdot 10^{-2}$ and $\lambda = 9 \frac{1}{3} \cdot 10^{-2}$, hence the corresponding shock Reynolds number is $Re_s=100$.
The simulation with the new hybrid FV/FE scheme proposed in this paper is run until time $t_{\mathrm{end}}=0.025$, \textcolor{black}{setting $c_\alpha=3$.}
The comparison between the numerical solution and the exact solution is shown in Figure \ref{fig.vshock} for the density $\rho$, the velocity $u$, and the pressure $p$. For all quantities, one can note a very good agreement.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.32\textwidth]{./VS-rho.pdf}
\includegraphics[width=0.32\textwidth]{./VS-u.pdf}
\includegraphics[width=0.32\textwidth]{./VS-p.pdf}
\caption{Numerical solution for the steady viscous shock obtained with the hybrid FV/FE scheme and exact solution of the compressible Navier-Stokes equations, Pr$=0.75$, $M=2$ and $Re=100$ at time $t=0.025$.}
\label{fig.vshock}
\end{center}
\end{figure}
\subsection{Lid-driven cavity flow}
A classical test for incompressible flows is the lid-driven cavity benchmark, for which a well-known reference solution is available, \cite{GGS82}. Therefore, this test is an ideal candidate to assess the behaviour of the method in the incompressible limit. We define a square computational domain of unit length and set wall boundary conditions everywhere. In particular, we fix a purely horizontal velocity at the top boundary, $u_{1}=1,\, u_{2}=0$, and consider homogeneous no-slip boundary conditions on the bottom and lateral boundaries. As initial conditions, we consider unit density, $\rho=1$, pressure $p=10^4$, and a fluid at rest. The viscosity is set to $\mu=10^{-2}$, so that $Re=100$ and $M \approx 8 \cdot 10^{-3}$ are the characteristic Reynolds and Mach numbers of this test with respect to the lid velocity. \textcolor{black}{The artificial viscosity coefficient has been set to $c_{\alpha}=2$.}
In the left plot of Figure \ref{LDC_figure}, we show the Mach contour plot of the solution obtained with the LADER-ENO scheme overlapped by a sketch of the half dual elements employed.
The right plot reports the comparison between the approximated and the reference solution
for the horizontal and vertical velocities along the vertical and horizontal 1D cuts in the middle of the domain. An almost perfect match is observed.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 10 5 5 5,clip,width=0.44\linewidth]{./LDC_LEc0wbc_CNSPC_mach}\hspace{0.1\linewidth}
\includegraphics[trim= 10 5 5 5,clip,width=0.42\linewidth]{./LDC_LEc0wbc_CNSPC}
\caption{Solution of the lid-driven cavity test with $Re=10^{2}$. In the left subfigure the dual half grid considered is depicted over the Mach number contour plot. The right plot shows the obtained 1D velocity cuts along $x=0.5$ and $y=0.5$ (blue dashed line) and the reference solution (black circles) given by \cite{GGS82}.}
\label{LDC_figure}
\end{figure}
\subsection{Double shear layer}
In this section we apply the new hybrid finite volume / finite element method for all Mach number flows developed in this paper to the well-known double shear layer problem, see e.g. \cite{BCG89,TD15,Hybrid1}. The computational domain is given by $\Omega=[-1,1]^2$ and the initial condition reads
\begin{equation}
\rho^{0}\left(\mathbf{x}\right) = 1,\qquad
{u}_{1}^{0} \left(\mathbf{x}\right) = \left\lbrace \begin{array}{lc}
\tanh \left[\hat{\rho}(\hat{y}-\frac{1}{4})\right] & \mathrm{ if } \; \hat{y} \le \frac{1}{2},\\[6pt]
\tanh \left[\hat{\rho}(\frac{3}{4}-\hat{y})\right] & \mathrm{ if } \; \hat{y} > \frac{1}{2},
\end{array}\right.\quad
{u}_{2}^{0} \left(\mathbf{x}\right) = \delta \sin \left(2\pi \hat{x}\right), \quad
p^{0} \left(\mathbf{x}\right) = \frac{10^{5}}{\gamma},
\end{equation}
with the abbreviations $\hat{x}= \frac{x+1}{2}$ and $\hat{y}= \frac{y+1}{2}$. The remaining parameters of the setup of this test case are chosen as $\hat{\rho} = 30$, $\mu= 2\cdot 10^{-4}$, $\lambda =0$ and $\delta =0.05$, see also \cite{TD15,Hybrid1}. The characteristic Mach number of this test case is $M \approx 2 \cdot 10^{-3}$, hence we are again in the low Mach number regime. The domain is covered with $8192$ primal elements and the boundary conditions are periodic everywhere. In Figure~\ref{DSL64_Lnl}, we show contour plots of the vorticity at times $t\in\left\lbrace 0.8, 1.6, 2.4, 3.6\right\rbrace$.
Comparing our numerical solution with the one obtained in \cite{DPRZ16,TD15} we note a very good agreement, although the scheme presented in this paper is only second order accurate, while in \cite{DPRZ16,TD15} high order schemes have been employed.
\begin{figure}[!htbp]
\centering
\includegraphics[trim= 100 100 100 100,clip,width=0.4\linewidth]{DSL_Lsl_t08}\hspace{0.05\linewidth}
\includegraphics[trim= 100 100 100 100,clip,width=0.4\linewidth]{DSL_Lsl_t16}
\vspace{0.05\linewidth}
\includegraphics[trim= 100 100 100 100,clip,width=0.4\linewidth]{DSL_Lsl_t24}\hspace{0.05\linewidth}
\includegraphics[trim= 100 100 100 100,clip,width=0.4\linewidth]{DSL_Lsl_t36}
\caption{Contour plots of vorticity at times $t\in \left\lbrace 0.8,1.6,2.4,3.6 \right\rbrace $ (from top left to bottom right) for the double shear layer test problem. The second order LADER scheme has been employed for the discretization of nonlinear convective terms. }
\label{DSL64_Lnl}
\end{figure}
\subsection{Single Mach reflection problem}
Let us now consider the single Mach reflection problem that can be found in \cite{Toro} and for which experimental reference data are available in \cite{Toro} and \cite{albumfluidmotion} in the form of Schlieren images.
The test problem consists of a shock wave, initially located at $x=-0.04$, that travels to the
right at a shock Mach number of $M=1.7$, hitting a wedge that forms an angle of $\varphi=25^\circ$ with the $x$-axis.
The density, $\rho$, and pressure, $p$, ahead of the shock are set to $\rho_0=1$ and $p_0 = 1/\gamma_g$, respectively,
while for $x>-0.04$ we consider a fluid at rest. The post-shock values can be easily obtained
from the Rankine-Hugoniot relations of the inviscid compressible Euler equations.
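For completeness, a minimal sketch of this computation using the standard normal-shock relations (for a shock moving into a gas at rest) is given below; it is an illustration of the textbook formulas, not of the code actually used:
\begin{verbatim}
import numpy as np

def post_shock_state(rho0, p0, Ms, gamma=1.4):
    """State behind a shock of Mach number Ms moving into a gas at rest."""
    c0   = np.sqrt(gamma * p0 / rho0)                       # sound speed ahead of the shock
    rho1 = rho0 * (gamma + 1) * Ms**2 / ((gamma - 1) * Ms**2 + 2)
    p1   = p0 * (1 + 2 * gamma / (gamma + 1) * (Ms**2 - 1))
    u1   = 2 * c0 / (gamma + 1) * (Ms - 1 / Ms)             # post-shock velocity, lab frame
    return rho1, u1, p1

gamma = 1.4
print(post_shock_state(rho0=1.0, p0=1.0 / gamma, Ms=1.7))   # single Mach reflection setup
\end{verbatim}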
The computational domain is $\Omega = [0,3] \times [0,2]$, from which the $25^\circ$ wedge is subtracted.
It is discretized using 1237328 triangular elements of characteristic mesh spacing $h=0.003$. \textcolor{black}{For this test we set $c_\alpha=1$.}
The pressure field obtained for a simulation run until $t_{\mathrm{end}}=1.2$ is
depicted in Figure~\ref{fig.smr}.
The flow field obtained with the novel hybrid FV/FE scheme agrees well with the numerical and experimental reference solutions shown in \cite{Toro}.
The shock wave is properly resolved and located in the correct position at $x=2$.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.85\textwidth]{./SMR-p.png}
\end{center}
\caption{Pressure contours obtained at $t_{\mathrm{end}}=1.2$ for the single Mach reflection problem using the new all Mach number hybrid FV/FE scheme presented in this paper. The shock is in the correct location at $x=2$. }
\label{fig.smr}
\end{figure}
\subsection{Shock-wedge interaction problem}
\label{sec.shockwedge}
In this section, we consider a flow that involves the interaction of a mild shock wave
with a two-dimensional wedge, see also \cite{DumbserKaeser07,SolidBodies}.
Experimental reference data for this test are available in form of Schlieren photographs, see
\cite{albumfluidmotion,schardin}.
The computational domain is given by $\Omega = [-2,6]\times[-3,3]$, excluding a wedge
of length $L=1$ and height $H=1$ with its tip located at the origin. On all three edges
of the wedge, we impose inviscid wall boundary conditions, while the upper and lower boundaries
are periodic. On the left and on the right boundary, we impose the initial condition
as Dirichlet boundary condition.
The initial condition for a right-moving shock wave with shock Mach number $M_s=1.3$, initially located at $x=-1$,
is setup according to the Rankine-Hugoniot relations, see \cite{DumbserKaeser07}. The pre-shock state (for $x>-1$)
is given by $\rho_R = 1.4$, $\mathbf{u}_R=0$, and $p_R = 1.0$.
A triangular mesh with a characteristic mesh spacing of $h=1/100$ is employed, leading to a total of 1080342 triangles. \textcolor{black}{For this test, we set $c_\alpha=0.5$.} The pressure contours obtained with our hybrid FV/FE method are depicted in Figure \ref{fig.shockwedge} at
several times. The location and shape of the shock and of the vortices shed behind the wedge compare qualitatively
with those shown in \cite{DumbserKaeser07,SolidBodies,albumfluidmotion,schardin}.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{./Wedge-t15.png}
\includegraphics[width=0.6\textwidth]{./Wedge-t20.png}
\includegraphics[width=0.6\textwidth]{./Wedge-t25.png}
\end{center}
\caption{Shock-wedge interaction problem in 2D. Pressure contours
using the hybrid FV/FE method on an unstructured triangular
grid with mesh spacing $h=1/100$. Output times
from top to bottom: $t=1.5$, $t=2.0$, $t=2.5$.}
\label{fig.shockwedge}
\end{figure}
\subsection{Supersonic flow at $M=3$ over a circular blunt body}
This last numerical test problem deals with the supersonic flow over a circular cylinder at Mach number $M=3$. The computational domain is the part with $x\leq 0$ of a circle of radius $R=2$ centered at $\mathbf{x}_c=(0.5,0)$, from which a circular blunt body of radius $r_b=0.5$ centered at the origin is subtracted. The domain is discretized using
a triangular mesh of characteristic mesh spacing $h=1/200$ composed of 348964 triangles.
\textcolor{black}{For this test, we set $c_\alpha=3$.}
The initial condition is $\rho=1.4$, $\mathbf{u}=(3,0)$, $p=1$ in the entire computational domain. On the blunt body, inviscid wall boundary conditions are imposed. On the left inflow boundary, we impose the initial condition as Dirichlet boundary condition, while outflow is set on the right boundary. The computational results obtained with our hybrid FV/FE scheme are depicted in Figure \ref{fig.bluntbody} at time $t=5$. The typical bow shock forms in front of the blunt body.
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.5\textwidth]{./BluntBody2D.png}
\end{tabular}
\end{center}
\caption{Pressure contours for the $M=3$ flow over a circular blunt body at time $t=5$.
}
\label{fig.bluntbody}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this paper a novel asymptotic-preserving semi-implicit hybrid FV/FE algorithm has been proposed for the solution of all Mach number flows on staggered unstructured grids.
The initial semi-discretization in time of the Navier-Stokes equations allows a
partial decoupling of the calculation of the linear momentum and of the density
with respect to the solution of the pressure correction system. The first stage of the method
involves the computation of the new density and an intermediate approximation
of the linear momentum and total energy density, which account for the contribution
of convection, diffusion, and gravity terms in the related conservative equations.
Moreover, the pressure gradient at the previous time step is included so that
the velocities would need only to be corrected with the pressure difference once
it is computed. A local ADER methodology is employed to achieve a second order
scheme in space and time, where we benefit from the dual mesh structure to
approximate the gradients involved in the half-in-time reconstruction, hence reducing
the stencil with respect to classical ADER methods. This procedure yields a
good intermediate approximation of the linear momentum and total energy density
to be provided for the computation of the pressure unknown on the projection stage.
The splitting proposed following \cite{TV12,TD17} leads to an efficient numerical
method in which the sound speed does not enter the eigenvalue computation of
the transport-diffusion equations, which are approximated using an explicit scheme. Then,
the pressure system is solved using classical implicit continuous finite element methods.
Accordingly, the time step computed through the CFL condition is only limited by
the flow velocity, reducing the computational cost of the overall method.
A key point of the proposed algorithm is the Picard iteration procedure
that allows an iterative update of the linear momentum, enthalpy, and pressure
variables, avoiding the solution of a complex nonlinear system for the pressure.
Once the pressure difference between two consecutive time steps is computed,
the linear momentum and energy are corrected. The proposed methodology has been
carefully validated by comparison of the obtained results with available
analytical and numerical solutions. The numerical tests ranging from the
incompressible limit to supersonic flows show the capability of
the method to address complex flow phenomena.
In the future, we plan to extend the hybrid FV/FE methodology to the context of shallow
water equations, making use of the seminal ideas presented in
\cite{CasulliVOF,CasulliCheng1992, CasulliWalters2000,KramerStelling,TD14sw}.
Moreover, more complex PDE systems, including natural involution constraints,
like MHD equations, will be considered. To this end, the development of a
structure-preserving scheme verifying the divergence-free condition will be essential, \cite{Pow97,MOSSV00,Bal03,BDA14,DBTF19}.
\section*{Acknowledgements}
This work was financially supported by the Italian Ministry of Education, University
and Research (MIUR) in the framework of the PRIN 2017 project \textit{Innovative numerical methods for evolutionary partial differential equations and applications} and via the Departments of Excellence Initiative 2018--2022 attributed to DICAM of the University of Trento (grant L. 232/2016). Furthermore, LR and MEV have received funding by Spanish MCIU under project MTM2017-86459-R and by FEDER and Xunta de Galicia funds under the ED431C 2017/60 project. SB was also funded by INdAM via a GNCS grant for young researchers and by a \textit{UniTN starting grant} of the University of Trento. SB, LR and MD are members of the GNCS group of INdAM.
\bibliographystyle{plain}
|
\section{Introduction}
Research on learning from data distributed over multiple computational units (machines, users, devices) has grown in recent years, as data is commonly generated by multiple users, such as smart devices and wireless sensors.
Communication costs and bandwidth are often bottlenecks on the performance of learning algorithms in these settings \cite{Garg2014OnCC,pmlr-v23-balcan12a,Daum2012EfficientPF}.
The bottlenecks become even more severe in federated learning and analytics \cite{Kairouz2021AdvancesAO},
where many users coordinate with a server to learn a central model, while communication is expensive and operates at low rates.
This paper considers learning high-dimensional discrete distributions from user data in the distributed setting.
In this setting, several communication-efficient methods have been proposed, and their optimality under communication constraints has been established under various models \cite{Han2018DistributedSE,Barnes2019LowerBF,Han2021GeometricLB,Acharya2020InferenceUI,pmlr-v132-acharya21b,chen2021breaking,Chen2021PointwiseBF}.
However, the key challenge of \emph{heterogeneity}, i.e., that users' distributions can differ, is rarely considered.
Heterogeneity is common, as users inevitably have
unique characteristics \cite{NEURIPS2020_222afbe0}.
Meanwhile, heterogeneity can cause a significant performance drop for learning algorithms designed only for i.i.d data \cite{McMahan2017CommunicationEfficientLO,Li2020OnTC,Dobriban2018DistributedLR}.
To use all the data, one needs to learn some central structure, transferable to all individual users.
Then one may locally learn, or fine-tune, some unique components for each user \cite{Du2021FewShotLV,Tripuraneni2021ProvableMO,Collins2021ExploitingSR,Xu2021LearningAB}.
To study this paradigm, we first need to introduce a suitable model of heterogeneity.
We consider, as an example, the heterogeneous frequencies of words across different texts, e.g., news articles, books, plays (tragedies and comedies), viewed as users.
Most words appear with nearly the same probabilities in different texts;
however, a few can have very different probabilities, such as ``sorrowful'' being common in tragedies and ``convivial'' being common in comedies.
Motivated by this, we formulate a model of sparse heterogeneity.
Specifically, suppose that the discrete distributions of all users differ from an underlying central distribution in at most $s \ge 0$ entries, where $s$ is much smaller than the dimension $d$.
Sparse heterogeneity is relevant to applications such as recommendation systems \cite{Hu2019HERSMI,Qian2015StructuredSR,Liu2019RecommenderSW,Bastani2021PredictingWP} and medical risk scoring \cite{Subbaswamy2019FromDT,QuioneroCandela2009DatasetSI,Mullainathan2017DoesML}.
However, given data generated by multiple distributions with sparse heterogeneity, previous works \cite{Han2018DistributedSE,Acharya2020InferenceUI,pmlr-v132-acharya21b,Chen2021PointwiseBF} either do not use all the data,
or suffer from bias due to heterogeneity that does not vanish as the sample size increases.
Here we propose a novel \underline{\textbf{s}}parse \underline{\textbf{h}}eterogeneity-\underline{\textbf{i}}nspired collaboration and \underline{\textbf{f}}ine-\underline{\textbf{t}}uning method (SHIFT) where we first collaboratively learn the central distribution,
and then fine-tune the central estimate to individual distributions.
Our method makes full use of heterogeneous data, leading to a significant improvement in error rates compared to prior methods. See Table \ref{tab:comparsion} for an overview, explained in detail later.
\begin{table}[t]
\centering
\caption{\small Estimation error $\mathbb{E}[\|\widehat{\mathbf{p}}^t-\mathbf{p}^t\|_2^2]$ of various methods when $n$ is sufficiently large:
$\mathbf{p}^t$ is the test distribution,
$\widehat{\mathbf{p}}^t$ is the estimator,
$\boldsymbol{\delta}^t= T^{-1}\sum_{t^\prime \in[T]}\mathbf{p}^{t^\prime}-\mathbf{p}^t$ is a non-vanishing measure of heterogeneity.
See Section \ref{cont} for other notations.
Constants and logarithmic factors are omitted for clarity.
The ``data usage'' column indicates whether the estimate is obtained for each cluster separately or by pooling data. }
\label{tab:comparsion}
\begin{tabular}{lccc}
\toprule
Method & Estimation Error & Data Usage & Bound Type\\
\midrule
Unif. Group./Hash. \cite{{Han2018DistributedSE}} & $O\left(\frac{d}{2^b n}\right)$ & Separate & Upper \\
Unif. Group./Hash. \cite{{Han2018DistributedSE}} & $O\left(\|\boldsymbol{\delta}^t\|_2^2+\frac{d}{2^bTn}\right)$ & Pool & Upper \\
Localize-then-Refine$^*$ \cite{Chen2021PointwiseBF} & ${O}\left(\frac{\|\mathbf{p}^t\|_{1/2}}{2^b n}
\right)$ & Separate & Upper \\
Localize-then-Refine$^*$ \cite{Chen2021PointwiseBF} & ${O}\left(\|\boldsymbol{\delta}^t\|_2^2+\frac{\|\frac{1}{T}\sum_{t^\prime \in[T]}\mathbf{p}^{t^\prime}\|_{1/2}}{2^bT n}\right)$ & Pool & Upper\\
\midrule
SHIFT (Theorem \ref{thm:median-collab-final}) & $\tilde{O}\left(\frac{\max\{2^b,s\}}{ 2^b n}+\frac{d}{2^b Tn}\right)$ & $-$ & Upper \\
SHIFT (Theorem \ref{thm:collab-lower-bound}) & ${\Omega}\left(\frac{\max\{2^b,s\}}{2^b n}+\frac{d}{2^b Tn}\right)$ & $-$ & Lower \\
\bottomrule
\multicolumn{4}{l}{$^*$\,\footnotesize{This method \cite{Chen2021PointwiseBF} requires interactive communication protocols, while other methods are non-interactive.}}
\end{tabular}
\end{table}
\subsection{Contributions}\label{cont}
We consider the problem of learning $d$-dimensional distributions with $s$-sparse heterogeneity.
We assume there are $T$ clusters of user datapoints,
and allow each datapoint to be transmitted in a message with at most $b$ bits of information to the server.
Our setting embraces heterogeneous data and thus is a significant generalization of the models from \cite{Han2018DistributedSE,Barnes2019LowerBF,Han2021GeometricLB,Acharya2020InferenceUI}.
Our technical contributions are as follows:
\begin{itemize}
\item We propose the SHIFT method to learn heterogeneous distributions with collaboration and tuning, in a sample-efficient manner.
Our method can, in principle, be used with an arbitrary robust estimate of the probability of each entry/coordinate.
When entry-wise median and trimmed mean are used, we provide upper bounds on the estimation error of individual distributions in the $\ell_2$ and $\ell_1$ norms.
We show an improvement in sample complexity by a factor of (the order of) $\min\{T,d/\max\{s,2^b\}\}$ compared to previous works, demonstrating the benefit of collaboration (large $T$) and sparsity (small $s$) despite communication constraints and heterogeneity (a toy sketch of this collaborate-then-fine-tune idea is given after this list).
\item To justify the optimality of our method, we prove minimax lower bounds on the estimation errors of individual distributions in the $\ell_2$ and $\ell_1$ norms, holding for all, possibly interactive, learning methods.
These lower bounds, combined with our upper bounds, imply that our median-based method is minimax optimal.
\item We support our method with experiments on both synthetic and empirical datasets, showing a significant improvement over previous methods.
\end{itemize}
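The following toy sketch conveys the collaborate-then-fine-tune idea on raw, uncompressed counts. It is a schematic interpretation only: the entry-selection rule and the absence of the $b$-bit encoding are simplifying assumptions, so it should not be read as the actual SHIFT estimator analysed later in the paper.
\begin{verbatim}
import numpy as np

def shift_toy(counts, s):
    """Toy collaborate-then-fine-tune estimator on per-cluster counts.

    counts: (T, d) array of category counts per cluster.
    Returns a (T, d) array of per-cluster estimates.
    """
    T, d = counts.shape
    local = counts / counts.sum(axis=1, keepdims=True)   # per-cluster empirical estimates
    central = np.median(local, axis=0)                   # robust entry-wise central estimate
    est = np.tile(central, (T, 1))
    for t in range(T):
        # fine-tune: keep the local estimate on the s entries deviating most from the centre
        idx = np.argsort(-np.abs(local[t] - central))[:s]
        est[t, idx] = local[t, idx]
        est[t] = np.clip(est[t], 0.0, None)
        est[t] /= est[t].sum()                           # project back onto the simplex
    return est
\end{verbatim}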
\subsection{Related Works}\label{sec:related-work}
\paragraph{Learning with Heterogeneity.}
Learning with heterogeneity is commonly found in the broader context of multi-task learning \cite{RCaruana,Thrun1998LearningTL,Baxter2000AMO} and federated learning \cite{Achituve2021PersonalizedFL,Shamsian2021PersonalizedFL,Grimberg2021OptimalMA}, where a central model or representation is learned from multiple heterogeneous datasets.
These central representations can be useful for few-shot learning, i.e., for new problems with a small sample size \cite{Sun2017RevisitingUE,Goyal2019ScalingAB} due to their ability to adapt to new tasks efficiently.
In heterogeneous linear regression, \cite{Tripuraneni2021ProvableMO,Du2021FewShotLV} show improved sample complexities by assuming a low dimensional central representation, compared to the i.i.d. setting \cite{Grimberg2021OptimalMA,Maurer2016TheBO}.
Related results are proved in \cite{Collins2021ExploitingSR} for personalized federated learning.
\cite{Xu2021LearningAB} study a bandit problem where the unknown parameter in each dataset equals a global parameter plus a sparse instance-specific term. We study a different setting, learning distributions with sparse heterogeneity under communication constraints.
\paragraph{Estimating Distributions under Communication Constraints. }
Estimating discrete distributions has a rich literature \cite{lehmann2006theory,agresti2003categorical}. Under communication constraints, \cite{Han2018DistributedSE,Barnes2019LowerBF,Han2021GeometricLB,Acharya2020InferenceUI} consider the non-interactive scenario, and establish the minimax optimal rates, in terms of data dimension and communication budget, via potentially shared randomness, when all users' data is homogeneous.
The optimality of general
interactive (blackboard) protocols is established by \cite{Acharya2020GeneralLB}.
A few works study the estimation of sparse distributions.
In particular, \cite{pmlr-v132-acharya21b} consider $s$-sparse distributions and establish minimax optimal rates under communication and privacy constraints, which are further improved by localization strategies in \cite{chen2021breaking}.
Complementary to minimax rates, \cite{Chen2021PointwiseBF} provides pointwise rates, governed by the half-norm of the distribution to be learned, instead of its dimension. Our setting embraces heterogeneous data, and thus is a generalization of the one studied in the above works.
\paragraph{Robust Estimation \& Learning.} Robust statistics and learning study algorithms resilient to unknown data corruption \cite{huber1981robust,hampel2011robust, Anscombe1960RejectionOO,Tukey1960ASO,Huber1964RobustEO}.
The median-of-means method \cite{Lerasle2011ROBUSTEM,Minsker2015GeometricMA,Minsker2019DistributedSE} partitions the data into subsets, computes an estimate (such as the mean) from each, and takes their median.
Similarly, some works study robustness from the optimization perspective, proposing to robustly aggregate gradients of the loss functions \cite{Su2016FaultTolerantMO,Su2016NonBayesianLI,Blanchard2017ByzantineTolerantML,Yin2018ByzantineRobustDL}.
We adapt some analysis techniques from \cite{Minsker2019DistributedSE,Yin2018ByzantineRobustDL} to our significantly different setting of estimation with heterogeneity and communication constraints.
\subsection{Notations}
Throughout the paper, for an integer $d\geq 1$, we write $[d]$ for both $\{1,\dots,d\}$ and $\{e_1,\dots,e_d\}\subseteq \mathbb{R}^d$, where $e_k$ is the $k$-th canonical basis vector of $\mathbb{R}^d$. For a vector $\mathbf{v}\in\mathbb{R}^d$, we refer to the entries of $\mathbf{v}$ by both $[\mathbf{v}]_1,\dots,[\mathbf{v}]_d$ and $v_1,\dots,v_d$.
We denote
$\|\mathbf{v}\|_p = (\sum_{k\in[d]} |v_k|^p)^\frac{1}{p}$ for all $p>0$ with $\|\mathbf{v}\|_0$ defined additionally as the number of non-zero entries.
We let $\mathcal{P}_d:= \{\mathbf{p} = (p_1,\dots, p_d) \in [0, 1]^d: p_1 + \cdots + p_d = 1\}$ be the simplex of all $d$-dimensional discrete probability distributions. For $\mathbf{p}\in\mathcal{P}_d$, we denote by $\mathbb{B}_s(\mathbf{p})$ the $s$-distinct neighborhood $\{\mathbf{p}^\prime\in\mathcal{P}_d:\|\mathbf{p}^\prime-\mathbf{p}\|_0\leq s\}$.
For a random variable $X$, we denote $n$ i.i.d. copies of $X$ by $X^{[n]}$.
Given any index set $\mathcal{I}$, we write $|\mathcal{I}|$ for its cardinality and denote by $[\mathbf{v}]_{\mathcal{I}}$ the sub-vector $([\mathbf{v}]_k)_{k\in\mathcal{I}}$ indexed by $\mathcal{I}$.
We use the Bachmann-Landau asymptotic notations $\Omega(\cdot)$, $\Theta(\cdot)$, $O(\cdot)$ to hide constant factors, and use $\tilde{\Omega}(\cdot)$, $\tilde{O}(\cdot)$ to also hide logarithmic factors.
We denote the categorical distribution with class probability vector $\mathbf{p}\in\mathcal{P}_d$ by $\mathrm{Cat}(\mathbf{p})$. In our algorithm to be introduced shortly, we use $\widecheck{\cdot}$ and $\widehat{\cdot}$ to indicate the intermediate estimate and the final estimate, respectively.
\section{Problem Setup}\label{sec:setup}
We consider the problem of collaboratively learning distributions defined according to the following model of heterogeneity (see Figure~\ref{fig:problem-structure} for an illustration).
There are $T \ge 1$ clusters $\{\mathcal{C}^t\triangleq (X^{t,j})_{j\in[n]}:t\in[T]\}$ of user datapoints, each of which contains $n$ i.i.d. local datapoints.
Each datapoint $X^{t,j}$ is in a one-hot format, i.e., $X^{t,j} \in \{e_1,e_2,\ldots,e_d\}$, and
follows the categorical distribution $\mathrm{Cat}(\mathbf{p}^t)$ where $\mathbf{p}^t\in\mathcal{P}_d$ is unknown.
Thus, user datapoints in the same cluster $\mathcal{C}^t$ have an identical distribution $\mathbf{p}^t$,
while the distribution $\mathbf{p}^t$ can vary, i.e., be heterogeneous, across clusters $t\in[T]$.
The datapoint $X^{t,j}$ is encoded by its user into a message $Y^{t,j}$, and then transmitted to a central server. We assume that the message sent by each datapoint is encoded into no more than $b$ bits. We also assume $b$ is significantly smaller than $\log_2 d$ so that the communication is efficient.
The goal here is to collaboratively learn the distributions $\mathbf{p}^t$ from the collection of messages $\{Y^{t^\prime,[n]}\triangleq (Y^{t^\prime,j})_{j\in[n]}:t^\prime\in[T]\} $ despite heterogeneity. More precisely, we aim to design per-cluster estimators $\widehat{\mathbf{p}}^t$, $t\in[T]$, with $\widehat{\mathbf{p}}^t: \{Y^{t^\prime,[n]}:t^\prime\in[T]\} \to \mathcal{P}_d$ to minimize the $\ell_2$ errors
\begin{equation*}
\mathbb{E}[\|\widehat{\mathbf{p}}^t-{\mathbf{p}}^t\|_2^2],\quad \text{for all $t\in[T]$}.
\end{equation*}
We also study the widely-used $\ell_1$ error metric (in addition to the $\ell_2$ metric).
When $T=1$, i.e., all user datapoints are homogeneous and there is a single distribution to learn, the problem reduces to the one studied by \cite{Han2018DistributedSE,Barnes2019LowerBF,Han2021GeometricLB,Acharya2020InferenceUI,Acharya2020GeneralLB}.
\begin{figure}
\centering
\includegraphics[width = 0.5
\textwidth]{images/structure-2.png}
\caption{\small Learning distributions with heterogeneity and communication constraints.}
\label{fig:problem-structure}
\end{figure}
\paragraph{Model of Heterogeneity. }
In heterogeneous settings,
collaboration among the users is most beneficial
if the local distributions are related.
We model this by
assuming that the local distributions are sparse perturbations of an unknown
\emph{central distribution} $\mathbf{p}^\star\in \mathcal{P}_d$.
The distribution $\mathbf{p}^t$ of each cluster $t$ differs from $\mathbf{p}^\star$ in at most $s \ge 0$ entries:
\begin{equation}\label{eqn:sparse-hetero}
\|\mathbf{p}^t-\mathbf{p}^\star\|_0\leq s,\quad \forall\,t\in[T].
\end{equation}
The central distribution $\mathbf{p}^\star$ can be viewed as the central structure across heterogeneous clusters of datapoints.
The level of heterogeneity is controlled by the parameter $s$.
When $s$ is much smaller than $d$, the local distributions differ from the center in a small number of entries.
While motivated by word frequencies of different texts, our model of sparse heterogeneity is also relevant for recommendation systems, where the high-dimensional item-preference vectors of users can vary sparsely \cite{Hu2019HERSMI,Qian2015StructuredSR,Liu2019RecommenderSW,Bastani2021PredictingWP}; and for medical risk scoring, where hospitals can have similar characteristics, with a few systematic differences in diagnosis behavior, healthcare utilization, etc. \cite{Subbaswamy2019FromDT,QuioneroCandela2009DatasetSI,Mullainathan2017DoesML}.
\section{Algorithm}\label{sec:algorithm}
We now introduce our method for leveraging heterogeneous data to improve per-cluster estimation accuracy. We first discuss a hashing-based method to handle the communication constraint.
Since the communication between each user datapoint and the server is restricted to at most $b>0$ bits (where we may have $2^b\ll d$),
the datapoint $X^{t,j}$ needs to be encoded by an encoding function $W^{t,j}:\mathcal{X}\triangleq [d]\rightarrow \mathcal{Y}$.
The $b$-bit constraint enforces $|\mathcal{Y}|\leq 2^b$.
Then the encoded message $Y^{t,j}:=W^{t,j}(X^{t,j})$ is sent to the server, where it is decoded and used.
There are relatively sophisticated communication protocols under which the design of encoding functions can be \emph{interactive} \cite{Chen2021PointwiseBF,chen2021breaking,Acharya2020GeneralLB}, i.e., depend on previously sent messages.
Here we adopt a \emph{non-interactive} encoding-decoding scheme,
based on uniform hashing \cite{Acharya2020EstimatingSD,chen2021breaking} where $W^{t,j}$ depends only on $X^{t,j}$ and is independent of other messages.
Specifically, each datapoint $X^{t,j}$ is encoded via an independent random hash function $h^{t,j}:[d]\rightarrow [2^b]$.
Upon receiving all messages, the server counts the empirical frequencies of all symbols, leading to hashed estimates $\widecheck{\mathbf{b}}^t$.
The communication scheme based on uniform hashing is summarized below.
\begin{align*}
&{(\mathrm{Encoding}):\;}\text{Send the message $Y^{t,j}=h^{t,j}(X^{t,j})$ encoded by a hash function $h^{t,j}:[d]\rightarrow[2^b]$};\\
&{(\mathrm{Decoding}):\;}\text{Count $N_k^t(Y^{t,[n]})=|\{j\in [n]:h^{t,j}(k)=Y^{t,j}\}|$ and return $[\widecheck{\mathbf{b}}^t]_k=N_k^t/n$}.
\end{align*}
One can readily verify that $\mathbb{E}[\widecheck{\mathbf{b}}^t]=[(2^b-1)\mathbf{p}^t+1]/2^b\triangleq \mathbf{b}^t$;
and thus the hashed estimate $\widecheck{\mathbf{b}}^t$ is biased for $\mathbf{p}^t$. We also write $\mathbf{b}^\star=[(2^b-1)\mathbf{p}^\star+1]/2^b$ for the mean of a hashed datapoint sampled from the central distribution.
More details on the hashed estimator $\widecheck{\mathbf{b}}^t$ are given in Appendix \ref{app:uniform-hash}.
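For concreteness, the following Python sketch simulates the encoding-decoding scheme above; the function names, the explicit length-$d$ hash tables, and the use of \texttt{numpy} are illustrative assumptions rather than part of the formal protocol.
\begin{verbatim}
import numpy as np

def hash_encode(x_idx, d, b, rng):
    # Each user draws an independent uniform hash h : [d] -> [2^b]
    # and sends only the b-bit message y = h(x).
    h = rng.integers(0, 2 ** b, size=d)
    return h, h[x_idx]

def hashed_estimate(x_indices, d, b, rng):
    # Server-side decoding: for each symbol k, count the fraction of users
    # whose message coincides with their own hash of k, i.e. [b_check]_k = N_k / n.
    n = len(x_indices)
    counts = np.zeros(d)
    for x_idx in x_indices:
        h, y = hash_encode(x_idx, d, b, rng)
        counts += (h == y)
    return counts / n
\end{verbatim}
One can check numerically that the mean of \texttt{hashed\_estimate} matches $\mathbf{b}^t=[(2^b-1)\mathbf{p}^t+1]/2^b$; in practice, each hash function would be generated from a shared pseudo-random seed rather than materialized as a length-$d$ table.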
\subsection{The SHIFT Method}
We now introduce the SHIFT method, which consists of two stages: \emph{collaborative learning} and \emph{fine-tuning}.
The first stage estimates the central hashed distribution $\mathbf{b}^\star$ using all hashed estimates $\{\widecheck{\mathbf{b}}^t:t\in[T]\}$.
This is achieved via methods from robust statistics such as the median or trimmed mean.
The key insight here is that, since the heterogeneity is sparse, for each entry where most individual distributions match the central one, most of the datapoints used to estimate that entry are sampled according to the corresponding entry of the central distribution.
Hence, to estimate those entries of the central distribution, we can treat the datapoints generated by heterogeneous users as outliers, and leverage robust statistical methods to mitigate their influence.
\begin{algorithm}[t]
\caption{SHIFT: Sparse Heterogeneity Inspired collaboration
and Fine-Tuning}\label{alg:meta-method}
\begin{algorithmic}
\STATE \textbf{input:} individual hashed estimators $\widecheck{\mathbf{b}}^1,\dots,\widecheck{\mathbf{b}}^T$, threshold parameter $\alpha$
\vspace{1mm}
\STATE $\triangleright$ Stage I: Collaborative Learning
\vspace{1mm}
\STATE Estimate $\mathbf{b}^\star$ via robust statistical methods: $\widecheck{\mathbf{b}}^\star\,\leftarrow\,\mathrm{robust\_estimate}(\{\widecheck{\mathbf{b}}^t:t\in[T]\})$
\vspace{1mm}
\STATE $\triangleright$ Stage II: Fine-Tuning
\vspace{1mm}
\FOR{$k=1,\dots,d$}
\FOR{$t = 1,\dots,T$}
\STATE $[\widehat{\mathbf{b}}^t]_k\,\leftarrow\,[\widecheck{\mathbf{b}}^\star]_k$\textbf{\; if \;}$|[\widecheck{\mathbf{b}}^\star]_k-[\widecheck{\mathbf{b}}^t]_k|\leq \sqrt{\frac{\alpha[\widecheck{\mathbf{b}}^t]_k}{n}}$\textbf{\; else \;}$[\widecheck{\mathbf{b}}^t]_k$
\STATE $[\widehat{\mathbf{p}}^t]_k\,\leftarrow \,\mathrm{Proj}_{[0,1]}(\frac{2^b [\widehat{\mathbf{b}}^t]_k-1}{2^b-1})$
\ENDFOR
\ENDFOR
\STATE \textbf{output:} estimates $\widehat{\mathbf{p}}^1,\dots, \widehat{\mathbf{p}}^T$
\end{algorithmic}
\end{algorithm}
In the second stage---fine-tuning---we detect mismatched entries between individual hashed estimates $\widecheck{\mathbf{b}}^t$ and the central estimate $\widecheck{\mathbf{b}}^\star$.
Recall that the central and individual distributions differ in only a few entries.
For entries $k\in[d]$ such that $|[\widecheck{\mathbf{b}}^\star]_k-[\widecheck{\mathbf{b}}^t]_k|$ is below
$({\alpha[\widecheck{\mathbf{b}}^t]_k}/{n})^{1/2}$ for some threshold parameter $\alpha$, as in Algorithm \ref{alg:meta-method}, we may expect that $p^t_k=p_k^\star$.
As a result, we expect the central estimate $[\widecheck{\mathbf{b}}^\star]_k$ of $b^\star_k$ to be more accurate, as it is learned collaboratively using a larger sample size.
Thus, we assign $[\widecheck{\mathbf{b}}^\star]_k$ as the final estimate $[\widehat{\mathbf{b}}^t]_k$ of $b^t_k$.
On the other hand, for the entries where the central and individual distributions differ, i.e., $p^t_k\neq p_k^\star$, the threshold is more likely to be exceeded.
In this case,
we keep the individual estimate $[\widecheck{\mathbf{b}}^t]_k$ as $[\widehat{\mathbf{b}}^t]_k$.
Finally, since the hashed distributions $\mathbf{b}^t$ are biased for $\mathbf{p}^t$, we debias them to obtain the final estimates of $\mathbf{p}^t$, where $\mathrm{Proj}_{[0,1]}(\cdot)$ in Algorithm \ref{alg:meta-method} indicates truncating the input to the $[0,1]$ interval.
Our method does not require sample splitting, despite using two stages, leading to increased sample-efficiency.
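For concreteness, a minimal \texttt{numpy} sketch of Algorithm \ref{alg:meta-method} with the entry-wise median as the robust estimate is given below; the array layout and the function signature are our own illustrative choices.
\begin{verbatim}
import numpy as np

def shift(b_check, n, b_bits, alpha):
    # b_check: (T, d) array of individual hashed estimates; n: per-cluster sample size.
    # Stage I: collaborative learning of the central hashed distribution.
    b_star = np.median(b_check, axis=0)
    # Stage II: fine-tuning. Keep the central estimate at entries where it is
    # within the threshold of the individual estimate; otherwise keep the
    # individual estimate.
    keep_central = np.abs(b_star - b_check) <= np.sqrt(alpha * b_check / n)
    b_hat = np.where(keep_central, b_star, b_check)
    # Debias the hashed estimates and project entry-wise onto [0, 1].
    p_hat = (2 ** b_bits * b_hat - 1) / (2 ** b_bits - 1)
    return np.clip(p_hat, 0.0, 1.0)   # (T, d) array of final estimates
\end{verbatim}
The trimmed-mean variant is obtained by replacing \texttt{np.median} with an entry-wise trimmed mean.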
\paragraph{Knowledge Transfer to New Clusters. } The collaboratively learned central distribution from Algorithm \ref{alg:meta-method} is adaptable to new clusters, which possibly only have a few datapoints.
This can be particularly beneficial for sample efficiency, because most entries of the target distribution are well-estimated through collaborative learning.
One can transfer those entries, and it suffices to estimate the few remaining entries, instead of the whole distribution.
See Theorem \ref{thm:transfer} for the details.
The knowledge transfer utility further motivates the importance of collaborative learning.
\subsection{Median-Based SHIFT}
In this section, we provide upper bounds on the error for the median-based SHIFT method, where $\mathrm{robust\_estimate}(\{\widecheck{\mathbf{b}}^t:t\in[T]\})$ in Algorithm \ref{alg:meta-method} is the entry-wise median.
Specifically, we let
\begin{equation*}
[\widecheck{\mathbf{b}}^\star]_k =\mathrm{median}\big(\{[\widecheck{\mathbf{b}}^t]_k:t\in[T]\}\big),\quad \text{for each $k\in[d]$}.
\end{equation*}
When there is no ambiguity, we write $\widecheck{\mathbf{b}}^\star =\mathrm{median}\big(\{\widecheck{\mathbf{b}}^t\}_{t\in[T]}\big)$.
We also provide results for the trimmed-mean-based SHIFT method, see Appendix \ref{app:trimmed-mean}.
By setting the threshold parameter $\alpha$ in Algorithm \ref{alg:meta-method} as $\alpha =\Theta(\ln(n))$, we prove the following upper bounds on the final individual $\ell_2$ estimation errors.
The results for the $\ell_1$ error are in Appendix \ref{app:median}.
\begin{theorem}\label{thm:median-collab-final}
Suppose $n=\Omega( 2^b\ln(n))$ and $\alpha =\Theta(\ln(n))$\footnote{To be precise, we require $\alpha=O(\ln(n))$ and $\alpha \geq c\ln(n)$ for some absolute constant $c$. The analogous statement applies in Theorem \ref{thm:transfer}.}. Then, for the median-based SHIFT method, for any $t\in[T]$,
\begin{equation*}\label{eqn:median-final-knowledge}
\setlength{\abovedisplayskip}{3pt}\setlength{\belowdisplayskip}{1pt}
\mathbb{E}\left[\|\widehat{\mathbf{p}}^t-\mathbf{p}^t\|_2^2\right]=\tilde{O}\left(\frac{\max\{2^b,s\}}{2^bn}+\frac{d}{2^bTn}+\frac{d}{n^2}\right).
\end{equation*}
\end{theorem}
When $n=\Omega( 2^b\max\{T,\ln(n)\})$, the rate further becomes
\begin{equation}\label{eqn:ghoqenfq}
\setlength{\abovedisplayskip}{3pt}\setlength{\belowdisplayskip}{3pt}
\mathbb{E}\left[\|\widehat{\mathbf{p}}^t-\mathbf{p}^t\|_2^2\right]=\tilde{O}\left(\frac{\max\{2^b,s\}}{2^bn}+\frac{d}{2^bTn}\right).
\end{equation}
The upper bound in \eqref{eqn:ghoqenfq} consists of two terms. The first term---$\max\{2^b,s\}/(2^bn)$---is independent of the dimension $d$, and is smaller than the rate $d/(2^bn)$ obtained by the minimax optimal method using only homogeneous datapoints \cite{Han2018DistributedSE} by a factor ${d}/{\max\{2^b,s\}}$.
Thus, it brings a significant benefit under sparse heterogeneity and reasonable communication restrictions, i.e., when $\max\{2^b,s\}\ll d$.
Meanwhile, the second, dimension-dependent, term $d/(2^bTn)$ is $T$ times smaller than $d/(2^bn)$, since it depends on the \emph{total sample-size} $Tn$ used collaboratively, despite heterogeneity. Therefore, our method shows a factor of $\min\{T,{d}/{\max\{2^b,s\}}\}$ improvement in sample efficiency, compared to previous work designed for homogeneous datapoints.
For completeness, we also consider a heuristic application of estimators from prior works \cite{Han2018DistributedSE,Chen2021PointwiseBF}, in which datapoints from all clusters are pooled to learn a global distribution $T^{-1}\sum_{t\in[T]}\mathbf{p}^t$, which is then used by each cluster.
While this uses all datapoints, it inevitably introduces a non-vanishing bias $\boldsymbol{\delta}^t=\mathbf{p}^t-T^{-1}\sum_{t^\prime\in[T]}\mathbf{p}^{t^\prime}$ in estimating individual distributions, and can behave poorly when the bias is large. See Table \ref{tab:comparsion} for more details.
Finally, we discuss our results on knowledge transfer. The central estimator $\widecheck{\mathbf{b}}^\star$ is adaptable to a new cluster $\mathcal{C}^{T+1}$ of size $\tilde{n}$ in the following way.
We adjust the fine-tuning procedure in Algorithm \ref{alg:meta-method} to $[\widehat{\mathbf{b}}^{T+1}]_k\,\leftarrow\,[\widecheck{\mathbf{b}}^\star]_k$
if
$|[\widecheck{\mathbf{b}}^\star]_k-[\widecheck{\mathbf{b}}^{T+1}]_k|\leq \sqrt{{\alpha[\widecheck{\mathbf{b}}^{T+1}]_k}/{\tilde{n}}}$,
and
$[\widehat{\mathbf{b}}^{T+1}]_k\,\leftarrow\, [\widecheck{\mathbf{b}}^{T+1}]_k$
otherwise.
We then show the following result.
\begin{theorem}\label{thm:transfer}
Let $\widecheck{\mathbf{b}}^{T+1}$ be the hashed estimate of any new cluster $\mathcal{C}^{T+1}$ with $\tilde{n}$ datapoints such that $n\geq \tilde{n}=\Omega(2^b\max\{T,\ln(\tilde{n})\})$.
Let the threshold parameter be $\alpha =\Theta(\ln( \tilde{n}))$. Then,
the median-based SHIFT method has error bounded by
\begin{equation*}\label{eqn:median-final-transfer}
\setlength{\abovedisplayskip}{3pt}\setlength{\belowdisplayskip}{2pt}
\mathbb{E}\left[\|\widehat{\mathbf{p}}^{T+1}-\mathbf{p}^{T+1}\|_2^2\right]=\tilde{O}\left(\frac{\max\{2^b,s\}}{2^b\tilde{n}}+\frac{d}{2^bTn}\right).
\end{equation*}
\end{theorem}
Similarly, one can see that the adaptation to new clusters with the median-based SHIFT method achieves a factor of $\min\{Tn/\tilde{n},d/\max\{2^b,s\}\}$ improvement in sample-efficiency compared to estimating the distribution of the new cluster without knowledge transfer.
\subsubsection{Highlights of Theoretical Analysis}
In this section, we introduce the key ideas behind the proof of Theorems \ref{thm:median-collab-final} and \ref{thm:transfer}. Our analysis is novel compared to previous analyses for methods with homogeneous datapoints.
The final individual estimation errors relate to the error of estimating the central hashed distribution $\mathbf{b}^\star$.
However, we can only expect the central estimate to be highly accurate at entries with few individual misalignments.
To quantify the influence of heterogeneity, for any $0< \eta \leq 1$, we define the set
of \emph{$\eta$-well-aligned entries} as
\begin{equation*}
\mathcal{I}_\eta:=\{k\in[d]:|\mathcal{B}_k|< \eta T\},\quad \text{where}\quad \mathcal{B}_k\triangleq \{t\in[T]:b_k^t\neq b_k^\star,\,i.e.,\,p_k^t\neq p_k^\star\}
\end{equation*}
is the set of clusters whose distribution differs from $\mathbf{p}^\star$ in the $k$-th entry.
We aim to estimate the $\eta$-well-aligned entries accurately by using robust statistical methods.
Further, we argue that there are few poorly-aligned entries,
and they affect the final per-cluster error only mildly.
By the pigeonhole principle, the number of entries that are not $\eta$-well-aligned is upper bounded by $|\mathcal{I}_\eta^c|\triangleq |[d]\backslash\mathcal{I}_\eta|\leq \frac{s T}{\eta T}= \frac{s}{\eta}.$
Therefore, given an estimator $\widecheck{\mathbf{b}}^\star$ that is accurate for the $\eta$-well-aligned entries,
the entries of $\mathbf{b}^\star$ can be estimated accurately except for at most $s/\eta$ entries.
The following technical lemma bounds the error for each entry $k\in\mathcal{I}_\eta$.
\begin{lemma}\label{lem:median-entry}
Suppose $\widecheck{\mathbf{b}}^\star =\mathrm{median}\big(\{\widecheck{\mathbf{b}}^t\}_{t\in[T]}\big)$. Then for any $0< \eta \leq 1/5$ and $k\in\mathcal{I}_\eta$, it holds that
\begin{equation*}
\setlength{\abovedisplayskip}{4pt}
\mathbb{E}[([\widecheck{\mathbf{b}}^\star]_k-[{\mathbf{b}}^\star]_k)^2]=\tilde{O}\left( \frac{|\mathcal{B}_k|^2b_k^\star(1-b_k^\star)}{T^2n}+\frac{b_k^\star(1-b_k^\star)}{Tn}+\frac{1}{n^2}\right).
\setlength{\belowdisplayskip}{1pt}
\end{equation*}
\end{lemma}
Lemma \ref{lem:median-entry} provides an upper bound, in terms of the frequency ${|\mathcal{B}_k|}/{T}$ of misalignment (which is smaller than $\eta$), and a variance term $b_k^\star(1-b_k^\star)$.
This result cannot be obtained by directly applying the standard Chernoff or Hoeffding bounds to random variables distributed in $[0,1]$ as in previous works \cite{Chen2021PointwiseBF} for two reasons: 1) the datapoints are heterogeneous, 2) the variance $b_k^\star(1-b_k^\star)$ here can be small, compared to what it can be for general random variables in $[0,1]$, implying more concentration than what follows from Hoeffding's inequality.
To address these issues, we analyze the concentration of the empirical $(1/2\pm{|\mathcal{B}_k|}/{T})$-quantiles to mitigate the influence of heterogeneity, and we also use Bernstein's inequality, which is variance-dependent \cite{Uspensky1937IntroductionTM}, to obtain bounds relying on both the sample size $Tn$ and the variance $b_k^\star(1-b_k^\star)$.
Also, the constant $1/5$, upper bounding the heterogeneity, is not essential and is chosen for clarity.
It can be replaced with any number less than one half, in which case estimating the central probability distribution remains possible, as the information conveyed by homogeneous datapoints dominates.
Lemma \ref{lem:median-entry} reveals that well-aligned entries of the central distribution are accurately estimated. Thus one can use the central estimate for the entries where the central distribution $\mathbf{p}^\star$ aligns with the target distribution $\mathbf{p}^t$.
The remaining entries, namely those that are neither well-aligned nor satisfy $p_k^\star=p_k^t$, can be estimated by the local estimator.
We argue that a properly chosen threshold parameter $\alpha$ selects, with high probability, exactly the entries that should be estimated individually, leading to Theorems \ref{thm:median-collab-final} and \ref{thm:transfer}.
While estimating $\mathbf{p}^\star$ is not our main goal, one can readily obtain from Lemma \ref{lem:median-entry} the following bound for estimating $\mathbf{p}^\star$ by summing up the errors for all entries $k\in[d]=\mathcal{I}_{\eta}$ with $\eta =\max_{k\in[d]}|\mathcal{B}_k|/T$.
Corollary \ref{cor:median-uniform-hetero} reveals that the central distribution can be accurately estimated if the mismatch of distributions happens uniformly across all entries, i.e., each entry differs in $O(sT/d)$ clusters.
\begin{corollary}\label{cor:median-uniform-hetero}
Let $\widehat{\mathbf{p}}^\star=\mathrm{Proj}_{[0,1]}(\frac{2^b \widecheck{\mathbf{b}}^\star-1}{2^b-1})$ be obtained by the debiasing operation from Algorithm \ref{alg:meta-method}.
Suppose $|\mathcal{B}_k|=O(sT/d)$ for any $k\in [d]$, with $\eta =\max_{k\in[d]}|\mathcal{B}_k|/T$.
Then the median-based SHIFT method enjoys
\begin{equation*}
\mathbb{E}[\|\widehat{\mathbf{p}}^\star-\mathbf{p}^\star\|_2^2]=\tilde{O}\left(\frac{s^2}{d2^bn}+\frac{d}{2^bTn}+\frac{d}{n^2}\right).
\end{equation*}
\end{corollary}
\section{Lower Bounds}\label{sec:lower-bounds}
To complement our upper bounds, we now provide
minimax lower bounds for estimating distributions under heterogeneity. Since our setting contains $T$ heterogeneous clusters of datapoints,
our minimax error metric is slightly different from the one studied in \cite{Han2018DistributedSE,Barnes2019LowerBF,Han2021GeometricLB,Acharya2020InferenceUI}. Using the $\ell_2$ error as the loss, the lower bound metric is defined as
\begin{equation}\label{eqn:vhoeqwfq}
\inf_{\substack{(W^{t^\prime,[n]})_{t^\prime \in[T]}\\\widehat{\mathbf{p}}^{t} }}\sup_{\substack{\mathbf{p}^\star \in\mathcal{P}_d\\ \{\mathbf{p}^{t^\prime}:t^\prime\in[T]\}\subseteq\mathbb{B}_s(\mathbf{p}^\star)}}
\mathbb{E}\left[\left\|\widehat{\mathbf{p}}^t-\mathbf{p}^t\right\|_2^2\right],
\end{equation}
where the supremum is taken over all possible central distributions $\mathbf{p}^\star\in \mathcal{P}_d$ and individual distributions $\{\mathbf{p}^t:t\in[T]\}$ in $\mathbb{B}_s(\mathbf{p}^\star)\triangleq \{\mathbf{p}\in\mathcal{P}_d:\|\mathbf{p}-\mathbf{p}^\star\|_0\leq s\}$, and the infimum is taken over all estimation methods $\widehat{\mathbf{p}}^t$ that use all heterogeneous messages $\{Y^{t^\prime,j}\triangleq W^{t^\prime,j}(X^{t^\prime,j}):j\in[n], t^\prime\in[T]\}$ encoded (possibly interactively) by any encoding functions
$\{W^{t^\prime,j}:j\in[n], t^\prime\in [T] \}$ with output in $[2^b]$, e.g., the random hashing maps. The measure \eqref{eqn:vhoeqwfq} characterizes the best possible worst-case performance of estimating distributions under our model of heterogeneity.
Since the supremum is taken over all distributions $\mathbf{p}^\star,\mathbf{p}^1,\dots,\mathbf{p}^T$ in $\mathcal{P}_d$ such that $\|\mathbf{p}^t-\mathbf{p}^\star\|_0\leq s$ for all $t\in[T]$, we consider two representative cases therein:
First, in the \emph{homogeneous} case where $\mathbf{p}^1=\cdots=\mathbf{p}^T=\mathbf{p}^\star \in\mathcal{P}_d$, the setting reduces to the single-cluster problem but with $nT$ datapoints, and with the goal of estimating $\mathbf{p}^\star$, leading to the lower bound $\Omega(d/(2^bTn))$.
Second, in the \emph{$s/2$-sparse} case where $\|\mathbf{p}^\star\|_0\leq s/2$ and $\|\mathbf{p}^t\|_0\leq s/2$ for all $t\in[T]$, we have $\{\mathbf{p}^t:t\in[T]\}\subseteq\mathbb{B}_s(\mathbf{p}^\star)$. By constructing independent priors for $\{\mathbf{p}^t:t\in[T]\}$ and $\mathbf{p}^\star$, one can show that only datapoints generated by $\mathbf{p}^t$ itself are informative for estimating $\mathbf{p}^t$. In this case, we show the lower bound $\Omega(\max\{2^b,s\}/(2^bn))$. Combining the two cases, we find the following lower bound. The formal argument is provided in Appendix \ref{app:lower-bounds}.
\begin{theorem}\label{thm:collab-lower-bound}
For any---possibly interactive---estimation method, and for any $t\in[T]$, we have
\begin{equation}\label{eqn:collab-lower-bound}
\setlength{\abovedisplayskip}{3pt}\setlength{\belowdisplayskip}{0.5pt}
\inf_{\substack{(W^{t^\prime,[n]})_{t^\prime \in[T]}\\ \widehat{\mathbf{p}}^{t}\, }}
\sup_{\substack{\mathbf{p}^\star \in\mathcal{P}_d\\ \{\mathbf{p}^{t^\prime}:t^\prime\in[T]\}\subseteq\mathbb{B}_s(\mathbf{p}^\star)}}
\mathbb{E}[\|\widehat{\mathbf{p}}^t-\mathbf{p}^t\|_2^2]=\Omega\left(\frac{\max\{2^b,s\}}{2^bn}+\frac{d}{2^bTn}\right).
\end{equation}
\end{theorem}
By a similar argument but with an additional $(T+1)$-st cluster of $\tilde{n}$ users, we obtain a lower bound for adapting to a new cluster.
\begin{theorem}\label{thm:transfer-lower-bound}
For any---possibly interactive---estimation method, and a new cluster $\mathcal{C}^{T+1}$, we have
\begin{equation}\label{eqn:transfer-lower-bound}
\setlength{\abovedisplayskip}{3pt}\setlength{\belowdisplayskip}{1pt}
\inf_{\substack{(W^{t^\prime,[n]})_{t^\prime \in[T]}\\W^{T+1,[\tilde{n}]},\widehat{\mathbf{p}}^{T+1} }} \sup_{\substack{\mathbf{p}^\star \in\mathcal{P}_d\\ \{\mathbf{p}^{t^\prime}:t^\prime\in[T+1]\}\subseteq\mathbb{B}_s(\mathbf{p}^\star)}}\mathbb{E}[\|\widehat{\mathbf{p}}^{T+1}-\mathbf{p}^{T+1}\|_2^2]=\Omega\left(\frac{\max\{2^b,s\}}{2^b\tilde{n}}+\frac{d}{2^bTn}\right).
\end{equation}
\end{theorem}
Theorems \ref{thm:collab-lower-bound} and \ref{thm:transfer-lower-bound}, combined with the upper bounds in Section \ref{sec:algorithm}, imply that our method is minimax optimal up to logarithmic factors.
We provide similar lower bounds for the $\ell_1$ error in Appendix \ref{app:lower-bounds}.
\section{Experiments}\label{sec:experi}
We test SHIFT on synthetic data as well as on the Shakespeare dataset \cite{caldas2018leaf}.
As a baseline method, we use the estimator in \cite{Han2018DistributedSE} that is minimax optimal under homogeneity.
We apply the baseline method both locally and globally.
In the local case, the estimator $\widehat{\mathbf{p}}^t$ for each cluster is computed without datapoints from other clusters.
In the global case, we pool data from all clusters,
and compute estimators $\widehat{\mathbf{p}} = \widehat{\mathbf{p}}^1 = \cdots = \widehat{\mathbf{p}}^T$.
The performance measure for the estimates $\widehat{\mathbf{p}}^t$, $t \in [T]$, is taken as $T^{-1}\sum_{t = 1}^T \Vert \mathbf{p}^t - \widehat{\mathbf{p}}^t \Vert_2^2.$
\subsection{Synthetic Data}\label{sec:synthetic}
We set the central distribution to be the uniform distribution $\mathbf{p}^\star = (1/d, \dots, {1}/{d})$.
In Appendix \ref{app:experiments}, we also experiment on the truncated geometric distribution, and compare our method with the localization-refinement method \cite{Chen2021PointwiseBF}.
Among the $d$ entries of $\mathbf{p}^\star$, we draw $s$ entries uniformly at random
and assign them new values drawn uniformly at random from $[0,1]$, re-normalizing the perturbed entries to preserve their sum.
We repeat this procedure $T$ times to obtain sparsely perturbed distributions $\mathbf{p}^1, \dots, \mathbf{p}^T \in \mathcal{P}_d$.
Then, $n$ i.i.d. datapoints $X^{t, 1}, \dots, X^{t, n} \sim \mathrm{Cat}(\mathbf{p}^t)$ are generated for each cluster $t \in [T]$.
We set the dimension to $d = 300$ and run the simulation by varying $n, T, s$.
As we see from \eqref{eqn:ghoqenfq}, the error of our method depends on $s$ only when $2^b < s$.
For this reason, we let $b = 2$ in our experiments.
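The following sketch reproduces this data-generating process; the helper name \texttt{perturb}, the seed, and the convention that the $s$ redrawn entries are re-normalized among themselves are illustrative choices consistent with the description above.
\begin{verbatim}
import numpy as np

def perturb(p_star, s, rng):
    # Redraw s randomly chosen entries uniformly on [0, 1] and rescale them
    # so that the perturbed entries keep their original total mass.
    p = p_star.copy()
    idx = rng.choice(len(p), size=s, replace=False)
    new = rng.uniform(size=s)
    p[idx] = new * p_star[idx].sum() / new.sum()
    return p

rng = np.random.default_rng(0)
d, T, s, n = 300, 30, 5, 100_000
p_star = np.full(d, 1.0 / d)
p_clusters = [perturb(p_star, s, rng) for _ in range(T)]
# n i.i.d. one-hot datapoints per cluster, stored as category indices.
data = [rng.choice(d, size=n, p=p_t) for p_t in p_clusters]
\end{verbatim}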
\begin{figure}[t]
\centering
\includegraphics{images/synthetic-uniform.pdf}
\caption{Average $\ell_2$ estimation error in synthetic experiment. (Left): Fixing $s = 5, \,T = 30$ and varying $n$.
(Middle): Fixing $s = 5,\,n = 100,000$ and varying $T$.
(Right): Fixing $T = 30, \,n = 100,000$ and varying $s$.
The standard error bars are obtained from 10 independent runs.}
\label{fig:synthetic}
\end{figure}
We run SHIFT with the entry-wise median and entry-wise trimmed mean as the robust estimate.
We set the threshold parameter $\alpha = \ln(n)$ and the trimming proportion $\omega = 0.1$.
In Appendix \ref{app:experiments}, we provide results for other choices of the hyperparameters $\alpha, \omega$ and discuss a heuristic for choosing $\alpha$.
Figure~\ref{fig:synthetic} illustrates that our method outperforms the baseline method for most choices of $n, T, s$.
Specifically, as Theorem \ref{thm:median-collab-final} reveals, the $\ell_2$ error of our method decreases as the number of clusters $T$ increases.
On the other hand, when the baseline methods are applied globally, without considering heterogeneity,
they show a bias that does not disappear as the sample size $n$ or the number of clusters $T$ increases.
This shows that the fine-tuning step in SHIFT is crucial for the estimation of heterogeneous distributions.
Finally, the right panel of Figure~\ref{fig:synthetic} shows that our method is effective only when $s$ is small compared to the dimension $d$, which highlights the crucial role of sparsity.
When $s$ is close to $d$, the distributions $\mathbf{p}^1, \dots, \mathbf{p}^T$ can be considerably different without any meaningful central structure, making collaboration less useful than local estimation.
\subsection{Shakespeare Dataset}
The Shakespeare dataset was proposed as a benchmark for federated learning in \cite{caldas2018leaf}.
The dataset consists of dialogues of 1,129 speaking roles in Shakespeare's 35 different plays.
In our experiment, we study the distribution of $k$-grams ($k$-tuples of consecutive letters from the 26-letter English alphabet, see Chapter 3 of \cite{leskovec2020mining}) appearing in the dialogues.
We consider each play as a cluster $\mathcal{C}^t$ and estimate the distribution $\mathbf{p}^t \in \mathcal{P}_{d}$, $d=26^k$ of $k$-grams.
Since the ground-truth distribution $\mathbf{p}^t$ is unknown, we regard the empirical frequency as $\mathbf{p}^t$.
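As an illustration of this preprocessing, a sketch of the empirical $k$-gram frequencies is shown below; treating the concatenated dialogue of a play as a single letter stream and discarding non-alphabetic characters are simplifying assumptions made only for this sketch.
\begin{verbatim}
from collections import Counter
from itertools import product
import numpy as np

def kgram_distribution(text, k=2):
    # Empirical distribution over the d = 26^k possible k-grams of a text.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    letters = [ch for ch in text.lower() if ch in alphabet]
    counts = Counter("".join(letters[i:i + k])
                     for i in range(len(letters) - k + 1))
    grams = ["".join(t) for t in product(alphabet, repeat=k)]
    freq = np.array([counts[g] for g in grams], dtype=float)
    return freq / freq.sum()
\end{verbatim}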
\begin{table}[h]
\centering
\begin{tabular}{lcccc}
\toprule
\multicolumn{1}{c}{$k = 2$} & $b = 2$ & $b = 4$ & $b = 6$ & $b = 8$ \\
\hline
Minimax (local) & $640 \pm 6.0$ & $142 \pm 1.2$ & $40 \pm 0.40$ & $14 \pm 0.13$ \\
Minimax (global) & $33 \pm 1.8$ & $17 \pm 0.37$ & $14 \pm 0.081$ & $13 \pm 0.037$ \\
SHIFT (median) & $47 \pm 2.4$ & $21 \pm 0.66$ & $14 \pm 0.17$ & $11 \pm 0.10$ \\
SHIFT (trimmed mean) & $36 \pm 2.2$ & $19 \pm 0.51$ & $13 \pm 0.24$ & $10 \pm 0.062$ \\
\hline
\multicolumn{1}{c}{$k = 3$} & $b = 2$ & $b = 4$ & $b = 6$ & $b = 8$ \\
\hline
Minimax (local) & $15000 \pm 21$ & $3000 \pm 5.9$ & $720 \pm 2.1$ & $180 \pm 0.39$ \\
Minimax (global) & $4400 \pm 5.7$ & $100 \pm 1.4$ & $ 38 \pm 0.35$ & $23 \pm 0.090$ \\
SHIFT (median) & $7300 \pm 9.6$ & $180 \pm 2.1$ & $ 53 \pm 1.0$ & $20 \pm 0.18$ \\
SHIFT (trimmed mean) & $5100 \pm 6.3$ & $140 \pm 2.3$ & $ 43 \pm 0.66$ & $ 18 \pm 0.18$ \\
\bottomrule
\end{tabular}
\caption{Average $\ell_2$ error for estimating distributions of $k$-grams in the Shakespeare dataset. Numbers are scaled by $10^{-5}$.}
\label{tbl:res}
\end{table}
To verify the heterogeneity, we run the chi-squared goodness-of-fit test for each pair of distributions from distinct clusters $\mathbf{p}^u$ and $\mathbf{p}^v$.
The resulting p-values are essentially zero up to machine precision, which suggests that the distributions of $k$-grams are strongly heterogeneous.
We also perform entry-wise tests comparing $[\mathbf{p}^u]_i$ and $[\mathbf{p}^v]_i$ for all $u \neq v \in [T], i \in [d]$.
In total, 25.8\% of the tests are rejected at the 5\% level of significance.
This is again consistent with heterogeneity.
We draw $n = 20,000$ datapoints with replacement from each cluster and test SHIFT with communication budgets $b \in \{2, 4, 6, 8\}$.
We set the fine-tuning threshold $\alpha = \ln(n)$ and the trimming proportion $\omega = 0.1$, which we choose following the heuristic discussed in Appendix \ref{app:experiments}.
We repeat the experiment ten times by randomly drawing different datapoints, and report the average $\ell_2$ error of estimation in Table \ref{tbl:res}.
The standard deviations are small even over ten repetitions.
SHIFT shows competitive performance on the empirical dataset, even though we do not rigorously know if the sparse heterogeneity model \eqref{eqn:sparse-hetero} applies.
\section{Conclusion and Future Directions}
We formulate the problem of learning distributions under sparse heterogeneity and communication constraints. We propose the SHIFT method, which first learns a central distribution, and then fine-tunes the estimate to adapt to individual distributions.
We provide both theoretical and experimental results to show its sample-efficiency improvement compared to classical methods that target only homogeneous data.
Many interesting directions remain to be explored, including investigating if there is a point-wise optimal method with rate depending on $\{\mathbf{p}^t:t\in[T]\}$ and $\mathbf{p}^\star$; and designing methods for other information
constraints, such as local differential privacy constraints.
\section*{Acknowledgements}
During this work, Xinmeng Huang was supported in part by the NSF TRIPODS 1934960, NSF DMS 2046874 (CAREER), NSF CAREER award CIF-1943064; Donghwan Lee was supported in part by ARO W911NF-20-1-0080, DCIST, Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) \#FA9550-20-1-0111 award.
\section{Introduction}
\label{sec1}
Sequential, multiple assignment, randomized trial (SMART) designs have gained considerable attention in the field of precision medicine by providing an empirically rigorous experimental approach for comparing sequences of treatments tailored to the individual patient, i.e., dynamic treatment regime (DTR) (\citealp{LAVORI2000605}; \citealp{murphy2005experimental}; \citealp{lei2012smart}). A DTR is a treatment algorithm implemented through a sequence of decision rules which dynamically adjusts treatments and dosages to a patient's unique changing need and circumstances (\citealp{murphy2001marginal}; \citealp{murphy2003optimal}; \citealp{robins2004optimal};
\citealp{nahum2012experimental}; \citealp{chakraborty2013statistical}; \citealp{chakraborty2014dynamic}; \citealp{laber2014dynamic}). SMART studies are motivated by scientific questions concerning the construction of an effective DTR. The sequential randomization in a SMART gives rise to several DTRs which are embedded in the SMART by design. Many SMARTs are designed to compare these embedded DTRs and identify those showing greatest potential for improving a primary clinical outcome. The construction of evidence-based DTRs promises an alternative to ad hoc one-size-fits-all decisions pervasive in patient care.
The advent of SMART designs poses interesting statistical challenges in the planning phase of the trials. In particular, performing power analyses to determine an appropriate sample size of individuals to enroll becomes mathematically difficult due to the complex correlation structure between the DTRs embedded in the design. Previous work in planning the sample size for SMARTs includes sizing pilot SMARTs (small scale versions of a SMART) so that each sequence of treatments has a pre-specified number of individuals with some probability by the end of the trial (\citealp{almirall2012designing}; \citealp{hwanwoo2016}). The central questions motivating this work are feasibility of the investigators to carry out the trial and acceptability of the treatments by patients. These methods do not provide a means to size SMARTs for comparing DTRs in terms of a primary clinical outcome.
Alternatively, \citeauthor{CrivelloEvaluation2007} (2007(a)) proposed a new objective for SMART sample size planning. The question they address is how many individuals need to be enrolled so that the best DTR has the largest sample estimate with a given probability (\citealp{CrivelloStatMethod2007}(b)). Such an approach based on estimation alone fails to account for the uncertainty in the mean DTR sample estimates. In particular, some DTRs may be statistically indistinguishable from the true best for the given data in which case they should not necessarily be excluded as suboptimal. Furthermore, the true best DTR may only be superior in efficacy to other DTRs by an amount which is clinically insignificant at the cost of providing treatments that are more burdensome and costly compared to other DTRs. \cite{CrivelloEvaluation2007} also discussed sizing SMARTs to attain a specified power for testing hypotheses which compare only two treatments or two embedded DTRs as opposed to comparing all embedded DTRs. The work of \cite{CrivelloEvaluation2007} focused mainly on a particular common two-stage SMART design whereas our method is applicable to arbitrary SMART designs.
One of the main goals motivating SMARTs is to identify the optimal DTRs. It follows that investigators are interested in sizing SMARTs based on the ability to exclude DTRs which are inferior to the optimal DTR by a clinically meaningful amount. This cannot be done with existing methods. In this paper, we fill the current methodological gap by developing a rigorous power analysis framework that leverages the multiple comparisons with the best (MCB) methodology in order to construct a set of optimal DTRs which excludes DTRs inferior to the best by a specified amount (\citealp{hsu1981simultaneous}; \citealp{hsu1984}; \citealp{hsu1996multiple}). Our method is applicable to an arbitrary SMART design. There are two main justifications for using multiple comparisons with the best: 1) MCB involves fewer comparisons than methods which involve all pairwise comparisons, and thus yields greater power for the same number of enrolled individuals, all else being equal (\citealp{ertefaie2015}); 2) MCB permits construction of a set of more than one optimal DTR from which the physician and patient may choose.
In Section 2, we give a brief overview of SMARTs and relevant background on estimation and MCB. In Section 3, we present our power analysis framework. In Section 4, we look at properties of our method. In Sections 5 and 6, we demonstrate the validity of our method through extensive simulation studies. In Section 7, we will apply our method to retrospectively compute the power in the Extending Treatment Effectiveness of Naltrexone SMART study. In the accompanying appendix, we provide additional details about estimation and our simulation study. In the future, an R package called ``smartsizer'' will be made freely available to assist investigators with sizing SMARTs.
\section{Preliminaries}
\subsection{Sequential Multiple Assignment Randomized Trials}
In a SMART design, individuals proceed through multiple stages of randomization such that some or all individuals may be randomized more than once. Additionally, treatment assignment is often tailored to the individual's ongoing response status (\citealp{nahum2012experimental}). For example, in the Extending Treatment Effectiveness of Naltrexone (EXTEND) SMART study (see Figure \ref{fig:EXTEND-figure} for the study design and \citealp{nahum2017smart} for more details about this study), individuals were initially randomized to two different criteria of non-response: lenient or stringent. Specifically, all individuals received the same fixed dosage of naltrexone (NTX) -- a medication that blocks some of the pleasurable effects resulting from alcohol consumption. After the first two weeks, individuals were evaluated weekly to assess response status.
Individuals assigned to the lenient criterion were classified as non-responders as soon as they had five or more heavy drinking days during the first eight weeks of the study, whereas those assigned to the stringent criterion were classified as non-responders as soon as they had two or more heavy drinking days during the first eight weeks. As soon as participants were classified as non-responders, they transitioned to the second stage where they were randomized to two subsequent rescue tactics: either switch to combined behavioral intervention (CBI) or add CBI to NTX (NTX + CBI). At week 8, individuals who did not meet their assigned non-response criterion were classified as responders and re-randomized to two subsequent maintenance interventions: either add telephone disease management (TDM) to the NTX (NTX + TDM) or continue NTX alone. Note that the second stage treatment options in the SMART are tailored to the individual's response status. That is, individuals are randomized to different subsequent interventions depending on their early response status. This leads to a total of eight DTRs that are embedded in this SMART by design. For example, one of these DTRs recommends to start the treatment with NTX and monitor drinking behaviors weekly using the lenient criterion (i.e., 5 or more heavy drinking days) to classify the individual as a non-responder. As soon as the individual is classified as a non-responder, add CBI (NTX + CBI); if at week eight the individual is classified as a responder, add TDM (NTX + TDM). Many SMARTs are motivated by scientific questions pertaining to the comparison of the multiple DTRs that are embedded in the SMART by design.
For example, determining an optimal embedded DTR (EDTR) in EXTEND may guide clinicians in evaluating a patient's initial response to NTX and in selecting the best subsequent treatment.
In the next section, we discuss how the multiple comparison with the best (MCB) procedure (\cite{hsu1981simultaneous}, \cite{hsu1984}, \cite{hsu1996multiple}) can be used to address scientific questions concerning the optimal EDTR.
\subsection{Determining a Set of Best DTRs using Multiple Comparison with the Best}
\label{MCB}
The MCB procedure can be used to identify a set of optimal EDTRs which cannot be statistically distinguished from the true best EDTR.
In order to carry out MCB, we must be able to estimate the mean outcome of each EDTR.
Let the true mean outcome vector of EDTRs be denoted by $\boldsymbol\theta=(\theta_1,...,\theta_N)^t$ where $N$ is the total number of DTRs embedded in the SMART design. Assume $\hat{\boldsymbol\theta}$ is a consistent estimator for $\boldsymbol\theta$ such that \begin{equation} \sqrt{n}(\hat{\boldsymbol\theta}-\boldsymbol\theta)\overset{d}{\rightarrow} N(\boldsymbol0,\boldsymbol\Sigma), \label{eq:1} \end{equation} where $\boldsymbol\Sigma$ is the asymptotic covariance matrix and $n$ is the total number of individuals enrolled in the SMART. Proposition 1 in Appendix A of the supplementary material presents two estimators for $\boldsymbol\theta$ which satisfy \eqref{eq:1}.
The MCB procedure entails comparing the mean outcome of each EDTR to the mean outcome of the best EDTR standardized by the standard deviation of their difference. In particular, $\mathrm{EDTR}_i$ is considered statistically indistinguishable from the true best EDTR for the available data if and only if the standardized difference $\dfrac{\hat{\theta}_i-\hat{\theta}_j}{\sigma_{ij}}\geq -c_{i,\alpha}$ for all $j \neq i$ where $\sigma_{ij}=\sqrt{\text{Var}(\hat{\theta}_i-\hat{\theta}_j)}$ and $c_{i,\alpha}>0$ is chosen such that the set of best EDTRs includes the true best EDTR with at least a specified probability $1-\alpha$. Then, the set of best can be written as $\hat{\mathcal{B}}:=\{\mathrm{EDTR}_i:\hat{\theta}_i\geq \underset{j\neq i}{\max}[\hat{\theta}_j-c_{i,\alpha}\sigma_{ij}]\}$ where $c_{i,\alpha}$ depends on $\alpha$ and the covariance matrix $\boldsymbol\Sigma$.
The above $\alpha$ represents the type I error rate for excluding the best EDTR from $\hat{\mathcal{B}}$. To control the type I error rate, it suffices to consider the situation in which the true mean outcomes are all equal.
Then, a sufficient condition for the type I error rate to be at most $\alpha$ is to choose $c_{i,\alpha}$ so that the set of best DTRs includes each EDTR with probability at least $1-\alpha$:
\begin{equation}\mathbb{P}(\mathrm{EDTR}_i \in \hat{\mathcal{B}} \mid \theta_1=\cdots = \theta_N) =1-\alpha \text{ for all } i =1,...,N. \end{equation}
It can be shown it is sufficient for $c_{i,\alpha}$ to satisfy:
\begin{equation}\int \mathbb{P}\left(Z_j \leq z+c_{i,\alpha} \sigma_{ij} , \text{ for all } j =1,...,N\right)\mathrm{d}\phi(z)=1-\alpha, \label{eq:c}\end{equation}
where $\phi(z)$ is the marginal cdf of $Z_i$ and $(Z_1,...,Z_N)^t \sim N\left(\boldsymbol0, \boldsymbol\Sigma \right)$. Observe that $c_{i,\alpha}>0$ is a function of $\boldsymbol\Sigma$ and $\alpha$ (with $\alpha\leq 0.5$), but not of the number of individuals $n$. The integral in \eqref{eq:c} is analytically intractable, but the $c_{i,\alpha}$ may be determined using Monte Carlo methods.
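The following sketch illustrates one such Monte Carlo computation. It assumes $\boldsymbol\Sigma=\mathrm{Var}(\sqrt{n}\hat{\boldsymbol\theta})$, so that $\sigma_{ij}\sqrt{n}=\sqrt{\Sigma_{ii}+\Sigma_{jj}-2\Sigma_{ij}}$ and \eqref{eq:c} amounts to taking $c_{i,\alpha}$ as the $(1-\alpha)$ quantile of $\max_{j\neq i}(Z_j-Z_i)/(\sigma_{ij}\sqrt{n})$; the function name is ours.
\begin{verbatim}
import numpy as np

def mcb_critical_values(Sigma, alpha, m=100_000, seed=0):
    # c[i] is the (1 - alpha) quantile of max_{j != i} (Z_j - Z_i) / s_ij,
    # with Z ~ N(0, Sigma) and s_ij = sqrt(Sigma_ii + Sigma_jj - 2 Sigma_ij).
    rng = np.random.default_rng(seed)
    N = Sigma.shape[0]
    Z = rng.multivariate_normal(np.zeros(N), Sigma, size=m)   # (m, N) draws
    diag = np.diag(Sigma)
    s = np.sqrt(diag[:, None] + diag[None, :] - 2 * Sigma)    # (N, N)
    c = np.empty(N)
    for i in range(N):
        others = [j for j in range(N) if j != i]
        ratios = (Z[:, others] - Z[:, [i]]) / s[i, others]
        c[i] = np.quantile(ratios.max(axis=1), 1 - alpha)
    return c
\end{verbatim}
Given $\hat{\boldsymbol\theta}$ and these critical values, the set of best $\hat{\mathcal{B}}$ is then obtained by checking $\hat{\theta}_i\geq \max_{j\neq i}[\hat{\theta}_j-c_{i,\alpha}\sigma_{ij}]$ for each $i$.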
The MCB procedure has both applied and theoretical advantages which justify its use to compare DTRs that are embedded in a SMART. In particular, in the event that more than one EDTR is included in the set of best, MCB provides clinicians with a set of optimal treatment protocols to choose from in order to individualize a patient's treatment. Identifying a set of DTRs is useful in the situation where the treatment suggested by one DTR may not be feasible for the patient due to lack of insurance coverage, drug interactions, and/or allergies. In addition to these practical advantages, MCB provides a set with fewer DTRs compared to other methods, since fewer comparisons yield increased power to exclude inferior DTRs from the set of best.
Another statistical advantage of MCB is that it incorporates the uncertainty in sample estimates by providing a set of optimal DTRs instead of simply choosing the largest sample estimate.
Although MCB has theoretical and applied merits over alternative approaches, it comes with the price of additional statistical challenges, such as those imposed by the correlation structure between EDTRs expressed through $\boldsymbol\Sigma$ and the numerical inconvenience of the $\max$ operator. The correlation structure between EDTR outcomes arises, in part, due to overlapping interventions in distinct DTRs and because patients' treatment histories may be consistent with more than a single DTR. For example, patients in distinct EDTRs of the EXTEND trial all receive naltrexone. Also, patients who are classified as responders and subsequently randomized in stage 2 to continue NTX will be consistent with two EDTRs: one where non-responders are offered CBI and one where non-responders are offered NTX + CBI. Hence, deriving a convenient univariate test statistic is infeasible and many computations are analytically intractable. In order to overcome these challenges, we derive in the following section a form of the power function to which Monte Carlo simulation can be applied.
\section{Methods}
\label{method}
Let $\Delta_{\mathrm{min}}>0$ be the minimum detectable difference, $\alpha$ be the type I error rate, and $\boldsymbol\Sigma$ be the asymptotic covariance matrix of $\sqrt{n}\hat{\boldsymbol\theta}$ where $n$ is the total number of individuals enrolled in the SMART. Furthermore, let $1-\beta$ denote the desired minimum power to exclude EDTRs with true outcome $\Delta_{\mathrm{min}}$ or more away from that of the true best outcome. Let $\boldsymbol \Delta$ be the vector of effect sizes; indexing the true best EDTR as $N$, we set $\boldsymbol\Delta_i = \theta_N-\theta_i$.
We wish to exclude all $i$ from the set of best for which $\boldsymbol\Delta_i\geq\Delta_{\mathrm{min}}$. We have that
\begin{align}
\mathrm{Power} &=\mathbb{P}_{\boldsymbol\theta,\boldsymbol\Sigma, n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}} \left\{\hat{\theta}_i < \underset{j\neq i}{\max} [\hat{\theta}_j-c_{i,\alpha}\sigma_{ij}]\right\}\right).\label{eq:bound}
\end{align}
However, the $\max$ operator makes \eqref{eq:bound} analytically and computationally complicated, so we will instead bound the RHS of the following inequality:
\begin{equation}\mathbb{P}_{\boldsymbol\Sigma, n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}}\left\{\hat{\theta}_i<\underset{j\neq i}{\max}[\hat{\theta}_j-c_{i,\alpha}\sigma_{ij}]\right\} \right)\geq\mathbb{P}_{\boldsymbol\Sigma, n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}} \left\{\hat{\theta}_i < \hat{\theta}_N-c_{i,\alpha}\sigma_{iN} \right\}\right)\label{eq:4}.
\end{equation}
This inequality follows from noting that for all $i$,
\begin{equation}
\left\{\hat{\theta}_i<\hat{\theta}_N-c_{i,\alpha}\sigma_{iN}\right\}\subset \left\{\hat{\theta}_i <\underset{j\neq i}{\max}[\hat{\theta}_j-c_{i,\alpha}\sigma_{ij}]\right\}. \end{equation}
Theoretically, the bound obtained using \eqref{eq:4} may be conservative, but it is often beneficial to be conservative when conducting sample size calculations because of unpredictable circumstances such as loss to follow up, patient dropout, and/or highly skewed responses.
Since the normal distribution is a location-scale family, the power only depends on the vector of effect sizes $\boldsymbol\Delta$ and not on $\boldsymbol\theta$. Henceforth, we will call the RHS of \eqref{eq:4} the power function and write $\mathrm{Power}_{\alpha,n}\left({\bSigma}, \boldsymbol\Delta, \Delta_{\mathrm{min}}\right)$. It follows that \begin{align}
\mathrm{Power}_{\alpha,n}\left({\bSigma}, \boldsymbol\Delta, \Delta_{\mathrm{min}}\right) &= \mathbb{P}_{\boldsymbol\Sigma, n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}} \left\{\hat{\theta}_i < \hat{\theta}_N-c_{i,\alpha}\sigma_{iN} \right\} \right)\nonumber\\
&=\mathbb{P}_{\boldsymbol\Sigma, n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}}\left\{\dfrac{\sqrt{n}(\hat{\theta}_i-(\hat{\theta}_N-\Delta_i))}{\sigma_{iN}\sqrt{n}} < -c_{i,\alpha} +\dfrac{\Delta_i\sqrt{n}}{\sigma_{iN}\sqrt{n}}\right\} \right) \nonumber\\
&=\mathbb{P}_{\boldsymbol\Sigma, n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}}\left\{W_i< -c_{i,\alpha} +\dfrac{\Delta_i \sqrt{n}}{\sigma_{iN}\sqrt{n}}\right\}\right), \label{eq:7}
\end{align}
where $\boldsymbol W = (W_1,...,W_{M})^t \sim N\left(\boldsymbol0,\tilde{\boldsymbol\Sigma}\right)$ and $\displaystyle \tilde{\boldsymbol\Sigma}_{ij}=\text{Cov}\left(\dfrac{\sqrt{n}(\hat{\theta}_i-(\hat{\theta}_N-\Delta_i))}{\sigma_{iN}\sqrt{n}}, \dfrac{\sqrt{n}(\hat{\theta}_j-(\hat{\theta}_N-\Delta_j))}{\sigma_{jN}\sqrt{n}}\right)$, and $M$ is the number of indices $i:\Delta_i\geq\Delta_{\mathrm{min}}$.
Note that $\displaystyle\boldsymbol W, c_{i,\alpha}, \text{ and } \sigma_{iN}\sqrt{n}=\sqrt{\Sigma_{ii}+\Sigma_{NN}-2\Sigma_{iN}}$ do not depend on $n$ since $\boldsymbol\Sigma$ does not depend on $n$. If the standardized effect sizes $\delta_i$ are specified instead of $\Delta_i$, $\Delta_i$ may be computed as $\delta_i \sigma_{iN} \sqrt{n}$.
It follows that the power may be computed by simulating normal random variables and replacing the probability in \eqref{eq:7} with the empirical mean of the corresponding indicator variable over the Monte Carlo draws.
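A sketch of this computation is given below. It assumes the true best EDTR is indexed last, takes $\boldsymbol\Sigma=\mathrm{Var}(\sqrt{n}\hat{\boldsymbol\theta})$, and reuses the hypothetical \texttt{mcb\_critical\_values} helper sketched in Section \ref{MCB}.
\begin{verbatim}
import numpy as np

def smart_power(Sigma, Delta, Delta_min, n, alpha, m=100_000, seed=1):
    # Monte Carlo estimate of the power lower bound (7); EDTR N is the last index.
    rng = np.random.default_rng(seed)
    N = Sigma.shape[0]
    c = mcb_critical_values(Sigma, alpha)
    diag = np.diag(Sigma)
    s_iN = np.sqrt(diag + diag[-1] - 2 * Sigma[:, -1])   # sigma_iN * sqrt(n)
    idx = np.where(Delta[:-1] >= Delta_min)[0]           # EDTRs to be excluded
    G = rng.multivariate_normal(np.zeros(N), Sigma, size=m)
    W = (G[:, idx] - G[:, [-1]]) / s_iN[idx]             # W ~ N(0, Sigma_tilde)
    excluded = (W < -c[idx] + np.sqrt(n) * Delta[idx] / s_iN[idx]).all(axis=1)
    return excluded.mean()
\end{verbatim}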
Recall the main point of this paper is to assist investigators in choosing a sufficient number of individuals to enroll in a SMART. To this end, we will derive a method for inverting $\mathrm{Power}_{\alpha,n}\left({\bSigma}, \boldsymbol\Delta, \Delta_{\mathrm{min}}\right)$ with respect to $n$. In particular, we are interested in finding the minimum $n$ such that $\mathrm{Power}_{\alpha,n}\left({\bSigma}, \boldsymbol\Delta, \Delta_{\mathrm{min}}\right) \geq 1-\beta$.
We proceed by rewriting the power function expression:
\begin{align}
\mathrm{Power}_{\alpha,n}\left({\bSigma}, \boldsymbol\Delta, \Delta_{\mathrm{min}}\right) & = \mathbb{P}_{\boldsymbol\Sigma, n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}} \left\{\hat{\theta}_i < \hat{\theta}_N-c_{i,\alpha}\sigma_{iN} \right\} \right)\nonumber\\
&= \mathbb{P}_{\boldsymbol\Sigma,n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}} \left\{\dfrac{\sqrt{n}(\hat{\theta}_i-\hat{\theta}_N+\Delta_i+c_{i,\alpha}\sigma_{iN})}{\Delta_i}< \sqrt{n}\right\}\right)\nonumber\\
&=\mathbb{P}_{\boldsymbol\Sigma,n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}} \left\{X_i< \sqrt{n}\right\}\right) \nonumber\\
&=\mathbb{P}_{\boldsymbol\Sigma,n}\left(\bigcap_{i:\boldsymbol\Delta_i\geq \Delta_{\mathrm{min}}} \left\{X_i< c^*_{1-\beta}\right\}\right),\label{eq:9}
\end{align}
where $\boldsymbol X = (X_1,...,X_{M})^t\displaystyle \sim N\left(\begin{pmatrix}c_{1,\alpha}\sigma_{1N}\sqrt{n}/\Delta_1\\c_{2,\alpha}\sigma_{2N}\sqrt{n}/\Delta_2\\\vdots\\c_{M,\alpha}\sigma_{MN}\sqrt{n}/\Delta_{M}\end{pmatrix},\tilde{\boldsymbol\Sigma}\right)$, $\tilde{\boldsymbol\Sigma}_{ij}=\mathrm{Cov}\left(\dfrac{\sqrt{n}\left(\hat{\theta}_i-\hat{\theta}_N\right)}{\Delta_i}, \dfrac{\sqrt{n}\left(\hat{\theta}_j-\hat{\theta}_N\right)}{\Delta_j}\right)$, $M$ is the number of indices $i:\Delta_i\geq\Delta_{\mathrm{min}}$, and $c^{*}_{1-\beta}$ is the $1-\beta$ equicoordinate quantile for the probability in \eqref{eq:9}. It follows from \eqref{eq:9} that $n = (c^{*}_{1-\beta})^2 \label{eq:10}$. Here, we write the quantile $c_{1-\beta}^*$ with an asterisk to distinguish it from the quantile $c_{i,\alpha}$ which controls the type I error rate. If the standardized effect sizes $\delta_i$ are specified instead of $\Delta_i$, $\Delta_i$ may be computed as $\delta_i \sigma_{iN} \sqrt{n}$.
The constant $c^{*}_{1-\beta}$ may be computed using Monte Carlo simulation to find the inverse of equation \eqref{eq:9} after first computing the $c_{i,\alpha}$'s. The above procedure works because the $c_{i,\alpha}$'s do not change with $n$, so the distribution of $\boldsymbol X$ is constant as a function of $n$. Our approach for computing $n$ is an extension of the sample size computation method in the appendix of \cite{hsu1996multiple} to the SMART setting when $\boldsymbol\Sigma$ is known. Algorithm 1 and Algorithm 2 use Monte Carlo simulation to calculate the power and sample size, respectively, as a function of the covariance matrix and effect sizes.
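Under the same assumptions as the power sketch above (true best EDTR indexed last, $\boldsymbol\Sigma=\mathrm{Var}(\sqrt{n}\hat{\boldsymbol\theta})$, and the hypothetical \texttt{mcb\_critical\_values} helper), a minimal Monte Carlo version of the sample size inversion in \eqref{eq:9} reads:
\begin{verbatim}
import numpy as np

def smart_sample_size(Sigma, Delta, Delta_min, alpha, beta, m=100_000, seed=2):
    # n is approximately the square of the empirical (1 - beta) equicoordinate
    # quantile of X in (9); the distribution of X does not depend on n.
    rng = np.random.default_rng(seed)
    c = mcb_critical_values(Sigma, alpha)
    diag = np.diag(Sigma)
    s_iN = np.sqrt(diag + diag[-1] - 2 * Sigma[:, -1])
    idx = np.where(Delta[:-1] >= Delta_min)[0]
    # Covariance of sqrt(n)(theta_hat_i - theta_hat_N)/Delta_i over kept indices.
    C = (Sigma[np.ix_(idx, idx)] - Sigma[idx, -1][:, None]
         - Sigma[-1, idx][None, :] + Sigma[-1, -1])
    C = C / np.outer(Delta[idx], Delta[idx])
    mean = c[idx] * s_iN[idx] / Delta[idx]
    X = rng.multivariate_normal(mean, C, size=m)
    c_star = np.quantile(X.max(axis=1), 1 - beta)   # equicoordinate quantile
    return int(np.ceil(c_star ** 2))
\end{verbatim}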
\begin{algorithm}[t]
\caption{SMART Power Computation}\label{powercomp}
\begin{algorithmic}[1]
\STATE Given $\boldsymbol\Sigma =\mathrm{Var}(\sqrt{n}\hat{\boldsymbol\theta})$, compute $c_{i,\alpha}$ for $i=1,\dots,N$.
\STATE Given $\boldsymbol \Delta$ and $\Delta_{\mathrm{min}}$, generate $\boldsymbol W^{(k)} =\left (W_1^{(k)},...,W_{M}^{(k)}\right)^t \sim N\left(\boldsymbol0,\tilde{\boldsymbol\Sigma}\right),$ for $k =1,...,m$, \\where $\displaystyle \tilde{\boldsymbol\Sigma}_{ij}=\text{Cov}\left(\dfrac{\sqrt{n}(\hat{\theta}_i-(\hat{\theta}_N-\Delta_i))}{\sigma_{iN}\sqrt{n}}, \dfrac{\sqrt{n}(\hat{\theta}_j-(\hat{\theta}_N-\Delta_j))}{\sigma_{jN}\sqrt{n}}\right)$, $m$ is the number of Monte Carlo repetitions, and $M$ is the number of indices $i:\Delta_i\geq\Delta_{\mathrm{min}}$.
\STATE Compute the Monte Carlo probability \\$\displaystyle \mathrm{Power}_{MC,n,\alpha}\left(\boldsymbol\Sigma,\boldsymbol\Delta,\Delta_{\mathrm{min}}\right) \approx \mathbb{P}_m \left[\mathbbm{1}\left(\bigcap_{i:\boldsymbol\Delta_i \geq \Delta_{\mathrm{min}}}\left\{W_i < -c_{i,\alpha}+\dfrac{\Delta_i\sqrt{n}}{\sigma_{iN}\sqrt{n}}\right\}\right)\right] \text{ for some } m \in \mathbb{N}$\\$ \text{ where } \mathbb{P}_m \text{ denotes the empirical average.}$
\end{algorithmic}
\end{algorithm}
Algorithm 1 and Algorithm 2 will be implemented in an R package ``smartsizer'' freely available to download for assisting investigators in planning SMARTs.
In the next section we will explore properties of the power function. We will see that the power is highly sensitive to the choice of the covariance matrix $\boldsymbol\Sigma$. This underscores the importance of using prior information when conducting power calculations in order to obtain valid sample size and power predictions. Such information may be obtained from pilot SMARTs and physicians' knowledge of the variability in treatment response.
\section{Power Function Properties}
\label{properties}
Having derived an expression for the power function and provided an algorithm for its computation, we wish to explore some of its properties. In particular, it is important to examine how sensitive the power is to the choice of $\boldsymbol\Sigma$ when $\boldsymbol\Sigma$ is known. We will address the case in which $\boldsymbol\Sigma$ is unknown in Section 5.
For simplicity, we consider the most conservative case in which the effect sizes are all equal: $\Delta_i =\Delta$ for all $i$.
In Figures \ref{fig:3D-Plot}-\ref{fig:contour-exchangeable}, we evaluated the power function over a grid of values for $\boldsymbol\Sigma$ using Equation 3.6 and Algorithm 1. These plots suggest the trend that higher correlations and lower variances tend to yield higher power for both the exchangeable and non-exchangeable $\boldsymbol\Sigma$. The correlation between best and non-best DTRs appears to have a greater influence on power than the correlation between two inferior DTRs as we see in Figure \ref{fig:3D-Plot}.
\begin{algorithm}[t]
\caption{SMART Sample Size Estimation}\label{samplesize}
\begin{algorithmic}[1]
\STATE Given $\boldsymbol\Sigma =\mathrm{Var}(\sqrt{n}\hat{\boldsymbol\theta})$, compute $c_{i,\alpha}$ for $i=1,\dots,N$.
\STATE Given $\boldsymbol\Delta$ and $\Delta_{\mathrm{min}}$, generate $\boldsymbol X^{(k)} = \left(X_1^{(k)},...,X_{M}^{(k)}\right)^t\displaystyle \sim N\left(\begin{pmatrix}c_{1,\alpha}\sigma_{1N}\sqrt{n}/\Delta_1\\c_{2,\alpha}\sigma_{2N}\sqrt{n}/\Delta_2\\\vdots\\c_{M,\alpha}\sigma_{MN}\sqrt{n}/\Delta_{M}\end{pmatrix},\tilde{\boldsymbol\Sigma}\right),$ \\for $k =1,...,m,$ where $\tilde{\boldsymbol\Sigma}_{ij}=\mathrm{Cov}\left(\dfrac{\sqrt{n}\left(\hat{\theta}_i-\hat{\theta}_N\right)}{\Delta_i}, \dfrac{\sqrt{n}\left(\hat{\theta}_j-\hat{\theta}_N\right)}{\Delta_j}\right)$, $m$ is the number of Monte Carlo repetitions, and $M$ is the number of indices $i:\Delta_i\geq\Delta_{\mathrm{min}}$.
\STATE Find the empirical $1-\beta$ equicoordinate quantile $c^{*}_{1-\beta}$ of the simulated vectors $\boldsymbol X^{(k)}$, $k=1,\ldots,m$.
\STATE Then, the sample size is $n \approx \left(c^*_{1-\beta}\right)^2$.
\end{algorithmic}
\end{algorithm}
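A corresponding sketch of Algorithm~\ref{samplesize}, under the same assumptions and illustrative naming as the power sketch above; the empirical $1-\beta$ equicoordinate quantile is the $1-\beta$ quantile of the componentwise maximum of the simulated vectors.
\begin{verbatim}
import numpy as np

def sample_size_mc(Sigma, Delta, Delta_min, c_alpha, power=0.80, m=10_000, seed=0):
    """Monte Carlo sample-size estimate (sketch of Algorithm 2)."""
    rng = np.random.default_rng(seed)
    N = Sigma.shape[0]
    best = N - 1  # assume the best EDTR is indexed last
    idx = [i for i in range(N) if i != best and Delta[i] >= Delta_min]
    sd = np.sqrt([Sigma[i, i] - 2 * Sigma[i, best] + Sigma[best, best] for i in idx])
    C = np.array([[Sigma[i, j] - Sigma[i, best] - Sigma[j, best] + Sigma[best, best]
                   for j in idx] for i in idx])
    d = np.asarray(Delta)[idx]
    mean = np.asarray(c_alpha)[idx] * sd / d    # c_{i,alpha} * (sigma_iN sqrt(n)) / Delta_i
    Sigma_tilde = C / np.outer(d, d)
    X = rng.multivariate_normal(mean, Sigma_tilde, size=m)
    c_star = np.quantile(X.max(axis=1), power)  # empirical 1-beta equicoordinate quantile
    return int(np.ceil(c_star ** 2))            # n is approximately (c*)^2
\end{verbatim}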
It is analytically difficult to prove monotonicity for a general $\boldsymbol\Sigma$ structure. However, as Figure \ref{fig:contour-exchangeable} suggests, it can be proven that the power function is a monotone non-decreasing function of the correlation and a monotone non-increasing function of the variance for an exchangeable covariance matrix. We conjecture that this property holds in general for $n$ sufficiently large. We state the result below.
\begin{theorem}
Let $\boldsymbol\Sigma$ be exchangeable: $\boldsymbol\Sigma = \sigma^2\boldsymbol I_{N} + \rho\sigma^2 \left( \mathbbm{1}_N \mathbbm{1}_{N}'-\boldsymbol I_N \right)$ where $\boldsymbol I_N =\mathrm{diag}(1,...,1)$ and $\mathbbm{1}_N = (1,...,1)'$.
Then, the power is a monotone increasing function of $\rho$ and a monotone decreasing function of $\sigma^2$.
\end{theorem}
In order to simplify computations, we propose choosing a covariance matrix with fewer parameters than the posited covariance matrix. Specifically, suppose $E$ denotes the space of covariance matrices with fewer parameters, and that the posited covariance matrix for the SMART is $\boldsymbol\Sigma_{\mathrm{True}}$. For example, $E$ may be the space of exchangeable covariance matrices. We suggest choosing $\boldsymbol\Sigma_{\mathrm{Exchangeable}} = \underset{\boldsymbol\Sigma\in E}{\arg\min} \left\lVert \boldsymbol\Sigma_{\mathrm{True}}- \boldsymbol\Sigma\right\rVert$, where $\lVert\cdot\rVert$ may be, for example, the Frobenius norm. In our simulation study, we will evaluate the power when choosing the nearest exchangeable matrix in the Frobenius norm.
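For the exchangeable case, this projection has a closed form (derived in Appendix B of the supplementary material): $\sigma^2$ is the average diagonal entry of $\boldsymbol\Sigma_{\mathrm{True}}$ and $\rho\sigma^2$ is the average off-diagonal entry. A minimal sketch, with an illustrative function name:
\begin{verbatim}
import numpy as np

def nearest_exchangeable(Sigma_true):
    """Closest exchangeable matrix to Sigma_true in Frobenius norm (sketch)."""
    S = np.asarray(Sigma_true, dtype=float)
    N = S.shape[0]
    sigma2 = np.mean(np.diag(S))                           # average diagonal entry
    rho_sigma2 = (S.sum() - np.trace(S)) / (N * (N - 1))   # average off-diagonal entry
    return sigma2 * np.eye(N) + rho_sigma2 * (np.ones((N, N)) - np.eye(N))
\end{verbatim}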
We also investigated the power as a function of the effect size, assuming all effect sizes are equal, by computing the power over a grid of $\Delta$ values. Figure \ref{fig:power-by-delta} shows that the power is a monotone increasing function of the uniform effect size $\Delta$, which is intuitive.
\section{Simulation Study}
We have explored how the power changes in terms of a known covariance matrix and effect size. In this section, we present simulation studies for two different SMART designs in which we evaluate the assumption of a known covariance matrix. In practice, the true covariance matrix is estimated consistently by some $\hat{\boldsymbol\Sigma}$. The designs are based on those discussed in \cite{ertefaie2015}. For each SMART, we simulate 1000 datasets across a grid of sample sizes $n$.
We computed the sets of best EDTRs using the estimates $\hat{\boldsymbol\theta}$ and $\hat{\boldsymbol\Sigma}$ obtained from the AIPW estimation method after correctly specifying an appropriate marginal structural model and conditional means (see Section 1.2, Proposition 1 of Appendix A in the supplementary material, and \cite{ertefaie2015} for more details). For each $n$, the empirical power was calculated as the fraction of data sets which excluded all EDTRs with true mean outcome $\Delta_{\mathrm{min}}$ or more away from the best EDTR.
\subsection{SMART Design: Example 1}
In design 1, the stage-2 randomization is tailored based on response to the stage-1 treatment assignment. The tailoring variable is the indicator $V \in \{R,NR\}$, where a response corresponds to the intermediate outcome $O_2$ being positive. Non-responders to the first-stage treatment are subsequently re-randomized to one of two intervention options, while responders continue on the initial treatment assignment. See Figure \ref{fig:SMART-design-1} in Appendix B of the supplementary material for more details. We generated 1000 data sets for each sample size $n = 50,100,150,200,250,300,350,400,450,500$ according to the following scheme (a simulation sketch is given after the list):
\begin{enumerate}
\itemsep0em
\item \begin{enumerate}
\itemsep0em
\item Generate $O_{11}, O_{12} \sim N(0,1)$ (baseline covariates)
\item Generate $A_1 \in \{-1,+1\}$ from a Bernoulli distribution with probability $0.5$ (first-stage treatment option indicator)
\end{enumerate}
\item \begin{enumerate}
\itemsep0em
\item Generate $O_{21} \mid O_{11} \sim N(0.5 O_{11},1)$ and $O_{22} \mid O_{12} \sim N(0.5O_{12}, 1)$ (intermediate outcomes)
\item Generate $A_2^{NR} \in \{-1,+1\}$ from a Bernoulli distribution with probability $0.5$ (second-stage treatment option indicator for non-responders)
\end{enumerate}
\item \small $Y \mid O_{11},O_{12},O_{21},O_{22},A_1,A_2^{NR} \sim N\left(1+O_{11}-O_{12}+O_{22}+O_{21}+A_1(\delta+\frac{O_{11}}{2})+I(O_{21}>0)A_2^{NR}\dfrac{\delta}{2},1\right)$\\where $\delta=0.25$
\end{enumerate}
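The following is a sketch of this data-generating scheme in Python; variable names are illustrative and $\delta=0.25$ by default.
\begin{verbatim}
import numpy as np

def generate_design1(n, delta=0.25, seed=0):
    """Simulate one dataset from the design 1 generative model (sketch)."""
    rng = np.random.default_rng(seed)
    O11, O12 = rng.normal(size=n), rng.normal(size=n)   # baseline covariates
    A1 = rng.choice([-1, 1], size=n)                    # stage-1 treatment option
    O21 = rng.normal(0.5 * O11, 1.0)                    # intermediate outcomes
    O22 = rng.normal(0.5 * O12, 1.0)
    A2NR = rng.choice([-1, 1], size=n)                  # stage-2 option (non-responders)
    mean = (1 + O11 - O12 + O22 + O21
            + A1 * (delta + O11 / 2)
            + (O21 > 0) * A2NR * delta / 2)
    Y = rng.normal(mean, 1.0)
    return dict(O11=O11, O12=O12, A1=A1, O21=O21, O22=O22, A2NR=A2NR, Y=Y)
\end{verbatim}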
The parameter estimates $\hat{\beta}_{\mathrm{AIPW}}$ were computed using augmented-inverse probability weighting (AIPW) (see Appendices A and B in the supplementary material and \citealp{ertefaie2015} for more details).
Then, $\hat{\boldsymbol\theta}_{\mathrm{AIPW}} = \boldsymbol D \hat{\boldsymbol\beta}_{\mathrm{AIPW}}$ where
\begin{equation}
\boldsymbol D = \begin{pmatrix}1&1&1\\1&-1&1\\1&1&-1\\1&-1&-1\end{pmatrix}.
\end{equation} The rows of $\boldsymbol D$ correspond to each of the four EDTRs as listed in the supplementary material.
The true $\boldsymbol\theta$ was $(1.312,0.812,1.188,0.688)$. The minimum effect size $\Delta_{\mathrm{min}}$ was set to $0.5$ and the vector of effect sizes was set to $(0, 0.5, 0.124, 0.624)$ for estimating the anticipated power. We computed the set of best DTRs using the multiple comparison with the best procedure as outlined in Section 2.4 for each data set and sample size. The empirical power was calculated as the fraction of data sets which excluded all EDTRs with true mean outcome $\Delta_{\mathrm{min}}$ or more away from the best EDTR (in this case $\mathrm{EDTR}_2$ and $\mathrm{EDTR}_4$), for each $n$.
The true covariance matrix for this SMART was estimated by averaging the estimates from 1000 simulated datasets, each of 10,000 individuals. $\boldsymbol\Sigma_{\mathrm{Exchangeable}}$ is the closest matrix in Frobenius norm to $\boldsymbol\Sigma_{\mathrm{True}}$ of the form
\begin{equation*}
\begin{pmatrix} \sigma^2 &\rho\sigma^2&\rho\sigma^2&\rho\sigma^2\\\rho\sigma^2&\sigma^2&\rho\sigma^2&\rho\sigma^2\\\rho\sigma^2&\rho\sigma^2&\sigma^2&\rho\sigma^2\\\rho\sigma^2&\rho\sigma^2&\rho\sigma^2&\sigma^2\end{pmatrix}
\end{equation*}
\subsubsection{Example 1: results}
\label{sim1results}
The simulation results are summarized in the plot on the left-hand side of Figure \ref{fig:power-plot}. The plot shows that the sample size calculation is highly sensitive to the choice of $\boldsymbol\Sigma$. Choosing $\boldsymbol\Sigma = \boldsymbol I_4$ greatly underestimates the required sample size, predicting around 100 individuals compared to the true 400--450 individuals needed to achieve $80\%$ power. This implies that the variances and correlations of the EDTR estimators must not be ignored.
Consequently, it is very important to utilize prior information, such as that obtained from pilot SMARTs and physicians' knowledge of treatment response variability, in order to make an informed choice for the covariance matrix and to obtain accurate sample size predictions.
\subsection{SMART Design: Example 2}
In the second SMART design, stage-2 randomization depends on both prior treatment and intermediate outcomes (\citealp{ertefaie2015}). In particular, individuals are randomized at stage 2 if and only if they are non-responders whose stage-1 treatment option corresponded to $A_1 =-1$ (call this condition B); see Appendix B, Figure \ref{fig:SMART-design-2}, for more details.
We generated 1000 data sets for each sample size $n = 100,150,200,250,300,350,400,450,500$ according to the following scheme (a simulation sketch is given after the list):
\begin{enumerate}
\itemsep0em
\item \begin{enumerate}
\itemsep0em
\item Generate $O_{11}, O_{12} \sim N(0,1)$ (baseline covariates)
\item Generate $A_1 \in \{-1,+1\}$ from a Bernoulli distribution with probability $0.5$ (first-stage treatment option indicator)
\end{enumerate}
\item \begin{enumerate}
\itemsep0em
\item Generate $O_{21} \mid O_{11} \sim N(0.5 O_{11},1)$ and $O_{22} \mid O_{12} \sim N(0.5O_{12}, 1)$ (intermediate outcomes)
\item Generate $A_2^{B} \in \{1,2,3,4\}$ from a Multinomial distribution with probability $0.25$ (second-stage treatment option indicator for individuals satisfying condition B)
\end{enumerate}
\item $Y \mid O_{11},O_{12},O_{21},O_{22},A_1,A_2^{B} \sim$ Normal with unit variance and mean equal to \\\small \begin{equation*}\begin{split}1&+O_{11}-O_{12}+O_{21}+O_{22}+I(A_1=-1)(\delta+O_{11})\\&+I(O_{21}>0)I(A_1=-1)[-\delta/4I(A_2=1)+\delta/2I(A_2=2)+0I(A_2=3)+\delta/2O_{21}I(A_2=2)]\end{split}\end{equation*}\\where $\delta=2.00$
\end{enumerate}
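A sketch of this second data-generating scheme, analogous to the one for design 1 ($\delta=2.00$ by default); for simplicity $A_2^{B}$ is drawn for every individual, since it only enters the outcome through the indicator terms.
\begin{verbatim}
import numpy as np

def generate_design2(n, delta=2.0, seed=0):
    """Simulate one dataset from the design 2 generative model (sketch)."""
    rng = np.random.default_rng(seed)
    O11, O12 = rng.normal(size=n), rng.normal(size=n)
    A1 = rng.choice([-1, 1], size=n)
    O21 = rng.normal(0.5 * O11, 1.0)
    O22 = rng.normal(0.5 * O12, 1.0)
    A2B = rng.integers(1, 5, size=n)        # stage-2 option, uniform on {1,2,3,4}
    mean = (1 + O11 - O12 + O21 + O22
            + (A1 == -1) * (delta + O11)
            + (O21 > 0) * (A1 == -1) * (-delta / 4 * (A2B == 1)
                                        + delta / 2 * (A2B == 2)
                                        + delta / 2 * O21 * (A2B == 2)))
    Y = rng.normal(mean, 1.0)
    return dict(O11=O11, O12=O12, A1=A1, O21=O21, O22=O22, A2B=A2B, Y=Y)
\end{verbatim}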
The parameter estimates $\hat{\beta}_{\mathrm{AIPW}}$ were computed using AIPW (see Appendices A and B in the supplementary material and \citealp{ertefaie2015} for more details).
Then, $\hat{\boldsymbol\theta}_{\mathrm{AIPW}} = \boldsymbol D \hat{\boldsymbol\beta}_{\mathrm{AIPW}}$ where
\begin{equation}\boldsymbol D =\begin{pmatrix}1&0&0&0&0\\1&1&0&0&0\\1&1&1&0&0\\1&1&0&1&0\\1&1&0&0&1\end{pmatrix}.
\end{equation}
The true $\boldsymbol\beta$ value is $(1.00,\delta,-\delta/8,\delta/4,0)$ where $\delta = 2.00$ and the true $\boldsymbol\theta$ value is $(1.00,3.00,2.75,3.50,3.00)$.
Note the fourth EDTR is the best.
We let the vector of effect sizes be $\boldsymbol\Delta = (2.50,0.50, 0.75, 0.00, 0.50)$ and the desired detectable effect size $\Delta_{\mathrm{min}}=0.5$. The set of best EDTRs was computed for each data set. For each sample size, the empirical power is the fraction of $1000$ data sets which exclude $\mathrm{EDTR}_1, \mathrm{EDTR}_2,\mathrm{EDTR}_3,$ and $\mathrm{EDTR}_5$.
\subsubsection{Example 2: Results}
Our simulation results are summarized in the plot on the right-hand side of Figure \ref{fig:power-plot}. The true covariance matrix for this SMART was computed by averaging the estimates from 1000 simulated datasets, each of 10,000 individuals. The power plots show that the predicted power is similar to the empirical power both when using the correct $\boldsymbol\Sigma_{\mathrm{True}}$ and when using a $\boldsymbol\Sigma$ of the form \begin{equation*}\begin{pmatrix}\sigma_1^2 &\rho_1\sigma_1\sigma_2&\rho_1\sigma_1\sigma_2&\rho_1\sigma_1\sigma_2&\rho_1\sigma_1\sigma_2\\
\rho_1\sigma_1\sigma_2 &\sigma_2^2&\rho_2\sigma_2^2&\rho_2\sigma_2^2&\rho_2\sigma_2^2\\
\rho_1\sigma_1\sigma_2&\rho_2\sigma_2^2&\sigma_2^2&\rho_2\sigma_2^2&\rho_2\sigma_2^2\\
\rho_1\sigma_1\sigma_2&\rho_2\sigma_2^2&\rho_2\sigma_2^2&\sigma_2^2&\rho_2\sigma_2^2\\
\rho_1\sigma_1\sigma_2&\rho_2\sigma_2^2&\rho_2\sigma_2^2&\rho_2\sigma_2^2&\sigma_2^2
\end{pmatrix}
\end{equation*}
with parameters $\rho_1,\rho_2,\sigma_1^2,\sigma_2^2$ chosen to minimize the Frobenius distance to $\boldsymbol\Sigma_{\mathrm{True}}$ (see Appendix B of the supplementary materials for more details). The anticipated sample size is approximately 500 individuals for $\boldsymbol\Sigma_{\mathrm{True}}$.
\section{Illustration: EXTEND Retrospective Power Computation}
In this section, we apply our power analysis method to examine how much power there was to distinguish between DTRs $\Delta_{\mathrm{min}}$ away from the best in the EXTEND SMART when using MCB. Please see Section 2.1 for more details about the EXTEND SMART and Figure \ref{fig:EXTEND-figure} for a diagram depicting the EXTEND SMART. The outcome of interest was the Penn Alcohol Craving Scale (PACS), with lower PACS scores considered better outcomes. The covariance matrix $\hat{\boldsymbol\Sigma}$ and the vector of EDTR outcomes $\boldsymbol\theta$ were estimated using both IPW and AIPW (see Appendix A and Proposition 1 of the supplementary material for more details about the estimation procedures). The covariance matrices are given below:
\begin{align*}
\hat{\boldsymbol\Sigma}_{\mathrm{IPW}} = \mathrm{Var}(\sqrt{n}\hat{\boldsymbol\theta}_{\mathrm{IPW}}) &= \begin{pmatrix} 145.86&54.88&77.24&-13.74&101.55&10.57&32.93&-58.05\\
54.88&163.02&1.13&109.27&12.66&120.8&-41.09&67.05\\
77.24&1.13&125.54&49.42&33.34&-42.77&81.64&5.53\\
-13.74&109.27&49.42&172.43&-55.55&67.46&7.62&130.63\\
101.55&12.66&33.34&-55.55&138.36&49.47&70.16&-18.73\\
10.57&120.8&-42.77&67.46&49.47&159.71&-3.87&106.37\\
32.93&-41.09&81.64&7.62&70.16&-3.87&118.86&44.84\\
-58.05&67.05&5.53&130.63&-18.73&106.37&44.84&169.94
\end{pmatrix}\\
\hat{\boldsymbol\Sigma}_{\mathrm{AIPW}} = \mathrm{Var}(\sqrt{n}\hat{\boldsymbol\theta}_{\mathrm{AIPW}})&=\begin{pmatrix} 113.35&32.52&82.01&1.19&103.8&22.97&72.46&-8.36\\
32.52&143.74&-13.93&97.28&25.91&137.12&-20.55&90.67\\
82.01&-13.93&123.63&27.69&72.32&-23.63&113.94&17.99\\
1.19&97.28&27.69&123.78&-5.58&90.52&20.92&117.02\\
103.8&25.91&72.32&-5.58&112.1&34.21&80.62&2.73\\
22.97&137.12&-23.63&90.52&34.21&148.36&-12.39&101.76\\
72.46&-20.55&113.94&20.92&80.62&-12.39&122.09&29.08\\
-8.36&90.67&17.99&117.02&2.73&101.76&29.08&128.11
\end{pmatrix}
\end{align*}
The EDTR outcome vectors $\hat{\boldsymbol\theta}_{\mathrm{IPW}}$ and $\hat{\boldsymbol\theta}_{\mathrm{AIPW}}$ are summarized in Table \ref{tab:1}.
The set of best EDTRs when performing estimation using AIPW excluded $\mathrm{EDTR}_6$ and $\mathrm{EDTR}_8$, both of which had effect sizes greater than $2$ relative to the best EDTR, $\mathrm{EDTR}_1$. The set of best when using IPW failed to exclude any of the inferior EDTRs (\citealp{ertefaie2015}). To evaluate the power to exclude $\mathrm{EDTR}_6$ and $\mathrm{EDTR}_8$ in EXTEND when using AIPW, we set the minimum detectable effect size $\Delta_{\mathrm{min}}$ to $2$.
At an $\alpha$ level of $0.05$, the power to exclude all DTRs inferior to the best one by $2$ or more was $27\%$ for IPW and $46\%$ for AIPW. AIPW yields greater power than IPW because AIPW yields smaller standard errors compared with IPW (\citealp{ertefaie2015}). Our method estimates that a total of $717$ individuals would need to be enrolled to achieve a power of $80\%$ using IPW and a total of $482$ individuals would need to be enrolled when using AIPW.
In Figure \ref{fig:EXTEND-power-step}, we computed the power over a grid of $\Delta$ values to see how the power changes as a function of effect size.
In Figure \ref{fig:EXTEND-power-equal}, we show how the power changes as a function of a uniform effect size. Specifically, we assume $\mathrm{EDTR}_1$ is the best and set the effect sizes of $\mathrm{EDTR}_2,..., \mathrm{EDTR}_8$ to be equal. We then vary this uniform effect size. In this case, we ignore the actual effect sizes of the true EDTR estimates $\hat{\boldsymbol\theta}$. In both Figures \ref{fig:EXTEND-power-step} and \ref{fig:EXTEND-power-equal}, we see the trend that AIPW yields greater power when compared with IPW.
\section{Discussion}
One of the main goals of SMART designs is determination of an optimal dynamic treatment regimen. When planning SMART designs, it is hence crucial to enroll a sufficient number of individuals in order to be able to detect the best DTR and to be able to exclude DTRs inferior to the best one by a clinically significant quantity.
In this paper, we introduced a novel method for carrying out power analyses for SMART designs which leverages multiple comparison with the best and Monte Carlo simulation. Our methodology directly addresses the central goal of SMARTs.
We explored the sensitivity of our method to varying parameters in the covariance matrix. We saw that the power prediction is greatly affected by the particular choice of the covariance matrix. This underscores the importance of relying on previous data, such as a pilot SMART, to estimate $\boldsymbol\Sigma$. We saw in simulation studies that our method yields valid estimates of power, and it appears similar results may be achieved by choosing an exchangeable covariance matrix that is close to the true covariance matrix.
We illustrated our method on the EXTEND SMART to see how much power there was to exclude inferior DTRs from the set of best and the necessary sample size to achieve $80\%$ power.
Future work will involve developing ways of choosing $\boldsymbol\Sigma$ using pilot SMART data and for sizing pilot SMARTs with the ability to estimate the covariance matrix to a specified accuracy.
\section*{Acknowledgments}
Conflict of Interest: None declared.
\section{Figures}
\begin{figure}[h]
\centering
\includegraphics[width = 15cm]{{figure1EXTEND}.pdf}
\caption{This diagram shows the structure of the EXTEND trial.}
\label{fig:EXTEND-figure}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width = 10cm, trim = 0cm 3cm 0cm 0cm]{{figure2threeDPlot}.pdf}
\caption{
This plot shows the 3D contours of the power (denoted by shade/color) as a function of $\rho_1,\rho_2,\sigma^2$ where $\boldsymbol\Sigma = \begin{pmatrix}1&\rho_1&0&0\\\rho_1&1&0&0\\0&0&1&\rho_2 \sigma\\0&0&\rho_2\sigma&\sigma^2\end{pmatrix}$ and $i=4$ is the best DTR. Here $\boldsymbol\Delta = (0.25, 0.25, 0.25, 0)$ and $\Delta_{\mathrm{min}} = 0.25$.
}
\label{fig:3D-Plot}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width = 8cm, trim = 0cm 1cm 0cm 0cm]{{figure3contour}.pdf}
\caption{$\boldsymbol\Sigma =\begin{pmatrix} 1&0&0&0\\0&1&0&\rho_1\\0&0&1&\rho_2\\0&\rho_1&\rho_2&1\end{pmatrix}$ and the fourth EDTR is best. The effect sizes are set to $\Delta = 0.25$.
Note that the power appears monotone with respect to $\rho_1$ and $\rho_2$. The finger-shaped boundary is due to the feasible region of values for $\rho_1$ and $\rho_2$ such that $\boldsymbol\Sigma$ is positive definite.
}
\label{fig:best-non-best-contour}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width = 9cm]{{figure4contourexchangeable}.pdf}
\caption{Monotonicity in exchangeable $\boldsymbol\Sigma$ case. $\boldsymbol\Sigma = \begin{pmatrix}\sigma^2&\rho\sigma^2&\rho\sigma^2&\rho\sigma^2\\\rho\sigma^2&\sigma^2&\rho\sigma^2&\rho\sigma^2\\\rho\sigma^2&\rho\sigma^2&\sigma^2&\rho\sigma^2\\\rho\sigma^2&\rho\sigma^2&\rho\sigma^2&\sigma^2\end{pmatrix}$
}
\label{fig:contour-exchangeable}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width = 15cm]{{figure5power-by-delta}.pdf}
\caption{The plot of power against the uniform effect size for $\boldsymbol\Sigma = \boldsymbol I_4,\boldsymbol\Sigma_{\mathrm{TRUE}}$ shows that as the effect size $\Delta \to \infty$, $\mathrm{Power}(\Delta) \to 1$ and, similarly, $\mathrm{Power}(\Delta) \to 0$ as $\Delta \to 0$. $\boldsymbol\Sigma_{\mathrm{TRUE}}$ is the true covariance matrix from simulation design 1. Furthermore, the plot demonstrates that the power is a monotone increasing function of the effect size $\Delta$. A proof can be derived using continuity and monotonicity of the probability measure.}
\label{fig:power-by-delta}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width = 8cm]{{figure6SMART-design-1-predicted}.pdf}
\includegraphics[width = 8cm]{{figure6SMART-design-2-predicted}.pdf}
\caption{The plots show the power as a function of the sample size $n$, with a horizontal line where the power is $80\%$. Each plot shows the power curves for $\boldsymbol\Sigma = \boldsymbol I_4$, $\boldsymbol\Sigma=\boldsymbol\Sigma_{\mathrm{True}}$,
and $\boldsymbol\Sigma = \boldsymbol\Sigma_{\mathrm{Exchangeable}}$, together with the empirical power curve.}
\label{fig:power-plot}
\end{figure}
\clearpage
\begin{figure}[t]
\centering
\includegraphics[width = 10cm]{{figure7EXTEND-power-by-delta-step3}.pdf}
\caption{This plot shows the power as a function of $\Delta_{\mathrm{min}}$ in the EXTEND trial when performing estimation with IPW and AIPW, respectively. There are 250 individuals in EXTEND.}
\label{fig:EXTEND-power-step}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width = 10cm]{{figure8EXTEND-power-by-equal-delta5}.pdf}
\caption{This plot shows the power as a function of the uniform effect size in the EXTEND trial when performing estimation with IPW and AIPW, respectively. There are 250 individuals in EXTEND.}
\label{fig:EXTEND-power-equal}
\end{figure}
\begin{table}[H]
\centering
\caption{EXTEND trial: parameter estimates and standard errors}
\begin{tabular}{c|c|cccccccc}
\hline
& Parameter& $\theta_1$ & $\theta_2$ & $\theta_3$ & $\theta_4$ & $\theta_5$ & $\theta_6$ & $\theta_7$ & $\theta_8$\\
\hline
IPW & Estimate & 7.56 & 9.53 & 8.05 & 10.02 & 7.71 & 9.68 & 8.19 & 10.17\\
&SD & 0.76 & 0.81 & 0.71 & 0.83 & 0.74 & 0.80 & 0.69 & 0.82 \\
\hline
AIPW & Estimate & 7.65 & 9.44 & 7.83 & 9.62 & 8.06 & 9.85 & 8.24 & 10.03\\
&SD & 0.67 & 0.76 & 0.70 & 0.70 & 0.67 & 0.77 & 0.70 & 0.72\\
\hline
\end{tabular}
\label{tab:1}
\end{table}
\newpage
\section*{Supplementary Material}
\section*{Appendix A.}
\section{Notation}
We focus on notation for two-stage SMART designs, but the methods in this paper are applicable to an arbitrary SMART. Let $\mathrm{EDTR}_i$ denote the $i$th EDTR. Let $O_j$ and $A_j$ denote the observed covariates and treatment assignment, respectively, at stage $j$. Let $\bar{O}_j$ and $\bar{A}_j$ denote the covariate and treatment histories up to and including stage $j$, respectively. Let $\mathcal{T}$, the \textit{treatment trajectory}, be the vector of counterfactual treatment assignments for an individual. For example, in a two-stage SMART with response as a tailoring variable, $\mathcal{T}$ may be of the form $\mathcal{T}=(A_1,A_2^{\mathrm{R}}, A_2^{\mathrm{NR}})$ where $A_2^{\mathrm{R}}$ is the stage two treatment assignment had the individual responded and $A_2^{\mathrm{NR}}$ is the stage two treatment assignment had the individual not responded. The reason these are counterfactual treatment assignments is that for an individual who responds to the stage 1 treatment, $A_2^{\mathrm{NR}}$ would be unobserved. Hence, the treatment history $\bar{A}_2$ would be $(A_1,A_2)$ while the treatment trajectory $\mathcal{T}$ would be $(A_1,A_2^{\mathrm{R}}, A_2^{\mathrm{NR}})$ and would include the unobserved counterfactual. Let $Y$ denote the observed outcome of an individual at the end of the study. A tailoring variable may be written as $V$ and is a function of the observed covariates and treatment assignments at each stage. Let $S$ be an indicator for randomization at stage 2. Then, the data structure for a two-stage SMART may be written $(O_1,A_1,O_2,S,A_2,Y)$ (\citealp{ertefaie2015}).
\section{Estimation}
We summarize the estimation procedures IPW (inverse probability weighting) and AIPW (augmented inverse probability weighting) introduced in \cite{ertefaie2015}.
In order to perform estimation with IPW/AIPW, a marginal structural model (MSM) must be specified. An MSM models the response as a function of the counterfactual random treatment assignments captured in the treatment trajectory vector $\mathcal{T}$, while ignoring non-treatment covariates. For example, in a two-stage SMART, the MSM is: $$m(\mathcal{T};\boldsymbol\beta) = \beta_0+\beta_1 A_1 +\beta_2 A_2^{R} +\beta_3 A_2^{NR}+\beta_4 A_1 A_2^{R}+\beta_5 A_1 A_2^{NR}.$$
Subsequently, the IPW and AIPW estimators $\hat{\boldsymbol\theta}_{\mathrm{IPW}}$, $\hat{\boldsymbol\theta}_{\mathrm{AIPW}}$ may be obtained by solving their respective estimating equations: \begin{equation}
\mathbb{P}_n\sum_{k=1}^K\dot{m}(\mathcal{T};\beta)w_2(V,\bar{A}_2,k)(Y-m(\mathcal{T};\beta))=0\tag{IPW}
\end{equation}
\begin{equation}\begin{split}
\mathbb{P}_n\sum_{k=1}^K & \dot{m}(\mathcal{T};\boldsymbol\beta)\big[w_2(V,\bar{A}_2,k)(Y-m(\mathcal{T};\boldsymbol\beta))\\&-\left(w_2(V,\bar{A}_2,k)-w_1(A_1,k)\right)\left(\mathbb{E}[Y \mid \bar{A}_2=\mathrm{EDTR}_k^V,\bar{O}_2]-m(\mathcal{T};\boldsymbol\beta)\right)\\&-(w_1(A_1,k)-1)\left(\mathbb{E}[\mathbb{E}[Y \mid \bar{A}_2=\mathrm{EDTR}_k^V,\bar{O}_2]\mid A_1 = \mathrm{EDTR}_{k,1},O_1]-m(\mathcal{T};\boldsymbol\beta)\right)\big] = 0
\end{split}\tag{AIPW}\end{equation}
where $\mathbb{P}_n$ denotes the empirical average,
$\dot{m}(\mathcal{T},\boldsymbol \beta) = \dfrac{\partial m}{\partial\boldsymbol \beta}$, $\mathrm{EDTR}_k^V = \left(\mathrm{EDTR}_{k,1},\mathrm{EDTR}_{k,2}^V\right)$, $w_1(a_1,k) = \dfrac{I_{\mathrm{EDTR}_{k,1}}(a_1)}{p(A_1 = a_1)}$ for $A_1 = a_1$, and $w_2(v,\bar{a}_2, k)=\dfrac{I_{\mathrm{EDTR}_{k,1}}(a_1)I_{\mathrm{EDTR}_{k,2}^v}(a_2)}{p(A_1=a_1)p(A_2=a_2 \mid A_1=a_1, V = v)}$ for $V = v$ and $\bar{A}_2 = \bar{a}_2$.
AIPW is doubly-robust in the sense that it will still provide unbiased estimates of the MSM coefficients $\boldsymbol\beta$ when either the conditional means or the treatment assignment probabilities are correctly specified.
The following proposition from \cite{ertefaie2015} is included for the sake of completeness.
\begin{prop}
Let $\hat{\theta}^{\diamond}=D\hat{\beta}^{\diamond}$, where $D$ is a $K \times p$ matrix with the $k$th row of $D$ being the contrast corresponding to the $k$th EDTR. Then, under standard regularity assumptions, $\sqrt{n}(\hat{\theta}^{\diamond}-\theta) \to N(0,\boldsymbol\Sigma^{\diamond}=D[\Gamma^{-1}\Lambda^{\diamond}\Gamma'^{-1}]D')$ where $\Gamma = -\mathbb{E}[\sum_{i=1}^K \dot{m}(\mathcal{T};\beta)\dot{m}'(\mathcal{T};\beta)]$ and $\Lambda^{\diamond}=\mathbb{E}[U^{\diamond}U'^{\diamond}]$ with
\begin{align*}
U^{\mathrm{AIPW}}&=\begin{aligned}
\sum_{k=1}^K& \dot{m}(\mathcal{T};\boldsymbol\beta)\big[w_2(V,\bar{A}_2,k)(Y-m(\mathcal{T};\boldsymbol\beta))-\left(w_2(V,\bar{A}_2,k)-w_1(A_1,k)\right)\left(\mathbb{E}[Y \mid \bar{A}_2=\mathrm{EDTR}_k^V,\bar{O}_2]-m(\mathcal{T};\boldsymbol\beta)\right)\\&-(w_1(A_1,k)-1)\left(\mathbb{E}[\mathbb{E}[Y \mid \bar{A}_2=\mathrm{EDTR}_k^V,\bar{O}_2]\mid A_1 = \mathrm{EDTR}_{k,1},O_1]-m(\mathcal{T};\boldsymbol\beta)\right)\big]\end{aligned}\\
U^{\mathrm{IPW}}&=\sum_{k=1}^K\dot{m}(\mathcal{T};\beta)w_2(V,\bar{A}_2,k)(Y-m(\mathcal{T};\beta)).
\end{align*}\\
The asymptotic variance $\boldsymbol\Sigma^{\diamond}$ may be estimated consistently by replacing the expectations with expectations with respect to the empirical measure and $(\beta,\gamma)$ with its estimate $(\hat{\beta}^{\diamond},\hat{\gamma})$, and may be denoted by $\hat{\boldsymbol\Sigma}^{\diamond}=D[\hat{\Gamma}^{-1}\hat{\Lambda}^{\diamond}\hat{\Gamma}'^{-1}]D'$.
\end{prop}
\section*{Appendix B.}
\section{Proof of Theorem 4.1}
Note that $\mathrm{Cov}\left(\dfrac{\sqrt{n}(\hat{\theta}_i-\hat{\theta}_N)}{\sigma_{iN}},\dfrac{\sqrt{n}(\hat{\theta}_j-\hat{\theta}_N)}{\sigma_{jN}}\right)=\dfrac{\Sigma_{ij}-\Sigma_{iN}-\Sigma_{jN}+\Sigma_{NN}}{\sigma_{iN}\sigma_{jN}}$.
Assume $\boldsymbol\Sigma$ is exchangeable, e.g., $\boldsymbol\Sigma = \sigma^2\boldsymbol I_{N} + \rho\sigma^2 \left( \mathbbm{1}_N \mathbbm{1}_{N}'-\boldsymbol I_N \right)$ where $\mathbbm{1}_N$ is a vector of $N$ $1's$ and $I_N$ is the $N$ by $N$ identity matrix.
Then, for all $i,j$, $\mathrm{Cov}\left(W_i,W_j\right)=\mathrm{Cov}\left(\dfrac{\sqrt{n}(\hat{\theta}_i-\hat{\theta}_N)}{\sigma_{iN}},\dfrac{\sqrt{n}(\hat{\theta}_j-\hat{\theta}_N)}{\sigma_{jN}}\right) = \dfrac{\rho\sigma^2 -2\rho \sigma^2 +\sigma^2}{2\sigma^2(1-\rho)}=\dfrac{\sigma^2(1-\rho)}{2\sigma^2 (1-\rho)}=\dfrac{1}{2}$ for all $\rho \in \left(-\dfrac{1}{N-1},1\right)$ and for all $\sigma^2$. Also, $c_{i,\alpha}$ is constant across all values of $\rho$ and $\sigma^2$.
It follows from Slepian's inequality and monotonicity of the probability measure that \\
$\displaystyle\mathrm{Power}_{\alpha,n}\left(\boldsymbol\Sigma, \boldsymbol\Delta, \Delta_{\mathrm{min}}\right) = \mathbb{P}_{\boldsymbol\Sigma}\left(\bigcap_{i=1}^{N-1}\left\{W_i <-c_{i,\alpha}+\dfrac{\Delta_i\sqrt{n}}{\sqrt{2\sigma^2(1-\rho)}}\right\}\right)$
is monotone increasing in $\rho$ and monotone decreasing in $\sigma^2$.
\\
\section{Simulation Study}
In this section of the supplementary material, we give additional details on how estimation was performed in the simulation study. The SMART designs in the simulation studies are based on the SMART designs of the simulation studies in \cite{ertefaie2015}. We estimated $\boldsymbol\theta$ and $\boldsymbol\Sigma$ using AIPW.
For SMART design 1, the MSM is $m(\mathcal{T};\beta)=\beta_0+\beta_1A_1+\beta_2A_2^{\mathrm{NR}}.$
The true conditional means are:
\begin{align*}
\mathbb{E}[Y \mid \bar{A}_2=\mathrm{EDTR}_k^V,\bar{O}_2] &= \gamma_0 +\gamma_1o_{11}+\gamma_2o_{12}+\gamma_3o_{21}+\gamma_4o_{22} + a_1(\gamma_5+\gamma_6o_{11})+\gamma_7I(o_{21}>0)a_2\\
\mathbb{E}[Y\mid A_1 = \mathrm{EDTR}_{k,1},O_1] &= \gamma_8+\gamma_9o_{11}+\gamma_{10}o_{12}+\gamma_{11}a_1+\gamma_{12}a_1o_{11}
\end{align*}
For SMART design 2, the MSM is:
\begin{equation*}
m(\mathcal{T};\boldsymbol\beta) = \beta_0+\beta_1I(A_1=-1)+I(A_1 = -1)[\beta_2I(A_2^B = 1)+\beta_3I(A_2^B=2)+\beta_4I(A_2^B=3)].
\end{equation*}
The true conditional means are:
\begin{align*}
\mathbb{E}[Y \mid \bar{a}_2=\mathrm{EDTR}_k^V,\bar{o}_2,\boldsymbol\gamma] = &\gamma_0+\gamma_1o_{11}+\gamma_2o_{12}+\gamma_3o_{21}+\gamma_4o_{22}+I(a_1=-1)(\gamma_5+\gamma_6o_{11}) \\
&+I(o_{21}>0)I(a_1=-1)[\gamma_7I(a_2=1)+\gamma_8I(a_2=2)+\gamma_9I(a_2=3)+\gamma_{10}o_{21}I(a_2=2)]\\
\mathbb{E}[Y\mid a_1 \in \mathrm{EDTR}_{k,1},o_1,\boldsymbol\gamma] = &\gamma_{11}+\gamma_{12}o_{11}+\gamma_{13}o_{12}+\gamma_{14}I(a_1=-1)+\gamma_{15}I(a_1=-1)o_{11}
\end{align*}
\begin{figure}[t]
\centering
\includegraphics[width = 15cm]{{figure9Balanced-SMART}.pdf}
\caption{SMART Design 1}
\label{fig:SMART-design-1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 15cm]{{figure10Unbalanced-SMART}.pdf}
\caption{SMART Design 2}
\label{fig:SMART-design-2}
\end{figure}
\subsection{Simulation SMART design 1: Determination of the Closest Exchangeable Matrix}
Let $\boldsymbol\Sigma$ be exchangeable and $\lVert\cdot\rVert$ denote the Frobenius norm. Then, $\boldsymbol\Sigma$ is of the form $\boldsymbol\Sigma = \sigma^2\boldsymbol I_{N} + \rho\sigma^2 \left( \mathbbm{1}_N \mathbbm{1}_{N}'-\boldsymbol I_N \right)$. Hence,
$$\left \lVert \boldsymbol\Sigma-\boldsymbol\Sigma_{\mathrm{True}}\right\rVert^2 = \left\lVert \sigma^2\boldsymbol I_{N} + \rho\sigma^2 \left( \mathbbm{1}_N \mathbbm{1}_{N}'-\boldsymbol I_N \right) - \boldsymbol\Sigma_{\mathrm{True}}\right\rVert^2 := \sum_{i = 1}^N (\sigma^2-\sigma_i^2)^2 + \underset{i\neq j}{\sum\sum}(\rho\sigma^2 - \sigma_{ij})^2$$
Then, \begin{align*} \dfrac{\partial}{\partial \sigma^2}\left\lVert \boldsymbol\Sigma-\boldsymbol\Sigma_{\mathrm{True}}\right\rVert^2 &= 2\sum_{i=1}^N(\sigma^2-\sigma^2_i)+2\rho\underset{i\neq j}{\sum\sum}(\rho\sigma^2-\sigma_{ij})=0 \\ \text{and }\dfrac{\partial}{\partial \rho}\left\lVert \boldsymbol\Sigma-\boldsymbol\Sigma_{\mathrm{True}}\right\rVert^2 &=2\sigma^2 \underset{i\neq j}{\sum\sum}(\rho\sigma^2-\sigma_{ij})=0. \end{align*}
Hence,
$$\sigma^2 = \dfrac{1}{N}\sum_{i=1}^N\sigma_i^2 \text{ and } \rho\sigma^2 = \dfrac{1}{N(N-1)}\underset{i \neq j}{\sum\sum} \sigma_{ij}$$
\subsection{Simulation 2: Determination of the closest exchangeable matrix}
We assumed a block exchangeable matrix for $\boldsymbol\Sigma$ of the form \begin{equation*}\boldsymbol\Sigma_{\mathrm{Exchangeable}}=\begin{pmatrix}\sigma_{1w}^2 &\rho_1\sigma_{1w}\sigma_{2w}&\rho_1\sigma_{1w}\sigma_{2w}&\rho_1\sigma_{1w}\sigma_{2w}&\rho_1\sigma_{1w}\sigma_{2w}\\
\rho_1\sigma_{1w}\sigma_{2w} &\sigma_{2w}^2&\rho_2\sigma_{2w}^2&\rho_2\sigma_{2w}^2&\rho_2\sigma_{2w}^2\\
\rho_1\sigma_{1w}\sigma_{2w}&\rho_2\sigma_{2w}^2&\sigma_{2w}^2&\rho_2\sigma_{2w}^2&\rho_2\sigma_{2w}^2\\
\rho_1\sigma_{1w}\sigma_{2w}&\rho_2\sigma_{2w}^2&\rho_2\sigma_{2w}^2&\sigma_{2w}^2&\rho_2\sigma_{2w}^2\\
\rho_1\sigma_{1w}\sigma_{2w}&\rho_2\sigma_{2w}^2&\rho_2\sigma_{2w}^2&\rho_2\sigma_{2w}^2&\sigma_{2w}^2
\end{pmatrix}
\end{equation*}
Note that
$$\left\lVert \boldsymbol\Sigma_{\mathrm{Exchangeable}}-\boldsymbol\Sigma_{\mathrm{True}}\right\rVert^2=(\sigma_{1w}^2-\sigma_1^2)^2+\sum_{i=2}^5(\sigma_{2w}^2-\sigma_i^2)^2 +2\sum_{j=2}^5(\rho_1\sigma_{1w}\sigma_{2w}-\sigma_{1j})^2+\underset{i\neq j \in \{2,3,4,5\}}{\sum\sum}(\rho_2\sigma_{2w}^2-\sigma_{ij})^2.$$
Hence,
\begin{align*}
\dfrac{\partial}{\partial\sigma_{1w}^2} \lVert\boldsymbol\Sigma_{\mathrm{Exchangeable}}-\boldsymbol\Sigma_{\mathrm{True}}\rVert^2 &= 2(\sigma_{1w}^2-\sigma_1^2) + 4\rho_1\sigma_{2w}\sum_{j=2}^5(\rho_1\sigma_{1w}\sigma_{2w}-\sigma_{1j}) = 0\\
\dfrac{\partial}{\partial\rho_1}\lVert\boldsymbol\Sigma_{\mathrm{Exchangeable}}-\boldsymbol\Sigma_{\mathrm{True}}\rVert^2 &=4\sigma_{1w}\sigma_{2w}\sum_{j=2}^5(\rho_1\sigma_{1w}\sigma_{2w}-\sigma_{1j}) = 0\\
\dfrac{\partial}{\partial\sigma_{2w}^2}\lVert\boldsymbol\Sigma_{\mathrm{Exchangeable}}-\boldsymbol\Sigma_{\mathrm{True}}\rVert^2 &=2\sum_{i = 2}^5(\sigma_{2w}^2-\sigma_i^2)+4\rho_1\sigma_{1w}\sum_{j=2}^5(\rho_1\sigma_{1w}\sigma_{2w}-\sigma_{1j})+2\rho_2\underset{i\neq j, 1}{\sum\sum}(\rho_2\sigma_{2w}^2-\sigma_{ij}) = 0\\
\dfrac{\partial}{\partial\rho_2}\lVert\boldsymbol\Sigma_{\mathrm{Exchangeable}}-\boldsymbol\Sigma_{\mathrm{True}}\rVert^2 &= 2\sigma_{2w}^2 \underset{i\neq j,1}{\sum\sum}(\rho_2\sigma_{2w}^2-\sigma_{ij}) = 0
\end{align*}
Hence,
\begin{align*}
\sigma_{1w}^2 &= \sigma_1^2\\
\sigma_{2w}^2 &= \dfrac{1}{4}\sum_{j=2}^5 \sigma_j^2\\
\rho_1\sigma_{1w}\sigma_{2w}&= \dfrac{1}{4}\sum_{j=2}^5\sigma_{1j}\\
\rho_2\sigma_{2w}^2 &= \dfrac{1}{12}\underset{j\neq i \in \{2,3,4,5\} }{\sum\sum}\sigma_{ij}
\end{align*}
\clearpage
\bibliographystyle{biorefs}
\section{Introduction}
The booming Internet of Things (IoT) network has raised a growing demand for real-time transmission of information, while the rapidly increasing dense deployment of communication devices creates a heavily interfering environment.
\textcolor{blue}{The Global System for Mobile Communications Association (GSMA)\cite{2021mobile} predicted that the number of connected IoT devices will increase from 13.1 billion in 2020 to 24 billion in 2025. Massive machine-type devices are connected to the cellular network, which increases the pressure on the network.
For massive machine-type communications, the existing frequency resources are not enough to realize orthogonal access for all devices.}
Device-to-device (D2D) communication, which is a key technology for supporting IoT scenarios, can reduce the pressure on the base station (BS) and enhance spectral efficiency by letting devices communicate directly in a frequency reuse pattern instead of via the BS.
Taking into account the scarcity of spectrum resources, most D2D communications operate in a frequency reuse mode, causing significant mutual interference when multiple links are activated at the same time. Clearly, D2D communication requires careful scheduling of the D2D links. The main research on D2D scheduling in wireless D2D networks focuses on the analysis and optimization of sum-rate\cite{FPLinQ}, throughput\cite{CachingPolicy}, user fairness\cite{Heuristic}, or traffic density\cite{8688635}, but these metrics are insufficient to characterize the freshness of information.
The freshness of information is becoming more and more important in real-time services provided by wireless D2D communications, such as real-time monitoring. When a device needs to transmit information but the D2D link is not scheduled, the freshness of the information decreases. Therefore, it is of great importance to design the D2D scheduling scheme in wireless D2D communications so as to keep the delivered information fresh.
Recently, the age of information (AoI) was proposed in \cite{Real-time} to characterize the freshness of information. AoI measures the time elapsed since the latest received update was generated, from the perspective of the receiver. \cite{Real-time} analyzes the AoI performance in queuing systems such as $M/M/1$ by adjusting the arrival rate under a first-come-first-served (FCFS) policy. Following this, plenty of research has been conducted on the analysis and minimization of the AoI in various systems, with methods including utilizing the last-come-first-served (LCFS) policy and discarding obsolete packets\cite{OntheAge}, adjusting the packet generation policy\cite{Update}, introducing packet deadlines to avoid prolonged blockage\cite{8323423}, and preemption policies to ensure the packet is up-to-date\cite{8445919} in the single source-destination system.
To extend the analysis of AoI to more general and large-scale systems, a scenario with multiple sources at a single-server queue is considered in \cite{8469047}, and parallel-server queues such as $M/M/c$ under a preemption policy are analyzed in \cite{8437907}. Considering actual channel interference, AoI in wireless communication systems has also been studied: \cite{8006541} analyzes the influence of code redundancy in erasure channels, carrier sense multiple access (CSMA) systems under battery lifetime constraints are considered in \cite{10.1145/3397166.3409125}, and online and offline scheduling policies to minimize the weighted sum average AoI are proposed in \cite{Scheduling} for networks in which only a single link can be scheduled to transmit in each slot.
Nevertheless, there are few works on the analysis and minimization of AoI in systems such as D2D where multiple direct communication links share the same spectrum and interfere with each other when they are activated simultaneously. Existing studies utilize stochastic geometry to model the spatial relationship of D2D devices\cite{mankar,OptimizingH,9042825}.
With this tool, \cite{mankar} statistically analyzes the distribution of the peak AoI and the conditional success probability in D2D scenarios, where two different queuing types are considered. \cite{OptimizingH} proposes a decentralized scheduling policy aiming at minimizing the peak AoI, with receivers utilizing the channel state information (CSI) of the links around their stopping sets. \cite{9042825} considers the uplink network peak AoI under time-triggered and event-triggered traffic in an IoT scenario with mutual interference.
However, scheduling problems for wireless networks with complex interference are usually non-convex and NP-hard\cite{7492912}, which drives researchers to employ machine learning and artificial intelligence to find locally optimal solutions\cite{HORNIK1989359}. \cite{8664604} exploits a spatial convolutional neural network to solve the sum-rate maximization problem in a dense wireless network with full frequency reuse. \cite{9376717} compares several different deep reinforcement learning methods for AoI minimization under resource constraints in multi-user networks using hybrid automatic repeat requests. \cite{9097584} considers an IoT system where the application can be updated only if all devices relevant to it transmit in a certain frame, and uses a deep Q network to learn the scheduling policy. \cite{wang2021distributed} proposes a distributed reinforcement learning (RL) approach for the sampling policy to minimize the weighted sum of AoI cost and energy consumption, which allows the edge devices to cooperatively find the optimal policy with local observations.
Motivated by this, we develop a novel D2D link scheduling policy based on deep learning for minimizing the overall average AoI of wireless D2D communications in this paper. Different from \cite{mankar}, where all links are assigned the same activation probability and the focus is on the overall analysis of the statistical spatial distribution, our scheduling policy can generate an activation probability for each link to minimize AoI. \cite{OptimizingH} decentralizes the scheduling decisions by excessively scaling down the problem and thus cannot make decisions jointly with other links, while our method performs the optimization using the global information of all links. Numerical results reveal that our deep learning algorithm performs better than these methods on the AoI minimization task. Besides, our joint scheduling algorithm can assign different importance weights to AoI and throughput for differentiated demands. The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We analyze the transmission success probability in spatio-temporally correlated queues, where the correlation is caused by spatially coupled interference and temporally dependent buffer states in the D2D network, taking path-loss and Rayleigh fading into account. We derive expressions for the overall average AoI and throughput of the network under a stationary randomized policy and formulate the AoI-throughput joint scheduling problem.
\item Inspired by \cite{8664604}, we exploit a deep learning approach to solve the non-convex scheduling problem. A neural network structure is proposed to learn the mapping from specific geographic location information (GLI) to the corresponding scheduling decisions, bypassing CSI estimation. To enable the implicit loss function to be trained by back-propagation, we derive a numerical solution for the gradients of this implicit expression.
\item We utilize the proposed neural network algorithm to explore the trade-off between network throughput and AoI by taking the importance weight as a network input.
After the neural network is well-trained, scheduling policy parameters can be output for different weights, packet generation rates, and D2D device distributions.
\item Numerical results show that the performance of the proposed AoI-based scheduling policy by using the neural network is close to that of the traditional algorithm under different network parameters, while the computational complexity is greatly reduced and CSI \textcolor{blue}{and path-loss are} not required. Besides, for a single link, AoI and throughput have no trade-off, while for a large-scale network, AoI tends to infinity when throughput is maximized. Scheduling to minimize AoI places greater emphasis on fairness, but sacrifices throughput performance. \textcolor{blue}{Finally, the optimal AoI is not achieved at the most frequent packet generation.}
\end{itemize}
The remainder of the paper is organized as follows: In Section \ref{system}, we establish the system model and introduce the stationary randomized policy. In Section \ref{problem}, we derive the expression of the D2D network state and formulate the joint scheduling optimization problem. In Section \ref{aoi}, we propose a deep learning method to solve this problem. Numerical results and conclusion are presented in Section \ref{numerical} and \ref{conclusion} respectively.
\section{System Model} \label{system}
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{layout2.eps}
\caption{Multiple D2D pairs in wireless D2D communications.}
\label{fig4}
\end{figure}
Consider a wireless network with $N$ independent D2D device links transmitting time-sensitive information, in which the devices are randomly distributed in a two-dimensional square region, and the distance between the transmitter and the receiver of each link varies, as shown in Fig. \ref{fig4}. We assume the network is static, that is, after the location initialization, the network topology remains unchanged. This assumption is reasonable since, in many practical systems, devices are fixed after deployment, and spatial dynamics are insignificant compared with the time scale of transmissions.
The system operates in a time-slotted fashion with time slot index $k\in\{1,2,\dots,K\}$. At the beginning of each time slot, new packets are generated at the transmitters according to a Bernoulli process with arrival rate $\xi \in (0,1]$. Each transmitter stores the most recent packet in a unit-size buffer and discards the undelivered one, which is called a last-come-first-serve policy with packet replacement (LCFS-PR) \cite{OntheAge}. Let $a_i(k)$ be an indicator function that is equal to 1 when link $i$ is scheduled to activate and $a_i(k)=0$ otherwise. Denote $\textbf{A}(k)=\{a_1(k), a_2(k),\dots, a_N(k) \}$ as the scheduling decisions of all $N$ links in time slot $k$. When scheduled to transmit, the transmitter sends a packet containing the real-time information after the packet generation process at the beginning of the slot. We assume that each time slot is long enough to transmit one packet, and packets that fail to be decoded at the receiver remain in the buffer. For the case where the buffer is empty, i.e., the buffered packet has been successfully transmitted and no fresher packet has been generated, the link will not be activated. We use $n_i(k)$ to indicate the buffer state of link $i$ at slot $k$, with $n_i(k) = 1$ for non-empty and $n_i(k) = 0$ for empty. When the buffer is empty, the link remains inactive regardless of the value of $a_i(k)$.
We consider full frequency reuse, that is, all D2D links share the same frequency band \cite{8664604, yang2021spatiotemporal}.
\textcolor{blue}{This model is based on the fact that the number of orthogonal spectrum resource blocks is insufficient to allocate interference-free communication environments for all D2D links. In addition, due to the close distance of D2D communication devices, multiple links sharing the same resource block with appropriate interference management technology can improve spectral efficiency.}
Under the assumption of frequency reuse, when a subset of links is scheduled simultaneously, interference with each other reduces the successful probability of decoding at the receiver. Given a specific scheduling decision $\textbf{A}(k)$, the signal-to-interference-plus-noise ratio (SINR) measured at the receiver $i$ in the time slot $k$ can be written as\cite{OptimizingH}
\begin{equation}
\mathrm{SINR}_i(k) = \frac {P_{tx}h_{ii}\lVert{d_{ii}}\rVert ^{-\alpha}a_i(k)n_i(k)}{\sum_{j\neq{i}}{P_{tx}h_{ji}\lVert{d_{ji}}\rVert ^{-\alpha}a_j(k)n_j(k)}+\sigma^2} , \label{0}
\end{equation}
where $P_{tx}$ is the transmission power of each transmitter, and $h_{ji}$ and $d_{ji}$ are the channel coefficient and distance from the transmitter of link $j$ to the receiver of link $i$, respectively. We assume $h_{ji}$ follows Rayleigh fading and model it as an independent random variable across links and time slots with $h_{ji} \sim \mathrm{exp}(1)$. $\alpha$ is the path-loss exponent and $\sigma^2$ denotes the power of the additive white Gaussian noise (AWGN).
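As a quick numerical illustration of \eqref{0}, the following sketch draws one slot of Rayleigh fading and evaluates the SINR at every receiver for a given decision and buffer state; all variable names and default parameter values are placeholders rather than the system parameters used later.
\begin{verbatim}
import numpy as np

def sinr_one_slot(d, a, n_buf, P_tx=1.0, alpha=3.0, noise=1e-6, seed=0):
    """SINR at each receiver for one slot (sketch of the SINR expression above).

    d[j, i] : distance from the transmitter of link j to the receiver of link i
    a, n_buf: 0/1 scheduling decisions a_i(k) and buffer states n_i(k)."""
    rng = np.random.default_rng(seed)
    d = np.asarray(d, dtype=float)
    active = np.asarray(a) * np.asarray(n_buf)
    h = rng.exponential(1.0, size=d.shape)           # Rayleigh fading power gains
    rx = P_tx * h * d ** (-alpha) * active[:, None]  # power at receiver i from tx j
    signal = np.diag(rx)
    interference = rx.sum(axis=0) - signal
    return signal / (interference + noise)
\end{verbatim}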
\subsection{Conditional success probability} \label{mu}
We consider that a transmission is successful when the SINR at the corresponding receiver is greater than a threshold value $\beta_i$. The transmission success probability $\mu_i(k)$ at link $i$ during time slot $k$ is given by
\begin{equation}
\mu_i(k) \triangleq \mathbb{P}[\mathrm{SINR}_i(k)>\beta_i] \label{1} .
\end{equation}
From \eqref{1}, it is clear that $\mu_i(k)$ depends on four parts: the scheduling decision, the buffer state, Rayleigh fading, and the spatial locations of the interfering transmitters, where the buffer state depends on the evolution of the system and the past scheduling decisions. Given a particular scheduling decision $\textbf{A}(k)$ and buffer state $\textbf{N}(k)=\{n_1(k), n_2(k),\dots, n_N(k) \}$, the randomness of the transmission success probability at link $i$ is determined only by Rayleigh fading. Thus, we can derive the conditional success probability for activated links as follows
\begin{align}
&\mu_i(k|\textbf{A}(k), \textbf{N}(k))=\mathbb{P}[\mathrm{SINR}_i(k)>\beta_i|\textbf{A}(k),\textbf{N}(k)] \nonumber\\
=&\mathbb{P}\left[h_{ii}>\frac{\beta_i(\sum_{j\neq{i}}{P_{tx}h_{ji}\lVert{d_{ji}}\rVert ^{-\alpha}a_j(k)n_j(k)}+\sigma^2)}{P_{tx}\lVert{d_{ii}}\rVert^{-\alpha}}\bigg|\textbf{A}(k),\textbf{N}(k)\right] \nonumber\\
=&\mathbb{E}\left[e^{\frac{-\beta_i\sigma^2}{P_{tx}\lVert{d_{ii}}\rVert^{-\alpha}}}\prod_{j\neq{i}}{\mathrm{exp}\left(-\frac{h_{ji}a_j(k)n_j(k)\beta_i\lVert{d_{ji}}\rVert^{-\alpha}}{\lVert{d_{ii}}\rVert^{-\alpha}}\right)}\bigg|\textbf{A}(k),\textbf{N}(k)\right] \nonumber\\
=&\rho_i\prod_{j\neq{i}}{\frac{1}{1+a_j(k)n_j(k)/D_{ji}}} \label{2},
\end{align}
where $\rho_i=\mathrm{exp}\left(-{\beta_i\sigma^2}/{P_{tx}\lVert{d_{ii}}\rVert^{-\alpha}}\right)$ represents the impact of AWGN on the success probability, and $D_{ji}={\lVert{d_{ii}}\rVert^{-\alpha}}/\left(\beta_i\lVert{d_{ji}}\rVert^{-\alpha}\right)$. The last equality follows by taking the expectation over $h_{ji}$ and noticing that the Rayleigh fading coefficients of different channels are independent and identically distributed (i.i.d.).
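A numerical companion to \eqref{2}: given the distance matrix, the SINR thresholds, and the current decision and buffer vectors, the following sketch returns the conditional success probability of every link (names and default values are illustrative).
\begin{verbatim}
import numpy as np

def cond_success_prob(d, a, n_buf, beta, P_tx=1.0, alpha=3.0, noise=1e-6):
    """Conditional success probabilities mu_i(k | A(k), N(k)) (sketch)."""
    d = np.asarray(d, dtype=float)
    beta = np.asarray(beta, dtype=float)
    active = np.asarray(a) * np.asarray(n_buf)
    d_ii = np.diag(d)
    rho = np.exp(-beta * noise / (P_tx * d_ii ** (-alpha)))  # AWGN term rho_i
    # D[j, i] = ||d_ii||^{-alpha} / (beta_i * ||d_ji||^{-alpha})
    D = d_ii[None, :] ** (-alpha) / (beta[None, :] * d ** (-alpha))
    factors = 1.0 / (1.0 + active[:, None] / D)
    np.fill_diagonal(factors, 1.0)                           # exclude j = i
    return rho * factors.prod(axis=0)
\end{verbatim}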
We can see from \eqref{2} that the conditional success probability is determined by the scheduling decisions $\textbf{A}(k)$ and the buffer state $\textbf{N}(k)$, where $\textbf{N}(k)$ is determined by the past system evolution of all links. This results in an extremely complex spatio-temporal coupling correlation among the $\mu_i(k)$.
The scheduling decision of any link will affect the transmission success probability of all other links, and then affect the buffer states in the subsequent time slot, which in turn affects the evolution of this link. It is hard to precisely evaluate the impact of specific decisions on the system and the subsequent evolution of the network dynamics. Therefore, we utilize the stationary randomized policy in Section \ref{policy} to schedule D2D links, which is similar to the random access network and slotted ALOHA protocol\cite{yang2021spatiotemporal}, to simplify the subsequent analysis.
\subsection{Age of Information}
The Age of Information represents the age of the latest information utilized by the receiver. The instantaneous AoI is updated when a new packet is successfully transmitted. Let $b_i(k)$ indicate whether the transmission is successful. The value of $b_i(k)$ depends on $a_i(k)$, $n_i(k)$ and $\mu_i(k)$. If link $i$ is scheduled to transmit at time slot $k$ and the buffer is non-empty, i.e., $a_i(k)=1$ and $n_i(k)=1$, then $b_i(k)=1$ with probability $\mu_i(k|\textbf{A}(k),\textbf{N}(k))$ and $b_i(k)=0$ with probability $1 - \mu_i(k|\textbf{A}(k),\textbf{N}(k))$. If link $i$ is not activated to transmit, i.e., $a_i(k)n_i(k)=0$, then $b_i(k)=0$. The conditional expectation of $b_i(k)$ can be obtained as $\mathbb{E}\{b_i(k)|\textbf{A}(k),\textbf{N}(k)\}=a_i(k)n_i(k)\mu_i(k|\textbf{A}(k),\textbf{N}(k))$. Substituting \eqref{2} into this equation, we have
\begin{equation}
\mathbb{E}\{b_i(k)|\textbf{A}(k),\textbf{N}(k)\}=\rho_ia_i(k)n_i(k)\prod_{j\neq{i}}{\frac{1}{1+a_j(k)n_j(k)/D_{ji}}}. \label{e3}
\end{equation}
Let $g_i(k)$ be the instantaneous AoI at the receiver of link $i$ at time slot $k$. $g_i(k)$ is updated at the beginning of each slot based on the transmission outcome of the previous slot and the buffer state after the potential packet arrival. Denote $\delta_i(k)$ as the instantaneous AoI from the perspective of the up-to-date packet generated at link $i$, which evolves as follows:
\begin{align}
\delta_i(k)=\begin{cases} 0 &, \mathrm{if}\ e_i(k)=1 \\ \delta_i(k-1)+1 &, \mathrm{if}\ e_i(k)=0, \end{cases} \label{n1}
\end{align}
where $e_i(k)$ denotes the packet generation process with $\mathbb{P}[e_i(k)=1] = \xi$. If the receiver successfully decodes the packet transmitted at time slot $k$, i.e., $b_i(k)=1$, the instantaneous AoI will be reset to $\delta_i(k) + 1$ at the next time slot $k+1$. In contrast, if link $i$ does not complete a successful transmission, then $g_i(k+1)=g_i(k)+1$. Therefore, the evolution of $g_i(k)$ follows
\begin{align}
g_i(k+1)=\begin{cases} \delta_i(k) + 1 &, \mathrm{if}\ b_i(k)=1 \\ g_i(k)+1 &, \mathrm{otherwise}. \end{cases} \label{e4}
\end{align}
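The recursions \eqref{n1} and \eqref{e4} can be implemented per link as in the following sketch, where the arrival indicator $e_i(k)$ and delivery indicator $b_i(k)$ are supplied as inputs (names are illustrative).
\begin{verbatim}
def update_age(g, delta, e, b):
    """One-slot AoI update for a single link (sketch).

    g, delta : receiver AoI g_i(k) and age delta_i(k-1) of the freshest buffered packet
    e, b     : packet-generation indicator e_i(k) and successful-delivery indicator b_i(k)
    Returns (g_i(k+1), delta_i(k))."""
    delta_next = 0 if e else delta + 1        # packet-side age, reset on a new arrival
    g_next = delta_next + 1 if b else g + 1   # receiver AoI, reset after a delivery
    return g_next, delta_next
\end{verbatim}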
Note that $g_i(k)$ updates at the next slot while $\delta_i(k)$ updates at the generation slot. Fig. \ref{fig-age} depicts an example of the age evolution of link $i$. $X_i$ is the interval between packet generations. $Y_i$ and $S_i$ are the queue empty time and the packet transmission time, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{age.pdf}
\caption{An example of the AoI evolution of link $i$. Solid and dashed lines represent the AoI from the view of receiver and the up-to-date packet respectively. }
\label{fig-age}
\end{figure}
We use the average AoI, denoted by $\Delta_i = \lim_{K \to\infty}\mathbb{E}\left[\sum_{k=1}^{K}g_i(k)\right]/K$, to characterize the overall freshness of the time-sensitive information that the receiver at link $i$ cares about. The expectation in the equation is with respect to the randomness of the scheduling decisions and the channel during the whole $K$ time slots. To evaluate the freshness of all $N$ links in the whole D2D network, we use the overall average AoI to reflect fairness, given by
\begin{equation}
\varDelta^{\mathrm{Ave}} = \frac{1}{N}\mathbb{E}\left[\sum_{i=1}^{N}\Delta_i\right] = \lim_{K \to\infty}\frac{1}{KN}\mathbb{E}\left[\sum_{i=1}^{N}\sum_{k=1}^{K}g_i(k)\right] \label{e6}.
\end{equation}
\subsection{Throughput}
We assume that the packets delivered by each link are standardized to a uniform packet size\cite{5061954}. The throughput is normalized to $1$ for each successful transmission and $0$ for a failed delivery attempt, which is equivalent to the indicator $b_i(k)$ defined above. We denote the long-term average throughput of link $i$ as $T_i = \lim_{K \to\infty}\mathbb{E}\left[\sum_{k=1}^{K}b_i(k)\right]/K$ and the overall average throughput of all links as follows:
\begin{equation}
T^{\mathrm{Ave}} = \frac{1}{N}\mathbb{E}\left[\sum_{i=1}^{N}T_i\right] = \lim_{K \to\infty}\frac{1}{KN}\mathbb{E}\left[\sum_{i=1}^{N}\sum_{k=1}^{K}b_i(k)\right] \label{e66}.
\end{equation}
\subsection{Scheduling Policy and Optimization Objective} \label{policy}
In this paper, we focus on the design of the scheduling policy for the decision of $\textbf{A}(k)$ to minimize the overall average AoI $\Delta^{Ave}$ and maximize the overall average throughput $T^{Ave}$ simultaneously, which is a multi-objective optimization problem.
Due to the coupled spatio-temporal correlation of transmission success probabilities in the D2D network, making scheduling decisions requires collecting and processing the $N^2$ cross-channel state information terms and the $N$ instantaneous AoI and buffer states generated by the $N$ direct links. For large-scale networks, real-time dynamic decision-making can cause enormous communication and computation overhead, which is expensive for both centralized and distributed scheduling in time and power \cite{8340813}. Besides, owing to this coupling correlation, an instantaneous scheduling decision has an impact on the subsequent state of the system that is difficult to evaluate.
Therefore, we employ a stationary randomized policy to schedule the D2D links by considering a static problem. This policy has been discussed in other scheduling scenarios \cite{Scheduling,8943134}, and its related practical implementations are random access networks and slotted ALOHA. Under this policy, at each time slot, link $i$ has a time-invariant activation probability $p_i$ independent of the other links. The stationary randomized policy is denoted by $\pi = \{p_1, p_2,\dots, p_N\}$ and $\Pi$ is the class of all such policies, $\pi\in\Pi$.
Then the scheduling decisions $\textbf{A}(k)$ under this policy turn into random variables with Bernoulli distribution, i.e., $a_i(k)=1$ with probability $p_i$ across all time slots while $a_i(k)=0$ with probability $1-p_i$.
Note that $\pi$ is a static scheduling policy and does not need to collect and process system information in each time slot, although it sacrifices some potential performance.
We define this multi-objective optimization problem under the stationary randomized policy as:
\begin{subequations}\label{151}
\begin{align}
&\min_{\pi\in\Pi} \ \ \Delta^{Ave} \\
&\max_{\pi\in\Pi} \ \ T^{Ave} \ \ \ \label{15a} \\
&\ \mathrm{s.t.}\ \ 0<p_i\leq1, \ \forall i \in \{1,2,\dots,N\},\label{15b}
\end{align}
\end{subequations}
The goal of solving this problem is to obtain all the Pareto optimal vectors $\{\Delta^{Ave}, T^{Ave}\}$, defined as the vectors for which no other vector achieves both a lower $\Delta^{Ave}$ and a higher $T^{Ave}$ simultaneously. The set of all the Pareto optimal vectors is the Pareto front (PF), which is equivalent to the trade-off curve between information freshness and throughput. To obtain the PF, we convert the multi-objective optimization problem into a scalar optimization problem using the weighted sum approach\cite{MOEA}, which considers an affine combination of $\Delta^{Ave}$ and $1/T^{Ave}$ as the objective function. Thus, the scalar optimization problem can be written as
\begin{subequations}\label{obj}
\begin{align}
\min_{\pi\in\Pi}\ \ &\mathcal{P}(\lambda)=\lambda\Delta^{Ave}+(1-\lambda)\frac{1}{T^{Ave}} \label{eq15a} \\
\mathrm{s.t.}\ \ &0<p_i\leq1, \ \forall i \in \{1,2,\dots,N\},\label{eq15b}
\end{align}
\end{subequations}
where $\lambda$ and $1-\lambda$ are the weights for $\Delta^{Ave}$ and $1/T^{Ave}$ respectively, $\lambda\in[0,1]$.
We can obtain the scheduling policy parameters for the D2D network under different importance assignments between information freshness and throughput by solving this problem with different $\lambda$. For each $\lambda\in[0,1]$, the optimal solution of the scalar optimization problem forms a Pareto optimal vector.
It is worth noting that although this is a static problem, its optimization is still difficult. Specifically, the spatio-temporal correlation of link states makes it hard to obtain a precise closed-form expression for this problem, and even if the expression is obtained, the problem is still non-convex. In Sections \ref{problem} and \ref{dl}, we derive an implicit expression by introducing the mean-field assumption and solve this problem with a deep learning algorithm, respectively.
\section{Problem formulation} \label{problem}
In this section, we derive the transmission success probability, buffer non-empty probability, overall average AoI, and overall average throughput under the mean-field assumption and formulate the optimization problem. We also discuss the special case of packet arrival rate $\xi = 1$, in which the buffers are always non-empty, and introduce an iterative approach to solve this special case for comparison.
\subsection{Queue state derivation}
With the employment of the stationary randomized policy, the evolution and dynamics of the network are determined by the randomness of the policy decisions. However, as mentioned in Section \ref{mu}, the interaction between links causes spatio-temporal correlation of the node states, which makes statistical analysis difficult.
We introduce the mean-field assumption \cite{15561801, yang2021spatiotemporal} to enable the subsequent derivation. This assumption allows an interacting particle system to represent its time-varying, mutually correlated properties by their time-average states. Specifically, we assume the buffer state of each link is independent of the other links and varies over time according to its steady-state non-empty probability \cite{8688635}, denoted by $\nu_i = \lim_{K \to\infty}\sum_{k=1}^{K}n_i(k)/K \in (0,1]$. Accordingly, the evolution of the queue state of each link can be regarded as independent of the others, which allows us to analyze the transmission process of each link separately.
\emph{1) Transmission success probability:} Based on the non-empty probabilities, we can derive the expectation of the transmission success probability in the steady state as follows
\begin{align}
\mu_i\triangleq\lim_{k\to\infty}\mathbb{E}[\mu_i(k)|\pi]=&\lim_{k\to\infty}\mathbb{E}\left[\rho_i\prod_{j\neq{i}}{\frac{1}{1+a_j(k)n_j(k)/D_{ji}}}\bigg|\pi\right] \nonumber \\
=&\rho_i\prod_{j\neq{i}}{\left(1-\frac{p_j\nu_j}{1+D_{ji}}\right)}, \label{mupi}
\end{align}
where the equality holds by taking the expectation over the i.i.d. random variables $a_j(k)$ and $n_j(k)$. Therefore, under the stationary randomized policy and the mean-field assumption, the packet transmission process of link $i$ is a Bernoulli process with parameter $p_i\mu_i$ and its transmission time is geometrically distributed with the same parameter.
\emph{2) Buffer non-empty probability:} To obtain the transmission success probability for a particular policy parameter $\pi$, we still need to calculate the buffer non-empty probabilities $\{\nu_i\}_{i=1}^{N}$. Benefiting from the independence resulting from the mean-field assumption, we can derive the buffer non-empty probability of each link separately from its own evolution. As depicted in Fig. \ref{fig-age}, the buffer non-empty probability equals the ratio of the packet transmission time $S_i$ to the interval between two successful transmissions $Y_i + S_i$: $\nu_i = \mathbb{E}[S_i]/\mathbb{E}[Y_i+S_i]$.
Note that $Y_i$ is the buffer empty time, which equals the number of consecutive time slots without packet generation, i.e., $\mathbb{E}[Y_i]= \mathbb{E}[X_i] - 1$. Since $X_i$ and $S_i$ are geometrically distributed with parameters $\xi$ and $p_i\mu_i$ respectively, $\nu_i$ can be derived as follows:
\begin{align}
\nu_i = \frac{\frac{1}{p_i\mu_i}}{\frac{1}{\xi} - 1 + \frac{1}{p_i\mu_i}} = \frac{\xi}{\xi + (1 - \xi)p_i\mu_i} \label{nu}
\end{align}
\emph{3) Age of information and throughput:} Based on \eqref{mupi} and \eqref{nu}, we can derive the expressions of the two metrics we adopt in this paper as follows:
\begin{align}
\varDelta^{\mathrm{Ave}}& = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{\xi} + \frac{1}{p_i\mu_i}-1\right] \label{aoi_all}\\
T^{\mathrm{Ave}}& = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{\xi p_i \mu_i}{\xi + (1 - \xi)p_i\mu_i}\right]. \label{thr_all}
\end{align}
\begin{proof}
See Appendix \ref{app1}.
\end{proof}
\subsection{Problem formulation}
Substituting \eqref{mupi}-\eqref{thr_all} into \eqref{obj}, we formulate the optimization problem as follows:
\begin{subequations}\label{problem2}
\begin{align}
\min_{\pi\in\Pi}\ \ &\mathcal{P}(\lambda)=\frac{\lambda}{N}\sum_{i=1}^{N}\left[\frac{1}{\xi} + \frac{1}{p_i\mu_i}-1\right]+\frac{1-\lambda}{\frac{1}{N}\sum_{i=1}^{N}\left[\frac{\xi p_i \mu_i}{\xi + (1 - \xi)p_i\mu_i}\right]} \label{eq16a} \\
\mathrm{s.t.}\ \ & \eqref{mupi}, \ \eqref{nu}, \\
& 0<p_i\leq1, \ \forall i \in \{1,2,\dots,N\}.\label{eq16b}
\end{align}
\end{subequations}
It is worth noting that the network queue dynamics gradually converge to the steady state determined by the specific policy parameters $\pi$, which implies that $\{\nu_i\}_{i=1}^{N}$ and $\{\mu_i\}_{i=1}^{N}$ are both functions of $\{p_i\}_{i=1}^{N}$. Nonetheless, the nesting of $\{\nu_i\}_{i=1}^{N}$ and $\{\mu_i\}_{i=1}^{N}$ makes it hard to derive their explicit expressions. Consequently, the influence of the optimization variables $\{p_i\}_{i=1}^{N}$ on $\mathcal{P}(\lambda)$ is very difficult to analyze. To solve this problem, we propose a neural network structure that constitutes the mapping from the D2D network parameters to the optimal solution of \eqref{problem2} and train the neural network in an unsupervised manner in Section \ref{dl}.
It can be seen from \eqref{a_i} and \eqref{thr_i} that, for a single link, the long-term throughput is the reciprocal of the average AoI, so the average AoI decreases as the throughput increases. Obviously, there is no trade-off between their optimizations under the LCFS-PR Geo/Geo/1 queue for a single link, and they reach the optimum simultaneously when $p_i\mu_i\to \sup\{p_i\mu_i\}$. Nonetheless, when it comes to scheduling for the whole network performance, the optimal point $\sup\{p_i\mu_i\}$ of each link $i$ cannot be achieved simultaneously. The trade-off between $\Delta^{Ave}$ and $T^{Ave}$ mainly depends on the competition for network resources and the coupling correlation between links. Intuitively, maximizing $T^{Ave}$ drives links with better channel conditions to achieve higher throughput, while links with worse channels are suppressed. Since $T_i \in [0,1]$, the average AoI of links with bad channels, which is the reciprocal of $T_i$, increases sharply and results in poor $\Delta^{Ave}$ performance. More specific details on the freshness-throughput trade-off are illustrated in Section \ref{numerical}.
\subsection{Special case and an iterative approach} \label{sp}
We consider a special case of problem \eqref{problem2} with packet generation rate $\xi =1$. In this case, each buffer has the freshest packet available for transmission at every time slot, which means that the buffer non-empty probability is $1$. Consequently, the transmission success probability $\mu_i$ at each time slot can be derived precisely by the explicit expression $\mu_i=\rho_i\prod_{j\neq{i}}{\left(1-\frac{p_j}{1+D_{ji}}\right)}$ without the mean-field assumption, owing to the Rayleigh fading and the link activations being i.i.d. The optimization problem can then be written in explicit form as follows:
\begin{subequations}\label{problem3}
\begin{align}
\min_{\pi\in\Pi}\ \ &\hat{\mathcal{P}}(\lambda)=\frac{\lambda}{N}\sum_{i=1}^{N} \frac{1}{p_i\rho_i\prod_{j\neq{i}}{\left(1-\frac{p_j}{1+D_{ji}}\right)}}+\frac{1-\lambda}{\frac{1}{N}\sum_{i=1}^{N}p_i\rho_i\prod_{j\neq{i}}{\left(1-\frac{p_j}{1+D_{ji}}\right)}} \label{eq151} \\
\mathrm{s.t.}\ \ & 0<p_i\leq1, \ \forall i \in \{1,2,\dots,N\}.\label{eq152}
\end{align}
\end{subequations}
Before introducing the deep learning algorithm, we first propose an iterative approach to obtain a good locally optimal solution of problem \eqref{problem3}, which we use as a comparison for the proposed deep learning algorithm in Section \ref{dl}. The algorithm optimizes the activation probability of one link at a time while keeping the other links fixed; one iteration consists of this univariate optimization step performed once for every link.
Specifically, \eqref{eq151} is a non-negative linear combination of convex functions when only the activation probability of a single link is optimized with the other links fixed, so each univariate subproblem can be solved by bisection or by a convex optimization solver such as CVX. In each step of the iteration, the activation probabilities of the other links take the latest values obtained in the previous steps, so $\hat{\mathcal{P}}(\lambda)$ is non-increasing and finally converges to a locally optimal value after several iterations; a sketch is given below.
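The following Python sketch illustrates this block-coordinate descent on $\hat{\mathcal{P}}(\lambda)$ for the special case $\xi=1$; a simple grid search stands in for the exact one-dimensional convex solver, and \texttt{D[j, i]} stores the term $D_{ji}$ (all names are illustrative):
\begin{verbatim}
import numpy as np

def objective(p, rho, D, lam):
    # hat{P}(lambda) in the special case xi = 1
    N = len(p)
    mu = np.empty(N)
    for i in range(N):
        factors = 1.0 - p / (1.0 + D[:, i])    # D[j, i] stores D_{ji}
        factors[i] = 1.0                       # exclude j = i
        mu[i] = rho[i] * np.prod(factors)
    rate = p * mu
    return lam * np.mean(1.0 / rate) + (1.0 - lam) / np.mean(rate)

def iterative_minimization(rho, D, lam, n_iter=20, n_grid=200):
    # optimize one p_i at a time with the other links fixed
    N = len(rho)
    p = np.ones(N)
    grid = np.linspace(1e-3, 1.0, n_grid)
    for _ in range(n_iter):
        for i in range(N):
            values = []
            for c in grid:
                p_try = p.copy()
                p_try[i] = c
                values.append(objective(p_try, rho, D, lam))
            p[i] = grid[int(np.argmin(values))]
    return p
\end{verbatim}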
This approach is difficult to apply because it requires a centralized collection of all $N^2$ channel gain information and a large amount of computation. We only use it as a comparison for the deep learning algorithm proposed in the next Section.
\section{Deep Learning Algorithm} \label{dl}
\textcolor{blue}{When utilizing a traditional algorithm to solve link scheduling problems, the path loss of each channel must be obtained first. However, for $N$ links interfering with each other in a D2D fashion, the path-loss matrix consists of $N^2$ elements to be processed, and therefore the computational complexity of a traditional approach is at least $O(N^2)$. Meanwhile, the computation and collection of path loss are also expensive and resource-consuming.}
Inspired by \cite{8664604}, where the authors utilize a spatial convolutional neural network to learn the mapping from the $N$ geographic location information (GLI) entries to the optimal schedule of the sum-rate maximization problem, we propose a neural network structure to obtain the AoI-throughput trade-off curve with a similar idea. This technique greatly reduces the amount of information required to make decisions, and GLI can be obtained easily via GPS.
\textcolor{blue}{In fact, the $N^2$ CSI entries required to schedule the links are determined by Rayleigh fading and path loss in our model, where the Rayleigh fading is captured by its distribution and the path loss can be calculated from the GLI of each link.
Thus, the mapping from GLI to the optimal solution can be learned by the neural network without estimating the CSI.}
Different from \cite{8664604}, the objective function of our joint AoI and throughput scheduling problem cannot be expressed explicitly and is therefore hard to optimize directly by gradient descent. We design a different training approach to make the neural network work for this problem. Besides, since the packet generation parameter and the AoI-throughput trade-off weight vary, we train the neural network to learn the mapping from any \{GLI, $\xi$, $\lambda$\} combination to the best policy parameters $\pi$. The specific network structure and its operation are introduced in the remainder of this section and depicted in Fig. \ref{fig2}.
\subsection{Input pre-processing}
We assume that the D2D plane is a square with side length $L$ and define the GLI as a set of vectors $\{(x_i^\mathrm{tx},y_i^\mathrm{tx}),(x_i^\mathrm{rx},y_i^\mathrm{rx})\}_{i=1}^N$ containing the geographic location coordinates of the transmitters and receivers, with coordinate values ranging from $0$ to $L$.
To input the GLI into the neural network, we preprocess it into matrices of a more appropriate form. Specifically, we divide the D2D plane into square grids of size $M\times M$, which is also the size of the device matrices transformed from the GLI. Denote by $G^{i}_{TX}$ and $G^{i}_{RX}$ the transmitter matrix and the receiver matrix of link $i$ respectively; both are one-hot matrices indicating the device locations, defined as follows
\begin{align}
G^{i}_{TX}(x,y)=\begin{cases} 1, & \mathrm{if}\ (x,y)=\lceil (x_i^\mathrm{tx},y_i^\mathrm{tx} ) \cdot M / L \rceil \\ 0, & \mathrm{otherwise}. \end{cases} \label{onehot}
\end{align}
It is worth noting that unactivated links have no effect on the network, and the activation probability $p_i$ indicates how frequently link $i$ is activated. We define the network GLI matrix under a particular scheduling decision as
\begin{align}
G^{TX}(x,y)=\sum_{i=1}^{N}p_iG^{i}_{TX}(x,y), \label{gtx2}
\end{align}
and the receiver matrix is defined analogously. The activation probabilities $\{p_i\}_{i=1}^N$ are used to construct the matrix so that it reflects the effect of the activation probabilities on other devices, e.g., transmitters with higher $p_i$ cause more interference to other links. Therefore, $G_{TX}$ and $G_{RX}$ can be regarded as feature matrices that contain most of the GLI information required to solve \eqref{problem2}. It is worth noting that this preprocessing of the GLI yields a fixed input form regardless of the size of $N$ and produces matrices suitable for passing through the convolution layers.
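A minimal sketch of this preprocessing step (illustrative Python, with \texttt{coords} holding the transmitter or receiver coordinates of the $N$ links):
\begin{verbatim}
import numpy as np

def gli_grid(coords, p, M, L):
    # Weighted one-hot grid of Eq. (gtx2): cell indices follow Eq. (onehot),
    # each link contributes its activation probability p_i to its own cell.
    G = np.zeros((M, M))
    cells = np.ceil(np.asarray(coords) * M / L).astype(int) - 1  # 1-based -> 0-based
    cells = np.clip(cells, 0, M - 1)
    for (x, y), w in zip(cells, p):
        G[x, y] += w   # links falling in the same cell accumulate their weights
    return G
\end{verbatim}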
\addtolength{\topmargin}{0.36cm}
\begin{figure}[t]
\centering
\includegraphics[width=14cm]{conv1.pdf}
\caption{An example of a transmitter grid passing through the convolution stage. In every output matrix, each link extracts a number from the red cell, which is the location of the receiver of this link, as a feature of the GLI of the nearby transmitters.}
\label{fig1}
\end{figure}
\subsection{Convolution Stage}
Different from \cite{8664604}, which uses a single convolution layer to simulate the total interference caused by other devices, we regard the convolution layers as a way to extract features from the GLI of the transmitters around each receiver and of the receivers around each transmitter. Experiments show that, when deployed to solve \eqref{problem2}, our structure outputs solutions with performance similar to the reference network, but our training is more stable and the convergence rate is greatly improved.
We use three connected convolution layers to extract GLI features, as depicted in the example in Fig. \ref{fig1}, and output a set of features for each link after each convolution layer. Fig. \ref{fig1} illustrates the processing of a transmitter grid $G_{TX}$ using arbitrarily chosen data. The transmitter grid passes through the three convolution layers in turn with padding and outputs three matrices of the same size. Each element in an output matrix is obtained by a single convolution of the filter centered at the same index of the input matrix. Viewing this index as the location of a receiver, the convolution can be regarded as extracting features from all transmitters within the filter's range around this receiver. Assume that the sizes of the three convolution filters are $n_1,n_2,n_3$, respectively. The three connected convolution layers enlarge the scope of feature extraction in stages, and each element of the last output matrix covers original grid information within a window of size $n_1+n_2+n_3-2$.
The convolution stage operates symmetrically on the receiver grid $G_{RX}$ with the same convolution filters to extract GLI features of the receivers around a particular transmitter. $G_{TX}$ and $G_{RX}$ pass through the convolution stage in parallel and each generates three output matrices. Each link in the D2D plane thus extracts six features in total from these matrices, indexed by the location of its own receiver or transmitter.
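A minimal TensorFlow sketch of the convolution stage and the per-link feature extraction (the grid size and $15\times 15$ filters follow the configuration reported in Section~\ref{numerical}; the gather step is only indicated schematically):
\begin{verbatim}
import tensorflow as tf

M = 125                                     # GLI grid size
inp = tf.keras.Input(shape=(M, M, 1))       # G_TX (or G_RX) as a one-channel image
c1 = tf.keras.layers.Conv2D(1, 15, padding="same")(inp)
c2 = tf.keras.layers.Conv2D(1, 15, padding="same")(c1)
c3 = tf.keras.layers.Conv2D(1, 15, padding="same")(c2)
conv_stage = tf.keras.Model(inp, [c1, c2, c3])

# Each link then reads one value per output map at the grid cell of its own
# receiver (for G_TX) or transmitter (for G_RX), e.g. with tf.gather_nd:
#   feats = [tf.gather_nd(out[0, :, :, 0], rx_cells) for out in conv_stage(g_tx)]
\end{verbatim}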
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Structure.pdf}
\caption{The overall structure of the neural network. }
\label{fig2}
\end{figure}
\subsection{Fully Connected Stage}
Similar to a general neural network structure, we feed the intermediate features into a fully connected network to generate output of the desired form. The fully connected network consists of two hidden layers with rectified linear unit activations and finally outputs the activation probability $p_i\in(0,1)$ through a sigmoid function. For a D2D layout with $N$ links, the feature vectors of all links pass through the fully connected layers separately, yielding the activation probability vector $\pi$ at the end.
In addition to the six GLI features from the convolution stage, we input four other features into the fully connected stage. The first is the activation probability $p_i$ output by the previous iteration, which is used to strengthen the feedback. The second is the distance between the receiver and the transmitter of the link, denoted by $d_i^{TR}=\Vert(x_i^\mathrm{tx}-x_i^\mathrm{rx},y_i^\mathrm{tx}-y_i^\mathrm{rx})\Vert_2$; this is needed because the feature vector output by the convolution stage only characterizes the GLI around the transmitter and receiver of the concerned link, while the relationship between the transmitter and receiver of the link itself is an important feature the neural network still needs to learn. The third and fourth are $\xi$ and $\lambda$ respectively, which identify the different requirements on the output solution. With these two features as inputs, the neural network does not need to be retrained for different packet generation rates $\xi$ and AoI-throughput trade-off weights $\lambda$.
\subsection{Iterative fashion}
The entire network is constituted by connecting the parts introduced above through a feedback loop. The network works in an iterative feedback fashion because a single forward pass is not sufficient to learn the mapping from the GLI to the optimal policy parameters. We feed the output activation probabilities $\{p_i\}_{i=1}^N$ back into the pre-processing stage to form a new GLI matrix via \eqref{gtx2}; $\{p_i\}_{i=1}^N$ is also fed back to the fully connected stage in the next iteration as part of the feature vectors. In this way, each iteration refines the scheduling policy parameters $\{p_i\}_{i=1}^N$ of the previous iteration.
Regardless of how $\{p_i\}_{i=1}^N$ changes during the iterations, the $G^{i}_{TX}(x,y)$ and $G^{i}_{RX}(x,y)$ used to construct $G^{TX}$ and $G^{RX}$ are fixed, which means that no matter which intermediate $\{p_i\}_{i=1}^N$ the iteration passes through, the final output $\{p_i\}_{i=1}^N$ should be the same. Owing to the strong approximation ability of the neural network, we find experimentally that the output converges after several iterations once the network is well trained. Consequently, we simply set all initial $\{p_i\}_{i=1}^N$ to $1$ without loss of generality.
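A sketch of this feedback loop is given below (Python; \texttt{gli\_grid} refers to the preprocessing sketch above, while \texttt{conv\_features} and \texttt{fc\_stage} are placeholders for the trained convolution and fully connected stages):
\begin{verbatim}
import numpy as np

def feedback_schedule(tx_coords, rx_coords, d_tr, xi, lam,
                      conv_features, fc_stage, M, L, num_rounds=5):
    # Iterative feedback: rebuild the weighted grids from the current p,
    # extract features, and refine p with the fully connected stage.
    N = len(d_tr)
    p = np.ones(N)                               # initial activation probabilities
    for _ in range(num_rounds):
        G_tx = gli_grid(tx_coords, p, M, L)
        G_rx = gli_grid(rx_coords, p, M, L)
        gli_feats = conv_features(G_tx, G_rx)    # 6 GLI features per link
        feats = np.column_stack([gli_feats, p, d_tr,
                                 np.full(N, xi), np.full(N, lam)])
        p = fc_stage(feats)                      # refined activation probabilities
    return p
\end{verbatim}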
\subsection{Network Training}
The optimal solution $\pi^*$ of \eqref{problem2} can be seen as a function of the D2D system parameters. We use $\textbf{C}$ to represent these parameters, so that $\pi^* = \arg \min\{\mathcal{P}(\pi|\textbf{C})\}$, where $\mathcal{P}(\pi|\textbf{C})$ is the objective function of \eqref{problem2} with all D2D system parameters collected in $\textbf{C}$. The neural network is essentially a function mapping inputs to outputs, i.e., $\pi = g(\textbf{w}, \textbf{b},\textbf{C})$, where $\textbf{w}, \textbf{b}$ are the neural network parameters to be trained and $g(\cdot)$ represents the structure of the neural network. The training process finds the optimal $\textbf{w}^*, \textbf{b}^*$ to approximate $\pi^*$, that is, $\textbf{w}^*, \textbf{b}^* = \arg \min\{\mathcal{P}(g(\textbf{w}, \textbf{b},\textbf{C})|\textbf{C})\}$. Consequently, we use the objective function of \eqref{problem2} as the loss function of the proposed neural network. This is an unsupervised learning process because, for any parameters $\textbf{C}$, we do not need to manually label the corresponding $\pi^*$.
The optimization of the neural network parameters is accomplished by gradient descent on the loss function, which implies that we must obtain the gradient of $\mathcal{P}(g(\textbf{w}, \textbf{b},\textbf{C})|\textbf{C})$ with respect to $\textbf{w}, \textbf{b}$. Generally speaking, the gradients of explicit loss functions can be obtained directly by back-propagation and automatic differentiation with the TensorFlow toolbox. However, the objective function of \eqref{problem2} has only an implicit expression and cannot be back-propagated directly.
By noting that when updating the network parameters $\textbf{w}, \textbf{b}$, only the numerical value of the gradient is required, we have the following derivation:
\begin{align}
\frac{\partial\mathcal{P}}{\partial \textbf{w}} = \sum_{i=1}^{N} \frac{\partial\mathcal{P}}{\partial p_i}\frac{\partial p_i}{\partial \textbf{w}}, \ \ \ \frac{\partial\mathcal{P}}{\partial \textbf{b}} = \sum_{i=1}^{N} \frac{\partial\mathcal{P}}{\partial p_i}\frac{\partial p_i}{\partial \textbf{b}},
\end{align}
where $p_i$ can be expressed explicitly in terms of $\textbf{w}$ and $\textbf{b}$ and can therefore be back-propagated directly. Our goal is then to obtain the numerical value of the gradient $\frac{\partial\mathcal{P}}{\partial p_i}$, which can be derived as follows:
\begin{align}
&\frac{\partial\mathcal{P}}{\partial p_i} = \lambda\frac{\partial\varDelta^{\mathrm{Ave}}}{\partial p_i} -\frac{1-\lambda}{(T^{\mathrm{Ave}})^2} \frac{\partial T^{\mathrm{Ave}}}{\partial p_i} \\
&\frac{\partial\varDelta^{\mathrm{Ave}}}{\partial p_i} = -\frac{1}{N}\sum_{j=1}^N \left[\frac{1}{(p_j\mu_j)^2}+\frac{1-\xi}{[\xi + (1-\xi)p_j\mu_j]^2}\right]\left(p_j\frac{\partial\mu_j}{\partial p_i}+\mu_j\frac{\partial p_j}{\partial p_i}\right) \\
&\frac{\partial T^{\mathrm{Ave}}}{\partial p_i} = -\frac{1}{N}\sum_{j=1}^N \frac{(1-\xi)^2}{[\xi + (1-\xi)p_j\mu_j]^2} \left(p_j\frac{\partial\mu_j}{\partial p_i}+\mu_j\frac{\partial p_j}{\partial p_i}\right),
\end{align}
where $\frac{\partial\mu_j}{\partial p_i} = (P^{-1}Q)_{ji}$, with $(P^{-1}Q)_{ji}$ denoting the element in row $j$ and column $i$ of the matrix $P^{-1}Q$. $P$ and $Q$ are obtained as follows (see Appendix \ref{appb} for the derivation):
\begin{align}
&P_{ji} = \frac{qB_{ij}p_i^2}{(1+qp_i\mu_i)(1+qp_i\mu_i-B_{ij}p_i)}, j\neq i,\ \ P_{jj} = -\frac{1}{\mu_j} \label{Mji}\\
&Q_{ji} = \frac{B_{ij}}{(1+qp_i\mu_i)(1+qp_i\mu_i-B_{ij}p_i)} \label{Nji}
\end{align}
where $B_{ij}=\frac{1}{1 + D_{ij}}$ and $q = \frac{1-\xi}{\xi}$. The only values not yet derived among the above expressions are $\{\mu_i\}_{i=1}^N$, which can be obtained iteratively by utilizing \eqref{mupi} and \eqref{nu} alternately with the initial value $\{\mu_i\}_{i=1}^N=0$. This iteration can easily be proved to converge because $\{\mu_i\}_{i=1}^N$ is non-decreasing in each iteration.
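A minimal Python sketch of this fixed-point computation of $\{\mu_i\}$ under a given policy $\pi$ (the array \texttt{D} stores the terms $D_{ji}$; names are illustrative):
\begin{verbatim}
import numpy as np

def steady_state_mu(p, rho, D, xi, n_iter=100):
    # Alternate Eq. (mupi) and Eq. (nu), starting from mu_i = 0.
    N = len(p)
    mu = np.zeros(N)
    for _ in range(n_iter):
        nu = xi / (xi + (1.0 - xi) * p * mu)          # buffer non-empty probabilities
        mu_new = np.empty(N)
        for i in range(N):
            factors = 1.0 - p * nu / (1.0 + D[:, i])  # D[j, i] stores D_{ji}
            factors[i] = 1.0                          # exclude j = i
            mu_new[i] = rho[i] * np.prod(factors)
        mu = mu_new
    return mu
\end{verbatim}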
In this way, the implicit expression of problem \eqref{problem2} can be optimized during the neural network training. We randomly generate a large number of i.i.d. D2D layouts as the training set. Specifically, we first generate the locations of $N$ transmitters on the square plane according to a uniform distribution over the plane of size $(0,L)$, and then generate a corresponding receiver for each transmitter at distance $d_i^{TR}\sim U(d_{min},d_{max})$ and an angle uniformly distributed in $[0,2\pi]$. In actual training, the training set is composed of randomly generated D2D layouts and randomly generated $\xi$ and $\lambda$, which constitutes an effectively unlimited number of training samples.
\textcolor{blue}{It is worth noting that the training stage contains the above calculation and naturally requires the specific information of the $N^2$ path losses. However, only the $N$ GLI entries are required to obtain the scheduling policy parameters once the neural network is well trained, since the path-loss information can be computed from the GLI.}
\section{Numerical Results} \label{numerical}
In this section, we evaluate the performance of the proposed deep learning approach. Following \cite{FPLinQ,8664604}, we model the interference channel according to the short-range outdoor channel model ITU-1411 \cite{ITU}, with $5$ MHz bandwidth at $2.4$ GHz carrier frequency. Antenna height and gain are set to $1.5$ meters and $2.5$ dBi respectively. The transmission power of each device $P_{tx}$ is $40$ dBm and the noise power spectral density is $-169$ dBm/Hz. The decoding threshold $\beta_i$ is set to $0$ dB for all links. We consider D2D scenarios where $N$ links are deployed in a 500 meter by 500 meter square region, and the links are randomly distributed in the same way as the training sets are generated, with $d_{min}=2~\mathrm{m}$ and $d_{max}=65~\mathrm{m}$.
For each number of links, we generated $10000$ D2D layouts to train the neural network and $1000$ layouts for testing. As for the configuration of the neural network, the GLI is preprocessed into grids of size $M = 125$, the three convolution filters are all of size $15 \times 15$, and both hidden layers of the fully connected stage have $30$ neurons. The parameters of the neural network are trained separately for different $N$.
For each training step, $32$ D2D layouts are randomly selected from the training set, and $32$ different $\xi$ and $\lambda$ are randomly generated to construct the input data. $\xi$ and $\lambda$ follow a uniform distribution on $(0,1]$. Because the dynamic range of AoI is much larger than that of throughput, we apply an exponential transformation $\lambda^{loss} = 10^{-5(1-\lambda)}$ to the weight used in the loss evaluation, which increases the amount of data for small $\lambda$. We use the Adam optimizer with learning rate $3\times10^{-4}$.
We compare the performance of the proposed neural network algorithm with the following benchmarks:
\begin{itemize}
\item \textbf{Iterative Minimization Algorithm:} The locally optimal algorithm proposed in Section \ref{sp}, run with 20 iterations. This algorithm only works for packet generation rate $\xi=1$. In the figure legend, we use 'Ite. Min.' to represent this algorithm.
\item \textbf{Optimal slotted ALOHA Policy \cite{mankar}:} Each link in the network is activated with the same probability, chosen to minimize the objective function, i.e., $p_i=p^*,\forall i$. In the figure legend, we use 'Opt. ALOHA' to represent this algorithm.
\item \textbf{Locally adaptive slotted ALOHA \cite{yang2021spatiotemporal}:} The activation probability of each link is decoupled via the mass transportation theorem \cite{8186962}, and each link then dynamically adjusts its activation probability by exchanging system information with the other links in its observation window. We set the observation window to infinity to create the same conditions as for the other methods. This algorithm only works for packet generation rate $\xi=1$. In the figure legend, we use 'Adaptive ALOHA' to represent this algorithm.
\end{itemize}
\subsection{Performance of AoI Minimization}
We first evaluate the performance of the deep learning algorithm in the AoI minimization problem where the AoI weight is set to $\lambda=1$. In the figure legend, we use 'DL Algorithm' to represent this algorithm.
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Figure_6.pdf}
\caption{Average $\varDelta^{\mathrm{Ave}}$ performance for $1000$ different D2D layouts as functions of the number of links $N$, where we set $\lambda=1$, $\xi=1$.}
\label{fig7}
\end{figure}
Fig. \ref{fig7} illustrates the average $\varDelta^{\mathrm{Ave}}$ over $1000$ testing layouts under the stationary randomized scheduling decisions obtained by each method for different numbers of links when $\xi=1$ and $\lambda=1$. We observe that the proposed deep learning approach achieves performance very close to the iterative minimization algorithm while requiring neither the computation of path loss nor its much higher computational complexity, which shows that the mapping from GLI to the scheduling decision is correctly learned by the proposed neural network structure. Moreover, the performance advantage of the deep learning algorithm grows with $N$. Our algorithm outperforms Adaptive ALOHA and Opt. ALOHA because the former relies on a loose approximation to decouple the transmission success probabilities between links and the latter cannot adjust the activation probability according to the channel-condition differences caused by different link locations.
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Figure_100.pdf}
\caption{Average $\varDelta^{\mathrm{Ave}}$ performance for $1000$ different D2D layouts as functions of the packet generating rate $\xi$, where we set $\lambda=0$ and $N=100$.}
\label{fig71}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Figure_400.pdf}
\caption{Average $\varDelta^{\mathrm{Ave}}$ performance for $1000$ different D2D layouts as functions of the packet generating rate $\xi$, where we set $\lambda=0$ and $N=400$.}
\label{fig72}
\end{figure}
In Fig. \ref{fig71} and Fig. \ref{fig72}, we compare the overall average AoI $\varDelta^{\mathrm{Ave}}$ achieved by the different methods as a function of the packet generation rate $\xi$. As depicted, the DL algorithm outperforms Adaptive ALOHA and Opt. ALOHA for diverse $\xi$, and its performance superiority increases with the number of links. Moreover, even with the packet-replacement policy, the highest packet generation rate does not perform best, which is more evident for Adaptive ALOHA and Opt. ALOHA. This is because a faster packet generation rate results in more interference, as new packets are almost always available to transmit. Infrequent packet generation reduces the buffer non-empty probability and hence the level of interference. However, as the packet generation rate decreases further, AoI becomes limited by the lack of packets to transmit rather than by cross-link interference. Besides, it is worth noting that our DL algorithm achieves better performance at high packet generation rates than the other methods.
\subsection{Computation Complexity Analysis}
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{computation.pdf}
\caption{Computation time consumed to obtain the activation probabilities for one layout in a log scale.}
\label{fig8}
\end{figure}
Although the numerical results show that the performance of the DL algorithm is slightly inferior to that of the iterative minimization algorithm, the main advantage of the DL algorithm lies in its low computational complexity. We now analyze the computational complexity of each method.
The iterative minimization algorithm needs to solve $N$ one-variable problems in each iteration. The dominating part of the computation for each one-variable problem is the matrix operation on $I_{ij}$ and $D_{ij}$; since the size of the matrix is $N\times N$, the computational complexity of solving each one-variable problem is at least $O(N^2)$. Assuming that the number of iterations is a constant independent of $N$, the computational complexity of the iterative minimization algorithm is $O(N^3)$.
After the neural network is trained, the computation required by the deep learning algorithm to generate scheduling decisions can be divided into the convolution stage and the fully connected stage. The convolution stage requires $M^2 \times(n_1^2+n_2^2+n_3^2)$ operations, where $M$ and $n_i, i=1,2,3$ are the sizes of the GLI grid and the convolution filters, respectively. In the fully connected stage, $N$ parallel fully connected networks are evaluated, each requiring $10m_1m_2$ operations, where $10$ is the length of the input feature vector and $m_1, m_2$ are the numbers of neurons of the two hidden layers in the adopted configuration. Thus, the total computation is $r\times [M^2\times(n_1^2+n_2^2+n_3^2) + N\times 10m_1m_2]$, where $r$ is the number of feedback rounds, and the computational complexity of the deep learning algorithm is $O(N)$.
We record the computation time of the two algorithms for generating scheduling decisions, as depicted in Fig. \ref{fig8}. Although the two methods use GPU and CPU resources respectively and therefore cannot be compared directly, Fig. \ref{fig8} illustrates the significant difference in computational complexity between them. As $N$ increases, the deep learning algorithm achieves near-optimal performance while retaining the advantage of low computational complexity compared with the iterative approach.
\subsection{Performance of Throughput and AoI Jointly Scheduling}
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Figure_5.pdf}
\caption{The Pareto front of overall average AoI $\varDelta^{\mathrm{Ave}}$ and throughput $T^{\mathrm{Ave}}$ obtained by different approaches, where we set $N=300$ and $\xi=1$.}
\label{fig11}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Figure_2.pdf}
\caption{Cumulative distribution of the activation probability $p_i$ for different weight $\lambda$, where we set $N=300$ and $\xi=1$.}
\label{fig112}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Figure_1.pdf}
\caption{Cumulative distribution of the average AoI $\Delta_i$ for different weight $\lambda$, where we set $N=300$ and $\xi=1$.}
\label{fig111}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Figure_4.pdf}
\caption{Cumulative distribution of the average throughput $T_i$ for different weight $\lambda$, where we set $N=300$ and $\xi=1$.}
\label{fig113}
\end{figure}
For the scheduling that considers the trade-off between throughput and AoI, we set the number of links to $N=300$ and $\xi=1$, and use Ite. Min. and Opt. ALOHA as comparisons.
Fig.~\ref{fig11} depicts the Pareto front of $(\Delta^{Ave},T^{Ave})$ obtained by the different methods as $\lambda$ varies. A network employing good scheduling decisions possesses high throughput and low AoI, so curves closer to the top left of the figure represent better performance. The DL algorithm achieves performance comparable to the iterative minimization algorithm. As depicted in the figure, as the weight of throughput increases, i.e., as $\lambda$ tends to zero, a small additional throughput gain results in a larger loss of AoI, and AoI tends to infinity when throughput is maximized.
For a single link, there is no trade-off between AoI and throughput, but for large-scale D2D networks the lowest AoI is obtained at a relatively low throughput. This is because throughput-maximizing scheduling drives the links with good channel conditions to update frequently in order to achieve better network throughput, while the links with poor channel quality are neglected, resulting in very low update frequencies and very high AoI.
Specifically, we illustrate the cumulative distributions of the activation probability $p_i$, the average AoI $\Delta_i$ and the average throughput $T_i$ of the links in Figs. \ref{fig112}-\ref{fig113}, for $N=300$, $\xi=1$ and $\lambda \in \{1, 0.1, 0.01, 1\text{e-}5\}$.
Fig. \ref{fig112} depicts the allocation of activation probabilities under different AoI weights. With a high AoI weight, all links have similar activation probabilities and the distribution is relatively smooth, with fewer links having low activation probabilities than under the other weights. As the AoI weight decreases, the link activation becomes polarized and the cumulative distribution function exhibits a sharp increase.
Fig. \ref{fig111} and Fig. \ref{fig113} depict the distributions of AoI and throughput under different AoI weights. Similar to the distribution of activation probability, a higher AoI weight makes the distribution more concentrated, with fewer links having high AoI and low throughput. This implies that a high AoI weight forces the links with poor channel conditions to obtain more transmission opportunities; even at the cost of some resources, it pays more attention to fairness between links than throughput-maximizing scheduling does.
\section{Conclusion} \label{conclusion}
\textcolor{blue}{In this paper, we considered link scheduling policies aiming at the joint optimization of AoI and throughput for packets transmitted under an LCFS-PR queueing rule in a D2D communication system. We modeled this system as a spatiotemporally interacting queueing system and derived the expressions of the overall average AoI and throughput based on the mean-field approximation under the stationary randomized policy. A multi-objective optimization problem of minimizing AoI and maximizing throughput simultaneously was formulated under this system model and transformed into a single-objective problem by the weighted sum approach. A neural network structure was proposed to directly construct the mapping from GLI to optimal scheduling decisions without estimating CSI, trained by unsupervised learning for an implicit objective function.
Numerical simulations obtain the Pareto front between AoI and throughput using the deep learning algorithm and reveal that 1) the maximum packet generation rate does not minimize AoI, and 2) throughput-based scheduling makes the link activation tend toward polarization, while AoI-based scheduling is fairer.}
\subsubsection*{Complete log-likelihood conditional expectation}
Because of the specific form given in Eq.~\eqref{eq:pZfact}, and because the $\Zb_i\mid T$ are Gaussian, we have that
\begin{linenomath*}
\begin{align}
\label{eq:logpZ.T}
\log{p_\Sigmab}(\Zb \mid T)
& = \sum_{j=1}^p \sum_{i=1}^n \log P(Z_{ij} \mid T)
+ \sum_{(j, k) \in T} \sum_{i=1}^n \log \left(\frac{P(Z_{ij}, Z_{ik})}{P(Z_{ij})P(Z_{ik})}\right) \nonumber \\
& = - \frac{n}2 \sum_{j=1}^p \log \sigma_j^2 -\frac12 \sum_{j=1}^p \sum_{i=1}^n \frac{Z_{ij}^2}{\sigma_j^2}
- \frac{n}2 \sum_{(j, k) \in T} \log (1- \rho_{jk}^2) \\
& \quad - \frac12 \sum_{(j, k) \in T} \frac1{1- \rho_{jk}^2} \sum_{i=1}^n \left(
\rho^2_{jk} \frac{Z_{ij}^2}{\sigma_j^2} + \rho^2_{jk} \frac{Z_{ik}^2}{\sigma_k^2} - 2\rho_{jk} \frac{Z_{ij} Z_{ik}}{\sigma_j \sigma_k}
\right)
+ \text{cst} \nonumber
\end{align}
\end{linenomath*}
where the constant term does not depend on any unknown parameter.
In the EM algorithm, we have to maximize the conditional expectation of Eq.~\eqref{eq:logpZ.T} with respect to the variances $\sigma_j^2$ and the correlation coefficients $\rho_{jk}$. The resulting estimates take the usual forms, but with the conditional moments of the $Z_{ij}$, that is
\begin{linenomath*}
\begin{equation} \label{eq:sigma.rho}
\widehat{\sigma}_j^2 = \frac1n \sum_i \Esp(Z^2_{ij} \mid \Yb),
\qquad
\widehat{\rho}_{jk} = \frac1n \sum_i \Esp(Z_{ij} Z_{ik} \mid \Yb) \left/ (\widehat{\sigma}_j \widehat{\sigma}_k) \right. .
\end{equation}
\end{linenomath*}
Note that these estimates do not depend on $T$.
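A minimal sketch of these M-step updates, with the conditional moments supplied as arrays (\texttt{EZ2[i, j]} $= \Esp(Z_{ij}^2 \mid \Yb)$ and \texttt{EZZ[i, j, k]} $= \Esp(Z_{ij} Z_{ik} \mid \Yb)$; illustrative Python, not the EMtree R implementation):
\begin{verbatim}
import numpy as np

def m_step(EZ2, EZZ):
    # EZ2: (n, p) conditional second moments, EZZ: (n, p, p) cross moments
    sigma2 = EZ2.mean(axis=0)                        # hat sigma_j^2
    sigma = np.sqrt(sigma2)
    rho = EZZ.mean(axis=0) / np.outer(sigma, sigma)  # hat rho_jk
    return sigma2, rho
\end{verbatim}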
The maximized conditional expectation of Eq.~\eqref{eq:logpZ.T} becomes
\begin{linenomath*}
\begin{equation} \label{eq:ElogpZ.T.Y}
\Esp\left(\log p_{\widehat{\Sigmab}}(\Zb \mid T) \mid \Yb\right)
= - \frac{n}2 \sum_{j=1}^p \log \widehat{\sigma}_j^2
- \frac{n}2 \sum_{(j, k) \in T} \log (1- \widehat{\rho}_{jk}^2)
+ \text{cst}.
\end{equation}
\end{linenomath*}
It remains to write the conditional expectation of the first two terms of the logarithm of Eq.~\eqref{eq:PTZY}, once optimized in $\Sigmab$. Combining Eq.~\eqref{eq:pT} and Eq.~\eqref{eq:ElogpZ.T.Y}, and noticing that the probability for an edge to be part of the graph is the sum of the probabilities of all the trees that contain this edge, we get (denoting $\widehat{\psi}_{jk} = (1- \widehat{\rho}_{jk}^2)^{-n/2}$)
\begin{linenomath*}
\begin{align*}
\Esp\left(\log p_\betab (T) + \log p_{\widehat{\Sigmab}}(\Zb \mid T) \mid \Yb\right)
& = \sum_{T \in \Tcal} p(T \mid \Yb) \left(\log p_\betab(T) + \log p_{\widehat{\Sigmab}}(\Zb \mid T) \right) \\
& = -\log B + \sum_{T \in \Tcal} p(T \mid \Yb) \sum_{(j, k) \in T} \left(\log \beta_{jk} +\log \widehat{\psi}_{jk} \right) + \text{cst} \\
& = -\log B + \sum_{(j, k)} \prob\{(j, k) \in T \mid \Yb\} \left(\log \beta_{jk} + \log \widehat{\psi}_{jk} \right) + \text{cst},
\end{align*}
\end{linenomath*}
which gives Eq.~\eqref{expectation}.\\
As explained in the section above, we approximate expectations and probabilities conditional on $\Yb$ by their variational approximation.
This provides us with the approximate conditional distribution of the tree $T$ given the data $\Yb$:
\begin{linenomath*}
$$
\pt(T \mid \Yb) = \left. \prod_{jk \in T} \beta_{jk} \widehat{\psi}_{jk} \right/ C,
$$
\end{linenomath*}
where $C$ is the normalizing constant: $C = \sum_T \prod_{(j, k) \in T} \beta_{jk} \widehat{\psi}_{jk}$. The intuition behind this approximation is the following: according to Eq. \eqref{eq:pT}, the marginal probability of a tree $T$ is proportional to the product of the weights $\beta_{jk}$ of its edges. The conditional probability of a tree given the data is proportional to the same product, with the weights $\beta_{jk}$ updated to $\beta_{jk} \widehat{\psi}_{jk}$, where $\widehat{\psi}_{jk}$ summarizes the information brought by the data about the edge $(j, k)$.
\subsubsection*{Steps E and M}
\begin{description}
\item[E step:]
From the above computation we get the following approximation:
\begin{linenomath*}
$$\mathds{P}(\{j,k\}\in T \mid \Yb) \simeq 1 - \sum_{T:jk\notin T} \pt(T \mid \Yb),$$
\end{linenomath*}
and so we define $P_{jk}$ as follows: \begin{linenomath*}
$$P_{jk}= 1 - \frac{\sum_{T:jk\notin T} \prod_{(j, k) \in T} \beta_{jk} \widehat{\psi}_{jk}}{\sum_T \prod_{(j, k) \in T} \beta_{jk} \widehat{\psi}_{jk}}.$$
\end{linenomath*}
$P_{jk}$ can be computed with Theorem \ref{thm:MTT} (a numerical sketch is given after this description), letting $[\Wb^h]_\jk = \beta^h_\jk \widehat{\psi}_\jk$ and $\Wb^h_{\setminus \jk} = \Wb^h$ except for the entries $(j, k)$ and $(k, j)$, which are set to 0. The modification of $\Wb^h_{\setminus \jk}$ with respect to $\Wb^h$ amounts to setting to zero the weight product, and hence the probability, of any tree $T$ containing the edge $(j, k)$. As a consequence, we get
\begin{linenomath*}
$$
P^{h+1}_\jk = 1 - \left|Q_{uv}^*(\Wb^h_{\setminus \jk})\right| \left/ \left|Q_{uv}^*(\Wb^h)\right| \right..
$$
\end{linenomath*}
\item[M step:] Applying Lemma \ref{lem:Meila} to the weight matrix $\betab$, the derivative of $B$ with respect to $\beta_\jk$ is
\begin{linenomath*}
$$
\partial_{\beta_\jk} B = [\Mb(\betab)]_\jk \times B
$$
\end{linenomath*}
then the derivative of \eqref{expectation} with respect to $\beta_\jk$ is null for
$\beta^{h+1}_\jk = P_\jk^{h+1} \left/ [\Mb(\betab^h)]_\jk \right.$.
\end{description}
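For illustration, the E-step edge probabilities can be computed through the weighted Matrix Tree Theorem by taking ratios of Laplacian cofactors; a minimal Python sketch (not the EMtree R implementation), assuming the cofactor is obtained by deleting the first row and column:
\begin{verbatim}
import numpy as np

def tree_weight_sum(W):
    # Weighted Matrix Tree Theorem: the sum over spanning trees of the product
    # of edge weights equals any cofactor of the Laplacian L = diag(W 1) - W.
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.det(L[1:, 1:])

def edge_probability(W, j, k):
    # P_jk = 1 - (tree weight sum without edge (j,k)) / (total tree weight sum)
    W_no_jk = W.copy()
    W_no_jk[j, k] = 0.0
    W_no_jk[k, j] = 0.0
    return 1.0 - tree_weight_sum(W_no_jk) / tree_weight_sum(W)

# W would hold the products beta_jk * psi_jk (symmetric, zero diagonal).
\end{verbatim}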
\subsection{Simulation results}
\subsubsection{Effect of dataset dimensions}
\begin{table}[ht]
\centering
\begin{tabular}{l|l|rrrrrr}
\multicolumn{2}{l|}{} & \multicolumn{1}{c}{SpiecEasi} & \multicolumn{1}{c}{gCoda} & \multicolumn{1}{c}{ecoCopula} & \multicolumn{1}{c}{MRFcov} & \multicolumn{1}{c}{MInt} & \multicolumn{1}{c}{EMtree} \\
\hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{Easy}}}
& Cluster & 0.86 (0.20) & 0 (0.08) & 0.33 (0.14) & 0.74 (0.06) & 0.38 (0.17) & 0.12 (0.09) \\
& Erdös & 0.86 (0.21) & 0 (0.15) & 0.29 (0.15) & 0.73 (0.05) & 0.38 (0.15) & 0.12 (0.08) \\
& Scale-free & 0.92 (0.04) & 0 (0.04) & 0.33 (0.11) & 0.88 (0.02) & 0.73 (0.09) & 0.34 (0.08) \\ \hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{Hard}}} & Cluster &0.87 (0.12) & 0 (0.20) & 0.15 (0.18) & 0.78 (0.05) & 0.77 (0.09) & 0.36 (0.09) \\
& Erdös. & 0.88 (0.11) & 0 (0.24) & 0 (0.15) & 0.78 (0.05) & 0.77 (0.10) & 0.35 (0.09) \\
& Scale-free & 0.94 (0.05) & 0 (0.13) & 0 (0.16) & 0.89 (0.03) & 0.94 (0.03) & 0.56 (0.07) \\ \hline
\end{tabular}
\caption{Medians and standard-deviation of FDR computed on 100 graphs of each type (\textit{easy}: $n=100, p=20$, \textit{hard}: $n=50, p=30$)}
\label{medFDR}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{r|l|lllllll}
\multicolumn{2}{l|}{} & \multicolumn{1}{c}{SpiecEasi} & \multicolumn{1}{c}{gCoda} & \multicolumn{1}{c}{ecoCopula} & \multicolumn{1}{c}{MRFcov} & \multicolumn{1}{c}{MInt} & \multicolumn{1}{c}{EMtree} \\ \hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{Easy}}} &Cluster & 0.16 (0.11) & 0.05 (0.07) & 1.04 (0.48) & 2.26 (0.58) & 0.30 (0.13) & 0.62 (0.14) \\
& Erdös & 0.15 (0.09) & 0.06 (0.08) & 0.95 (0.50) &2.23 (0.42) & 0.30 (0.14) & 0.58 (0.12) \\
& Scale-free & 0.63 (0.13) & 0.08 (0.07) & 0.92 (0.30) &4.86 (0.44) & 0.81 (0.25) & 0.96 (0.08) \\ \hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{Hard}}} & Cluster & 0.21 (0.08) & 0.02 (0.03) & 0.02 (0.17) & 1.65 (0.33) & 0.68 (0.30) & 0.43 (0.10) \\
& Erdös & 0.21 (0.08) & 0.02 (0.02) & 0.00 (0.18) & 1.56 (0.32)& 0.66 (0.25) & 0.42 (0.10) \\
& Scale-free & 0.61 (0.12) & 0.04 (0.03) & 0.08 (0.24) &3.29 (0.40) & 3.63 (1.08) & 0.94 (0.09) \\
\hline
\end{tabular}
\caption{Medians and standard-deviation of density ratio computed on 100 graphs of each type (\textit{easy}: $n=100, p=20$, \textit{hard}: $n=50, p=30$)}
\label{meddens}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{l|l|rrrrrr}
\multicolumn{2}{l|}{} & \multicolumn{1}{c}{SpiecEasi} & \multicolumn{1}{c}{gCoda} & \multicolumn{1}{c}{ecoCopula} & \multicolumn{1}{c}{MRFcov} & \multicolumn{1}{c}{MInt} & \multicolumn{1}{c}{EMtree} \\ \hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{Easy}}} & Cluster & 1.77 & 13.89 & 1.74 & 0.00 & 0.00 & 0.00 \\
& Erdös & 0.68 & 11.95 & 0.99 & 0.00 & 0.83 & 0.00 \\
& Scale-free & 0.00 & 1.88 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{Hard}}} & Cluster & 0.00 & 14.05 & 23.40 & 0.00 & 0.00 & 0.00 \\
& Erdös & 0.00 & 20.85 & 27.28 & 0.00 & 0.00 & 0.00 \\
& Scale-free & 0.00 & 5.97 & 15.46 & 0.00 & 0.00 & 0.00 \\ \hline
\end{tabular}
\caption{Percentage of empty networks computed on 100 graphs of each type (\textit{easy}: $n=100, p=20$, \textit{hard}: $n=50, p=30$)}
\label{empty}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{lrrrrrr}
& \multicolumn{1}{c}{SpiecEasi} & \multicolumn{1}{c}{gCoda} & \multicolumn{1}{c}{ecoCopula} & \multicolumn{1}{c}{MRFcov} & \multicolumn{1}{c}{MInt} & \multicolumn{1}{c}{EMtree} \\
\hline
Easy & 29.73 (2.00) & ~1.29 (~0.30) & 28.14 (1.46) & 48.7 (2.32) & 138.13 (39.60) & 50.19 (7.81) \\
Hard & 29.38 (1.31) & 40.73 (20.94) & 27.16 (1.30) & 14.1 (0.36) & ~95.27 (46.34) & 23.59 (2.09) \\
\hline
\end{tabular}
\caption{Median and standard-deviation of running times in seconds for scale-free structures, for two dataset dimensions (\textit{easy}: $n=100$, $p=20$, \textit{hard}: $n=50$, $p=30$). }
\label{timeSF}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=10cm]{S_effect.png}
\caption{FDR and density ratio measures of EMtree with varying values of number of sub-samples $S$ (Erdös structure).}
\label{Seffect}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{l|rrrrrr}
$S$ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{20} & \multicolumn{1}{c}{50} & \multicolumn{1}{c}{150} \\ \hline
Easy & 0.66 (0.15) & 1.86 (0.23) & 7.00 (0.81) & 12.29 (1.27) & 29.50 (3.39) & 87.30 (10.36) \\
Hard & 0.45 (0.12) & 1.44 (0.14) & 5.06 (0.78) & 8.97 (0.87) & 23.35 (2.40) & 69.29 (10.83) \\
\hline
\end{tabular}
\caption{Median and standard-deviation running-time values in seconds for inference of Erdös structure with EMtree and different values of the number of sub-samples $S$.}
\label{timesS}
\end{table}
\subsubsection{Effect of network density}
\begin{table}[H]
\centering
\begin{tabular}{l|rr|rr}
& \multicolumn{1}{c}{$n < 50$} & \multicolumn{1}{c}{$n\geq 50$} & \multicolumn{1}{c}{$p < 20$} & \multicolumn{1}{c}{$p\geq 20$} \\ \hline
EMtree & 0.41 (0.11) & 0.60 (0.15) & 0.38 (0.12) & 0.71 (0.21) \\
gCoda & 0.12 (0.47) & 0.07 (0.03) & 0.05 (0.03) & 0.09 (0.06) \\
SpiecEasi & 2.41 (0.25) & 2.41 (0.25) & 2.39 (0.25) & 2.42 (0.25) \\
\hline
\end{tabular}
\caption{Median and standard-deviation of running times for each method in seconds, for the $n$ and $p$ parameters corresponding to Erdös and cluster structures with $5/p$ densities.}
\label{timeDenser}
\end{table}
\subsection{Illustrations}
\subsubsection{Effect of the edge frequency threshold}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{QET_twoDataSets.png}
\caption{Quantity of selected edges as a function of the selection threshold (\textit{left}: Fatala fishes, \textit{right}: oak mildew.)}
\label{QETOak}
\end{figure}
The curves displayed on Fig. \ref{QETOak} are very smooth, which illustrates the difficulty of setting this threshold.
\subsubsection{Fatala River fishes}
\label{names_Baran}
\paragraph{Species names with highest betweenness scores:}
13: Galeoides decadactylus, 17: Ilisha africana, 19: Liza grandisquamis, 22: Pseudotolithus brachygnatus, 25: Pellonula leonensis, 30: Pseudotolithus typus.
\subsubsection{Fish populations in the Fatala River estuary}
\label{barans}
Networks on Fig.~\ref{baransNets} suggest a predominant role of the \textit{site} covariate compared to the \textit{date}. Indeed, adjusting for the \textit{site} results in much sparser networks (Fig.~\ref{QETOak} in appendix). It deeply modifies the network structure: the \textit{site} network has 12 new edges and only 6 in common with the \textit{null} network. Besides, the highlighted nodes only change when introducing the \textit{site} covariate. This suggests that the environmental heterogeneity between the sites has a major effect on the \modif{variations of species abundances}, while the effect of the date of sampling is moderate.
\begin{figure}[H]
\includegraphics[width=\linewidth]{BaransNets.png}
\caption{Interaction networks of Fatala River fishes inferred when adjusting for none, both or either one of the covariates among \textit{site} and \textit{date}. Highlighted nodes spot the highest betweenness centrality scores. Widths are proportional to selection frequencies. $S=100$, $f'=90\%$. }
\label{baransNets}
\end{figure}
\subsubsection{Oak powdery mildew}
\label{oak}
When providing the inference with more information (tree status, distances), the structure of the resulting network is significantly modified. Nodes with high betweenness scores differ from one model to another. There is an important gap in density between the \textit{null} model and the others, starting from a $25\%$ selection threshold (Fig.~\ref{QETOak} in appendix). From a more biological point of view, the features of the pathogen node are greatly modified too: its betweenness score is among the smallest in the \textit{null} network (quantile $16\%$), and among the highest in the two other networks (quantiles $93\%$ and $96\%$). Its connections to the other nodes vary as well.
\modif{
Accounting for covariates results in less interactions with the pathogen but a greater role of the latter in the pathobiome organization.
}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{OakProbNets.png}
\caption{Pathogen interaction networks on oak leaves inferred with EMtree when adjusting for none, the \textit{tree} covariate or \textit{tree} and distances. Bigger nodes represent OTUs with highest betweenness values, colors differentiate fungal and bacterial OTUs. Widths are proportional to selection frequencies. $S=100$, $f'=90\%$ . }
\label{oakNets}
\end{figure}
\modif{Using the dataset restricted to infected samples (39 observations for 114 OTUs) and correcting for the leaves position in the tree (proxy for their abiotic environment), \citet{jakuch} identifies a list of 26 OTUs likely to be directly interacting with the pathogen. Running EMtree on the same restricted dataset with the same correction yields a good concordance with edge selection frequencies, as shown in Fig.~\ref{otujak}.}
\begin{figure}[H]
\centering
\includegraphics[width=9.5cm]{EAneighbors.png}
\caption{EMtree selection frequencies of pathogen neighbors compared to \citet{jakuch} results, computed on infected samples and adjusting for the leaf position (100 sub-samples).}
\label{otujak}
\end{figure}
\section*{Notations} \input{notations}
\newpage
\section{Introduction}
\input{introduction}
\section{Material and methods}
\subsection{Model} \label{sec:model} \input{model}
\subsection{Inference with EMtree} \label{sec:inference} \input{inference}
\subsection{Simulation and illustrations} \input{simulation}
\section{Results}
\subsection{On simulated data} \input{results} \label{sec:simul}
\subsection{Illustrations} \input{illustration} \label{sec:illustration}
\section{Discussion} \input{discussion}
\paragraph{Data accessibility.}
The method developed in this paper is implemented in the R package EMtree available on GitHub: \url{https://github.com/Rmomal/EMtree}.
\paragraph{Acknowledgement.}
The authors thank P. Gloaguen and M. Authier for helpful discussions and C. Vacher for providing the oak data set. This work is partly funded by ANR-17-CE32-0011 NGB and by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH.
\paragraph{Author's contributions statement.}
All authors conceived the ideas and designed methodology; R. Momal developed and tested the algorithm. All authors led the writing of the manuscript, contributed critically to the drafts and gave final approval for publication.
\bibliographystyle{apsrev}
\subsubsection{Effect of dataset dimensions}
\label{adverse}
Behaviors are compared in an easy setting ($n=100$, $p=20$) and a hard setting ($n=50$, $p=30$). Fig.~\ref{TPFN} displays the FDR and density ratio measures for all methods in the different cases. Detailed medians and standard deviations are given in Tables \ref{medFDR} and \ref{meddens}. \modif{The behaviour of the methods remains virtually the same across Erdös and Cluster structures. The Scale-free structure appears to entail a greater difficulty for all methods except ecoCopula: in easy cases the FDR increases by about $15\%$ for SpiecEasi, MRFcov and EMtree, and by about $35\%$ for MInt.\\
The harder setting affects all methods. The standard deviation of gCoda increases by $10\%$. MRFcov, EMtree and MInt show an increase in FDR of about $5\%$, $20\%$ and $30\%$ respectively. Density ratios decrease overall, especially for ecoCopula, whose ratio is close to 0 and which yields a proportion of empty networks of $15$-$25\%$ (Table \ref{empty})}.
\modif{Considering FDRs and density ratios together, EMtree appears to be the method with the lowest FDR that maintains a density ratio reasonably close to 1. As a consequence, the proposed methodology compares well to existing tools on problems of varying difficulty. EMtree is also comparable in running times: Table~\ref{timesTPFN} shows that for Erdös and Cluster structures it is the third fastest method in easy cases and the second fastest in hard ones. Table \ref{timeSF} (in appendix) shows that on scale-free problems EMtree is the second fastest method in hard cases, while it is curiously slow on easy ones.}
Interestingly, even in easy cases where the network density is well estimated, the methods yield median FDRs of $10\%$-$30\%$. This is a reminder that network inference from abundance data is a difficult task, and that perfect inference of the network remains out of reach.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{panel_TPFN_signed.png}
\caption{FDR and density ratio measures for all methods at two different difficulty levels and 100 networks of each type. White squares and black plain lines represent medians and quartiles respectively. \small{\textit{ecoCopula selection method: AIC. Number of subsamples for SpiecEasi and EMtree: $S=20$. SpiecEasi and gCoda: $lambda.min.ratio=0.001$, $nlambda=100$.}}}
\label{TPFN}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{l|rrrrrr}
& \multicolumn{1}{c}{SpiecEasi} & \multicolumn{1}{c}{gCoda} & \multicolumn{1}{c}{ecoCopula} & \multicolumn{1}{c}{MRFcov} & \multicolumn{1}{c}{MInt} & \multicolumn{1}{c}{EMtree} \\
\hline
Easy & 25.45 (1.87) & 0.11 (0.06) & 5.55 (0.64) & 34.51 (3.68) & 43.04 (19.76) & 11.72 (1.89) \\
Hard & 28.43 (1.30) & 0.53 (0.25) & 9.6 (0.65) & 8.29 (0.36) & 33.77 (18.20) & 8.17 (0.50) \\
\hline
\end{tabular}
\caption{Median and standard-deviation running-time values (in seconds) for Cluster and Erdös structures, including resampling with $S=20$ for SpiecEasi and EMtree.}
\label{timesTPFN}
\end{table}
\subsubsection{Effect of network structure}
As expected for a fixed $p$, the higher the number of observations $n$, the better the performance for all methods and structures. Interestingly, the same happens when $p$ increases for a fixed $n=100$ (except for SpiecEasi).
EMtree performs well on Scale-free structures (Fig.~\ref{SFAUC}), as expected, whereas the performance of the other methods worsens compared with the other structures. When lowering $n$ to 30, the performance of EMtree deteriorates as $p$ increases, yet remains above $70\%$ in median in the extreme case where $p=n$ (Fig.~\ref{SFAUC}, right). Whether the structure is Erdös or Cluster, each method is affected in the same way by an increase of $n$ or $p$ (Fig.~\ref{panelErdClust}). Besides, increasing the difference between the two structures by tuning up the \textit{ratio} parameter has no effect. Overall, EMtree performs better than gCoda and SpiecEasi in all the studied configurations. Running times are summarized in Table~\ref{timeNP}: EMtree is about 10 times slower than gCoda (4 times for small $n$), and 4 times faster than SpiecEasi. The high standard deviation for small $n$ seems to be due to gCoda struggling with Scale-free structures.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{panel_SF.png}
\caption{Effect of Scale-free structure on AUC medians and inter-quartile intervals for parameters $n$ and $p$.}
\label{SFAUC}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{l|rr|rr}
& \multicolumn{1}{c}{$n < 50$} & \multicolumn{1}{c}{$n\geq 50$} & \multicolumn{1}{c}{$p < 20$} & \multicolumn{1}{c}{$p\geq 20$} \\
\hline
EMtree & 0.44 (0.14) & 0.60 (0.17) & 0.41 (0.13) & 0.76 (0.21) \\
gCoda & 0.11 (26.8) & 0.05 (0.05) & 0.05 (0.04) & 0.09 (0.54) \\
SpiecEasi & 2.09 (0.26) & 2.37 (0.28) & 2.42 (0.27) & 2.42 (0.26) \\
\hline
\end{tabular}
\caption{Median and standard-deviation of running times for each method in seconds, for $n$ and $p$ parameters.}
\label{timeNP}
\end{table}
\subsubsection{Effect of network density}
The comparison of the top and bottom panels of Fig.~\ref{panelErdClust} shows that network inference gets harder as the network gets denser, whatever the method and the structure of the true graph.